U.S. Patent No. 11,857,788
DETAILED DESCRIPTION
The present invention relates to controlling (treating and/or preventing) bleeding in a patient. More specifically, this disclosure is related to apparatuses (devices, systems, and methods) for controlling bleeding and controlling (reducing) bleed time in a patient through neural stimulation, such as through electrical and/or mechanical and/or other stimulation of both (e.g., simultaneously) the trigeminal nerve and the vagus nerve. Controlling bleeding may include preventing and/or treating bleeding (e.g., surgical bleeding, traumatic bleeding, bleeding related to childbirth, bleeding related to other medical procedures or conditions, bleeding mediated or increased by anticoagulants, inherited or acquired bleeding disorders such as hemophilia, and so forth). “Treatment” as used herein includes prophylactic and therapeutic treatment. “Prophylactic treatment” refers to treatment before a condition (e.g., bleeding, an inflammatory condition, etc.) is present, to prevent, inhibit or reduce its occurrence. As used herein, a patient or subject may be any animal, preferably a mammal, including a human, but can also be a companion animal (e.g., a cat or dog), a farm animal (e.g., a cow, a goat, a horse, a sheep) or a laboratory animal (e.g., a guinea pig, a mouse, a rat), or any other animal. “Bleed time” or “bleeding time” as used herein refers to the length of time it takes for bleeding to stop. In general, it is controlled or influenced by how well blood platelets work to form a platelet plug. Bleed time is generally increased by the administration of anticoagulants, such as aspirin, heparin, and warfarin. As used herein, the terms “reduce” or “reducing,” when referring to bleed time in a subject, encompass at least a small but measurable reduction in bleed time over non-treated controls. Reduction may be at least 5%, at least 10%, at least 20%, at least 30%, at least 40%, at least 50%, at least 60%, more than 60%, or any value in between these ranges. For example, a value within these ranges may be chosen so as to use a protocol or apparatus configured to reduce bleeding while minimizing side effects due to applied trigeminal and vagus nerve stimulation. The nervous system controls nearly every cell and organ in the body through electrical signals carried by nerves. Such electrical connections allow the nervous system to monitor for tissue injury and then to initiate a healing process. Described herein are apparatuses and methods configured for harnessing such electrical connections via targeted electrical nerve stimulation to effectively treat a variety of conditions. Combined vagus and trigeminal nerve stimulation (VNS/TNS) as described herein is a method to reduce bleeding or bleed time following tissue injury or other bleeding event. Combined vagus and trigeminal nerve stimulation (VNS/TNS) as described herein may be non-invasive or minimally invasive. In some examples, VNS/TNS may be a non-invasive or minimally invasive method to activate the vagus nerve and the previously described Neural Tourniquet. The combination of vagus nerve stimulation and trigeminal nerve stimulation may reduce the amount of one or both of vagus and trigeminal nerve stimulation necessary for robust reduction of bleed time.
“Combined” vagus and trigeminal nerve stimulation (“VNS/TNS”) may refer to the simultaneous (e.g., at the same time), overlapping or near-overlapping (e.g., within about 10 seconds or less, e.g., within 9 sec or less, 8 sec or less, 7 sec or less, 5 sec or less, 2 sec or less, 1 second or less, 0.5 seconds or less, etc.) vagus and trigeminal stimulation. “Non-invasive stimulation” typically means stimulation that does not require surgery, exposure of the nerve fiber, or direct contact with the nerve fiber. As used herein, “non-invasive stimulation” also does not include administration of pharmacological agents. For example, non-invasive trigeminal nerve stimulation can be achieved by mechanical (e.g., vibration) or electrical (e.g., electromagnetic radiation) means applied externally to the subject. Similarly, non-invasive vagus nerve stimulation may be achieved, for example, by electrical or mechanical (e.g., vibration) stimulation applied externally (e.g., to the auricular region of the ear, over the auricular branch of the vagus nerve). Although in some examples a non-invasive or minimally invasive approach as described herein may be used in conjunction with a pharmacological approach (e.g., for an additive or a synergistic benefit), in general an approach described herein may be more efficacious, safer, and less costly than traditional pharmacological therapies. Advantages of this method over pharmacological approaches may include higher specificity, fewer side effects, lower costs, and improved compliance. Advantages over implantable pulse generators for chronic nerve stimulation applications may include avoidance of surgery and associated complications, both for the initial procedure and for subsequent procedures for battery changes, and lower costs. The trigeminal nerve (cranial nerve V) is the largest of the cranial nerves, and has three different branches or nerve distributions (V1, V2, V3; also referred to as the ophthalmic nerve, maxillary nerve and mandibular nerve, respectively) that converge on the trigeminal ganglion. The trigeminal nerve is paired and present on both sides of the body. The trigeminal nerve relays sensory (and motor) information from the head and face. Trigeminal nerve stimulation (TNS) is thought to activate multiple structures in the brain and brainstem, such as the locus coeruleus (LC) and nucleus tractus solitarius (NTS). FIG. 1 shows a schematic of the different skin regions corresponding to the different branches of the trigeminal nerve. The vagus nerve (cranial nerve X) is the longest of the cranial nerves, extending from the brainstem down into the peritoneal cavity. The vagus nerve is the main parasympathetic output of the autonomic nervous system, and interfaces with nearly every organ of the thorax and abdomen, including the heart, lungs, liver, and spleen. Vagus nerve stimulation (VNS) is clinically approved for the treatment of medically refractory epilepsy and depression. Activation of the LC and NTS appears important to the antiepileptic effects of VNS. To date, more than 100,000 patients have received VNS. Technological advances may allow for nerve stimulation without surgical implantation of a pulse generator. For example, transcutaneous auricular stimulation demonstrates anticonvulsive effects similar to invasive VNS. Direct electrical stimulation of the cervical vagus nerve significantly shortens the duration of bleeding and decreases total blood loss during tissue trauma in swine.
Rotational thromboelastography (RoTEG) revealed that VNS significantly shortens the reaction (r) time of blood to initiate clot formation. Moreover, VNS significantly increases thrombin generation at the injury site, whereas systemic thrombin production remains unchanged. Taken together, VNS improves hemostasis by accelerating clot formation specifically at the site of tissue injury. As described herein, VNS/TNS (combined vagus and trigeminal stimulation) may include activating the trigeminal nerve (e.g., by electrical or mechanical or other stimulation) and activating the vagal nerve directly. For example, the vagus nerve may be activated directly in combination with trigeminal nerve activation. Thus a step of controlling bleeding or activating the trigeminal nerve may include a step of directly activating the vagal nerve. Activating the trigeminal nerve may include activating the cholinergic anti-inflammatory pathway and/or any other steps to control bleeding or bleed time in a subject as described in U.S. Pat. No. 8,729,129, while concurrently directly stimulating the vagus nerve. The vagal nerve may be activated either directly or indirectly. In some particular examples, vagus nerve-mediated reduction of bleed time may be activated safely and efficaciously through stimulation of the trigeminal nerve and the vagus nerve, utilizing precise and specific electrical stimulation parameters. Trigeminal nerve and vagus nerve stimulation may improve hemostasis via accelerated clot formation, such as at the site of tissue injury. This may lead to less blood loss and a shorter duration of bleeding following tissue trauma and hemorrhage. FIG. 1 shows one example of a schematic for a system configured for the combined stimulation of the vagus and trigeminal nerves. In FIG. 1, the system may include a controller 101 that may include control logic and/or circuitry for driving combined stimulation of the vagus nerve using a vagus nerve stimulation output 107 and a trigeminal nerve stimulation output. In FIG. 1, the trigeminal nerve is shown with three alternative stimulation outputs 105, 109, 113. One or more branches of the trigeminal nerve may be stimulated by the system; for example, in FIG. 1, the V1 branch of the trigeminal nerve may be mechanically or electrically stimulated by a stimulation output 105 of the system. The V2 branch of the trigeminal nerve may be mechanically or electrically stimulated by a stimulation output 113 of the system. The V3 branch of the trigeminal nerve may be mechanically or electrically stimulated by a stimulation output 109 of the system. For example, one or more electrodes configured to contact the patient's skin over the V1, V2 or V3 branch of the trigeminal nerve may be included. Electrical stimulation may be applied (e.g., pulsed electrical stimulation of between 1-4 kHz at a current of between 0.1 mA and 100 mA). Similarly, the system may include a vagus nerve stimulator 107 that may be connected to the controller 101 to drive stimulation of the vagus nerve. The vagus nerve stimulator may be a mechanical stimulator (e.g., configured to apply mechanical force/pressure to the vagus nerve from outside of the body, e.g., by applying against the patient's ear) or an electrical stimulator. For example, an electrical vagus nerve stimulator may apply electrical stimulation from one or more electrodes on the surface of the patient's skin (e.g., the auricular region of the ear).
In some variations the electrode may be one or more tissue penetrating electrodes (e.g., needles) inserted into the skin. For either the vagus or trigeminal stimulator, the apparatus may include a patch (e.g., patch electrode) for contacting part of the body (e.g., head, ear, face, etc.) of a subject and delivering a pulse, and a stimulator for providing an electrical stimulus to be delivered through the patch. Any appropriate electrical or mechanical stimulation may be applied. For example, when applying electrical stimulation to the trigeminal nerve (e.g., through the face), a voltage stimulus (e.g., between 0.2 V and 5 V, at between 0.1-50 Hz, between 0.1 ms and 5 ms pulse width, monophasic and/or biphasic) may be applied for a duration of x minutes (e.g., where x is 2 min, 5 min, 10 minutes, 20 minutes, or 30 minutes, etc.). Vagus nerve stimulation may be applied at approximately or exactly the same time. For example, one complete operational cycle (“dose”) may include 0.2 V-5 V monophasic pulses (e.g., sinusoidal, rectangular, etc. pulses) for a burst duration that is continuous or repeating, with pulses having a duration of between 0.1 ms and 10 ms (e.g., 2 milliseconds). This cycle may be repeated at a repetition rate of between about 0.1 Hz and 1000 Hz (e.g., 30 Hz) for a treatment duration of between 1 min and 40 min (e.g., 10 minutes, 20 minutes, or 30 minutes, etc.). Concurrently, stimulation of the vagus nerve may be applied, e.g., through the ear. For example, stimulation of between about 0.1-10 V, 0.1-10 mA, pulsed, e.g., rectangular pulses, for a burst duration that is continuous or repeating, with pulses having a duration of between 0.1 ms and 10 ms (e.g., 2 milliseconds). This cycle may be repeated at a repetition rate of between about 0.1 Hz and 1000 Hz (e.g., 30 Hz) for a treatment duration of between 1 min and 40 min (e.g., 10 minutes, 20 minutes, or 30 minutes, etc.). The pattern of concurrent stimulation for the vagus and trigeminal nerves may be arranged in a variety of different ways. For example, FIGS. 2A-2H illustrate variations of the combined vagus and trigeminal stimulation. In FIG. 2A, the vagus and trigeminal nerves are stimulated at the same time (e.g., same start and stop). This stimulation may be identical in frequency (e.g., pulsing, etc.) and/or intensity (e.g., amplitude, burst duration, etc.). For example, in variations in which both the vagus and trigeminal nerves are stimulated electrically by pulsed electrical stimulation, the stimulation may occur at the same time, as shown (e.g., having the same start/stop). Alternatively, in some variations combined vagus nerve and trigeminal nerve stimulation to reduce bleed time may include first stimulating the trigeminal nerve followed by stimulation of the vagus nerve, or first stimulating the vagus nerve, followed by stimulation of the trigeminal nerve, as shown in FIG. 2A. This alternating stimulation may be repeated for the entire dose duration. In FIG. 2A there is no significant gap between the vagus stimulation and the trigeminal stimulation; in some variations, as shown in FIG. 2D, the combined vagus/trigeminal stimulation includes a gap 217 between the vagus and trigeminal nerve stimulation. As mentioned, this gap may be less than a few seconds (e.g., 10 seconds or less, 9 seconds or less, 8 seconds or less, 7 seconds or less, 6 seconds or less, 5 seconds or less, 4 seconds or less, 3 seconds or less, 2 seconds or less, 1 second or less, 0.5 seconds or less, etc.).
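For illustration only, the example dose parameters described above (pulse amplitude, pulse width, repetition rate, and treatment duration for concurrent trigeminal and vagus channels) can be captured as a simple parameter set. The following Python sketch uses hypothetical names and example values drawn from the ranges given, and is not an implementation of any particular controller described herein.

```python
from dataclasses import dataclass

@dataclass
class ChannelDose:
    """Illustrative parameter set for one stimulation channel (names are hypothetical)."""
    amplitude_v: float      # pulse amplitude, volts (e.g., 0.2-5 V)
    pulse_width_ms: float   # pulse duration, milliseconds (e.g., 0.1-10 ms)
    rate_hz: float          # pulse repetition rate (e.g., about 30 Hz)
    duration_min: float     # total treatment duration (e.g., 1-40 min)

    def pulse_count(self) -> int:
        """Total pulses delivered over the treatment duration."""
        return int(self.rate_hz * self.duration_min * 60)

# Example "dose": concurrent trigeminal and vagus channels using values from the ranges above.
trigeminal = ChannelDose(amplitude_v=2.0, pulse_width_ms=2.0, rate_hz=30.0, duration_min=10.0)
vagus = ChannelDose(amplitude_v=1.0, pulse_width_ms=2.0, rate_hz=30.0, duration_min=10.0)

if __name__ == "__main__":
    print(f"Trigeminal: {trigeminal.pulse_count()} pulses; Vagus: {vagus.pulse_count()} pulses")
```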
The vagus/trigeminal nerve stimulation may therefore alternate and may be repeated for the entire dose duration. Alternatively, in some variations, as shown in FIG. 2C, the combined vagus and trigeminal stimulation may include overlapping stimulation 215 of the trigeminal and vagus nerves, as shown. In any of these variations, vagus nerve stimulation may begin before trigeminal nerve stimulation (as shown) or, in some variations, trigeminal nerve stimulation may begin before vagus nerve stimulation. In some variations, either vagus nerve stimulation or trigeminal nerve stimulation may be intermittent and overlap with constant stimulation of the trigeminal nerve (when vagus stimulation is intermittent) or the vagus nerve (when trigeminal stimulation is intermittent). In FIG. 2E the trigeminal nerve is stimulated continuously (although this may include pulses or bursts of pulses) while the vagus nerve stimulation is intermittent (e.g., turned “on” and “off” with an intermittence frequency) during the dose duration. In some variations, as shown in FIGS. 2F-2H, combined vagus and trigeminal nerve stimulation to reduce bleeding (e.g., reduce bleed time) may include both vagus nerve stimulation and trigeminal nerve stimulation being pulsed on/off at the same or different frequencies. In FIG. 2F, the vagus nerve stimulation may be performed at an on/off frequency (intermittence frequency) that is different than the trigeminal nerve stimulation frequency; in this example the vagus nerve stimulation has a duty cycle of approximately 50%, while the trigeminal nerve stimulation has a duty cycle of >50% (e.g., >60%, approximately 75%). The vagus nerve stimulation may partially overlap with the trigeminal nerve stimulation during the dose duration, or may not. In FIG. 2G the combined vagus and trigeminal stimulation to reduce bleeding may include alternating periods of vagus and trigeminal stimulation in which either the vagus nerve stimulation is on for longer than the trigeminal nerve stimulation or the trigeminal nerve stimulation is on for longer than the vagus nerve stimulation (as shown in FIG. 2G). In FIG. 2H, both trigeminal and vagus nerve stimulation are on for the same duration, and the trigeminal and vagus nerve stimulation ‘on’ times overlap. In general, the non-invasive stimulation described herein may be non-invasive electrical stimulation applied at a predetermined range of intensities and frequencies. However, other types of non-invasive stimulation may also be used (e.g., non-invasive mechanical stimulation), and stimulation can also be minimally invasive, subcutaneous stimulation. Non-invasive stimulation may be performed by one or more electrodes or actuators that do not contact the nerve. Electrical stimulation may be in the range of 10 mV to 5 V at a frequency of 0.1 Hz to 100 Hz, with a stimulus duration of from 1 ms to 10 min. Mechanical stimulation may be oscillatory, repeated, pulsatile, or the like. In some variations the non-invasive stimulation may be the repeated application of a mechanical force against the subject's skin at a predetermined frequency for a predetermined period of time. For example, the non-invasive mechanical stimulation may be a mechanical stimulation with a spectral range from 50 to 500 Hz, at an amplitude that ranges between 0.0001-5 mm displacement. The temporal characteristics of the mechanical stimulation may be specific to the targeted disease. In some variations the frequency of stimulation is varying or non-constant. The frequency may be varied between 50 and 500 Hz. In some variations the frequency is constant.
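As a minimal sketch of the intermittent on/off patterns just described (e.g., the roughly 50% and 75% duty cycles of FIG. 2F), the following hypothetical Python snippet generates the “on” windows for each channel from an assumed intermittence period, duty cycle, and dose duration; the period and values are illustrative only, not parameters prescribed by this disclosure.

```python
def on_windows(period_s: float, duty_cycle: float, total_s: float):
    """Yield (start, stop) 'on' windows for a channel pulsed at a given intermittence period and duty cycle."""
    on_s = period_s * duty_cycle
    t = 0.0
    while t < total_s:
        yield (t, min(t + on_s, total_s))
        t += period_s

# Example resembling FIG. 2F: vagus at ~50% duty cycle, trigeminal at ~75%, over a 60 s dose.
vagus_windows = list(on_windows(period_s=10.0, duty_cycle=0.50, total_s=60.0))
trigeminal_windows = list(on_windows(period_s=10.0, duty_cycle=0.75, total_s=60.0))
print("vagus on:", vagus_windows)
print("trigeminal on:", trigeminal_windows)
```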
In general the frequency refers to the frequency of the pulsatile stimulation within an “on period” of stimulation. Multiple stimulation periods may be separated by an “off period” extending for hours or even days, as mentioned above. The force with which the mechanical stimulation is applied may also be constant, or it may be variable. Varying the force and/or frequency may be beneficial to ensure that the mechanical stimulation is effective during the entire period of stimulation, particularly if the effect of non-invasive stimulation operates at least in part through mechanoreceptors such as the rapidly adapting Pacinian corpuscles. In performing any of the therapies described herein, the non-invasive stimulation may be scheduled or timed in a specific manner. For example, a period of stimulation (“on stimulation”) may be followed by a period during which stimulation is not applied (“off period”). The off period may be much longer than the on period. For example, the off period may be greater than an hour, greater than two hours, greater than four hours, greater than 8 hours, greater than 12 hours, greater than 24 hours, or greater than 2 days. The on period is the duration of a stimulation (which may include a frequency component), and may be less than 10 minutes, less than 5 minutes, less than 2 minutes, less than 1 minute, etc. The ratio of the on period to the off period may partially determine the duty cycle of stimulation. In some examples, either one (e.g., left or right) of the two paired trigeminal nerves may be activated (e.g., unilateral activation). In some examples, the paired trigeminal nerves may both be activated in a subject (e.g., bilateral activation). In some examples, part or all of the trigeminal nerve may be activated. For example, any one, two or three of the three different branches or nerve distributions (V1, V2, V3; also referred to as the ophthalmic nerve, maxillary nerve and mandibular nerve, respectively) may be activated. In some examples, sensory fibers of the trigeminal nerve are stimulated. Additionally, the trigeminal ganglion may also or instead be stimulated. Additionally or instead, associated neurons that are connected to the trigeminal nerve may be stimulated. Stimulation may be performed using one or more patches, each containing one or more electrodes (e.g., an array of 2, 3, 4, 5, 10, or more electrodes), configured to cover part of the body (e.g., cheek, forehead, head, neck, nose, scalp, etc.) in a position sufficient to provide stimulation to one or more parts of a trigeminal nerve. Stimulation may alternatively be performed using one or more electrodes (e.g., 1, 2, 3, 4, 5, 10, or more electrodes) configured to be placed under the skin, such as in a muscle. Also described herein are apparatuses (devices, systems, and methods) for activating the trigeminal nerve and the vagal nerve. In some embodiments, both the trigeminal nerve and the vagal nerve may be directly activated (e.g., by electrical, mechanical or other stimulation such as magnetic, thermal, etc.). Further, in some variations, the trigeminal stimulation described herein may not activate the dive reflex. The dive reflex in general can be activated, for example, by submerging the body in cold water (and holding the breath), wherein the body overrides basic homeostatic functions. The dive reflex is a physiological adaptation that regulates respiration, heart rate, and arterial blood pressure in a particular way.
Although all mammals control breathing, heart rate, and arterial blood pressure during their lives, these controls are strongly altered during diving and activation of the dive reflex. In general, trigeminal stimulation parameters may be chosen so as to not activate the dive reflex (e.g., trigeminal stimulation without inducing a dive reflex). Failure to induce a dive reflex may be failure to invoke a percentage change in heart rate and/or respiration and/or arterial blood pressure by more than a predetermined amount. For example, failure to induce a dive reflex may be failure to reduce one or more of heart rate and/or respiration and/or arterial blood pressure by greater than about 2%, 5%, 7%, 10%, 15%, 20%, 25%, 30%, 40%, etc. The apparatuses and methods described herein may be suitable for therapeutically or prophylactically treating subjects suffering from, or at risk of suffering from, unwanted bleeding from any cause such as: bleeding disorders including but not limited to afibrinogenemia, Factor II deficiency, Factor VII deficiency, fibrin stabilizing factor deficiency, Hageman Factor deficiency, hemophilia A, hemophilia B, hereditary platelet function disorders (e.g., Alport syndrome, Bernard-Soulier Syndrome, Glanzmann thrombasthenia, gray platelet syndrome, May-Hegglin anomaly, Scott syndrome, and Wiskott-Aldrich syndrome), parahemophilia, Stuart-Prower Factor deficiency, von Willebrand disease, thrombophilia, or acquired platelet disorders (such as those caused by common drugs, e.g., antibiotics, anesthetics, and blood thinners, and those caused by medical conditions such as chronic kidney disease, heart bypass surgery, and leukemia), childbirth, injury, menstruation, and surgery. Unwanted bleeding treated using any of the apparatuses or methods described herein may include an internal hemorrhage or an external hemorrhage. An internal hemorrhage includes a hemorrhage in which blood is lost from the vascular system inside the body, such as into a body cavity or space. An external hemorrhage includes blood loss outside the body.
EXAMPLES
FIG. 3A illustrates one example of a combined trigeminal and vagus nerve stimulator for treating bleeding (e.g., for reducing bleed time) as described. In FIG. 3A, the apparatus includes a housing that is configured or adapted to fit over, behind and at least partially into the auricular region of the patient's ear. The housing may include an ear retainer 312 for holding the device in/on the ear 360, and may at least partially enclose a controller (e.g., control circuitry, a battery, power control circuitry, waveform generator, a trigeminal stimulation drive and vagus stimulation drive). The apparatus also includes a vagus stimulator 307 that is coupled to the housing in this example, to be applied against the patient's ear. A connector (e.g., cable, wire, etc.) connects a trigeminal stimulator 308 that may be worn on the patient's face (e.g., in the V1, V2 and/or V3 region, as shown in FIG. 1). The controller may be connected (via a wired or wireless connection) to a user interface that may control starting/stopping of the dose, or in some variations the housing may include a control (e.g., button, dial, etc.). The dose may be preprogrammed into the controller and/or it may be adjusted.
FIG. 3B shows another example of a combined trigeminal and vagus nerve stimulator for treating bleeding (e.g., for reducing bleed time) as described. In FIG. 3B, the apparatus includes a housing that is configured or adapted to fit at least partially into the patient's ear, as shown.
The housing may be held in the ear 360, and may include a foam or other expandable material to help secure it in place. Alternatively a separate retainer may be used to hold it in/on the ear (not shown). The housing may at least partially enclose a controller (e.g., control circuitry, a battery, power control circuitry, waveform generator, a trigeminal stimulation drive and vagus stimulation drive). The apparatus may also include a vagus stimulator 307 that is coupled to the housing in this example, to be applied against the patient's ear. A connector (e.g., cable, wire, etc.) connects a trigeminal stimulator 308 that may be worn on the patient's face (e.g., in the V1, V2 and/or V3 region, as shown in FIG. 1). The controller may be connected (via a wired or wireless connection) to a user interface that may control starting/stopping of the dose, or in some variations the housing may include a control (e.g., button, dial, etc.). The dose may be preprogrammed into the controller and/or it may be adjusted.
FIG. 3C is another example of a combined trigeminal and vagus nerve stimulator for treating bleeding (e.g., for reducing bleed time) as described. In FIG. 3C, the apparatus includes an ear retainer 312 that is configured or adapted to fit at least partially over the patient's ear, as shown. The retainer holds the device over the patient's ear 360, so that the vagus stimulator 307 is in contact with the region of the ear over the vagus nerve. The retainer also holds the controller 302 and may be formed of a material (e.g., mesh, etc.) that fits over the ear to help secure it in place. The controller (e.g., control circuitry, a battery, power control circuitry, waveform generator, a trigeminal stimulation drive and vagus stimulation drive) may be held by the retainer; the vagus nerve stimulator may include a biocompatible adhesive (e.g., hydrogel, etc.) for making electrical contact with the ear. A connector (e.g., cable, wire, etc.) connects the controller with a trigeminal stimulator 308 that may be worn on the patient's face (e.g., in the V1, V2 and/or V3 region, as shown in FIG. 1). The controller may be connected (via a wired or wireless connection) to a user interface that may control starting/stopping of the dose, or in some variations the housing may include a control (e.g., button, dial, etc.). The dose may be preprogrammed into the controller and/or it may be adjusted.
FIG. 3D is another example of a combined trigeminal and vagus nerve stimulator for treating bleeding (e.g., for reducing bleed time) as described. In FIG. 3D, the apparatus is configured or adapted to fit at least partially into the patient's ear, as shown. The apparatus may be held in the ear 360 by an ear retainer 312 to secure it in place. The retainer may fit over the back of the ear and/or partially under the ear to hold the apparatus in/on the ear. In FIG. 3D, the controller (e.g., control circuitry, a battery, power control circuitry, waveform generator, a trigeminal stimulation drive and vagus stimulation drive) is shown on the front; in some variations the controller may be on the back of the apparatus (e.g., held behind the ear). The apparatus may also include a vagus stimulator 307 that is configured to contact the ear. A connector (e.g., cable, wire, etc.) connects the controller to a first trigeminal stimulator 308 that may be worn on the patient's face (e.g., in the V1, V2 and/or V3 region, as shown in FIG. 1).
One or more additional trigeminal stimulators308′ may be connected as well (e.g., in parallel or in series with the first trigeminal stimulator). Thus, multiple sites may be used for trigeminal stimulation. The controller may be connected (via a wire or wireless connection) to a user interface that may control starting/stopping of the dose, or in some variations the housing may include a control (e.g., button, dial, etc.). The dose may be preprogrammed into the controller and/or it may be adjusted. In any of these apparatuses, the vagus stimulator (vagus nerve stimulator) may be an electrical or a mechanical stimulator. In variations in which the apparatus is an electrical stimulator, the vagus stimulator may include one or more electrodes that may be coupled to the patient's skin and/or may penetrate into the skin (e.g., as shallow needle electrodes). The electrodes may apply electrical energy to modulate the vagus nerve, as descried herein. Mechanical stimulators may apply mechanical energy as described above. Similarly, any of these apparatuses may include one or more trigeminal stimulators that may be configured to apply electrical stimulation (e.g., including one or more electrodes, which may include a hydrogel for making skin contact). The trigeminal stimulators may alternatively be mechanical stimulators. In any of the methods and apparatuses described herein, VNS/TNS can modulate both the patient's sympathetic nervous system (SNS) and parasympathetic nervous system (PNS) activities to reduce bleed time. As mentioned above, any of these methods and apparatuses may be configured to non-invasively applying neuromodulation of the trigeminal nerve and vagus nerve. Alternatively or additionally, invasive (e.g., using a needle electrode, implant, etc.) may be used for either VNS, TNS or both VNS and TNS. For example, non-invasive trigeminal stimulation may be applied via one or more skin surface electrodes that apply trigeminal stimulation to one or more of the subject's forehead, cheek(s), nose, tongue, or other facial skin. In some embodiments, applying the non-invasive neurostimulation to the subject's trigeminal nerve includes targeting at least one of the ophthalmic nerve, maxillary nerve, or mandibular nerve. Alternatively, in some variations, applying non-invasive neurostimulation to the subject's trigeminal nerve includes avoiding targeting at least one of the ophthalmic nerve, maxillary nerve, or mandibular nerve. Any appropriate frequency and/or amplitude and/or duration may be used. In some embodiments, applying the non-invasive neurostimulation to the subject's trigeminal nerve comprises non-invasive neurostimulation has a frequency of 1-300 (e.g., between 10-60 Hz, etc.). In some embodiments, the non-invasive neurostimulation has an intensity of 2 mV-20 V (e.g., between 0.5 V and 15 V, between 1 V and 12 V, etc.). In some embodiments, the non-invasive neurostimulation has a duty cycle of between about 20% to 70% (e.g., 1 second “on” and 1-2 seconds “off”). In some embodiments, the non-invasive neurostimulation includes a pulse width of between about 0.1 ms to 10 ms (e.g., between about .1 ms to 5 ms, between about 0.25 to 5 ms, etc.). In some embodiments, at least one of a stimulation voltage or a current is increased gradually (e.g., steps of 0.1 V). In some embodiments, the closed-loop trigeminal and/or vagus nerve stimulation is conducted based on a heart rate of the patient (e.g., subject). 
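The gradual amplitude ramp (e.g., 0.1 V steps) and heart-rate-based closed-loop control mentioned above could, under one set of assumptions, be combined into a simple update rule: ramp up while the heart rate stays inside a target band (consistent with not inducing a dive-reflex-like drop), and step back down otherwise. The following Python sketch is hypothetical; the band limits, step size, and maximum amplitude are placeholders, not values taken from this disclosure.

```python
def adjust_amplitude(current_v: float, heart_rate_bpm: float,
                     hr_low: float = 55.0, hr_high: float = 95.0,
                     step_v: float = 0.1, max_v: float = 5.0) -> float:
    """One iteration of a hypothetical closed-loop rule.

    Ramp the stimulation amplitude up gradually (0.1 V steps) while the heart rate
    stays inside a target band; step back down if the heart rate leaves the band.
    """
    if hr_low <= heart_rate_bpm <= hr_high:
        return min(current_v + step_v, max_v)
    return max(current_v - step_v, 0.0)

amplitude = 0.5
for hr in [72, 74, 71, 52, 54, 70]:  # simulated heart-rate readings (bpm)
    amplitude = adjust_amplitude(amplitude, hr)
    print(f"HR={hr} bpm -> amplitude={amplitude:.1f} V")
```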
In some embodiments, the closed-loop trigeminal nerve stimulation is conducted based on a heart rate variability (HRV) of the patient. In some embodiments, certain parameters of the stimulation are modulated to maintain values of the parameters within a target range (e.g., preventing a heart rate or blood pressure effect, etc.).
FIG. 4 is another example of a combined vagus and trigeminal nerve stimulator for reducing bleeding (e.g., reducing bleed time). In FIG. 4, the apparatus includes a housing 401 enclosing a controller 402 (e.g., control circuitry) and a battery 404. The housing is configured to fit behind a patient's ear (not shown), and to insert a vagus stimulator 407 into the ear so that it is in contact with the region over the auricular branch of the vagus nerve. In FIG. 4 the apparatus also includes an additional retainer 412 to help anchor the vagus stimulator. A trigeminal stimulator 408 is connected to the controller as well (the connection is shown as a wire). The trigeminal stimulator may be an electrode pad that is in electrical communication with the controller (e.g., driver, waveform generator, etc.); similarly the vagus stimulator may include an electrode (or electrode pad) that is in electrical communication with the controller.
FIG. 5 schematically illustrates some of the components of an apparatus for combined vagus and trigeminal stimulation to reduce bleeding as described above in FIGS. 3A-4. In the schematic of FIG. 5, the controller 502 may be separate from or integrated with one or more drivers 510 and waveform generators 506 that may generate and provide power to the trigeminal stimulator (e.g., shown here as a trigeminal electrode 508) and vagus stimulator (shown as a vagus electrode 507). The controller may also be connected to or include wireless communication circuitry 514 for wirelessly communicating 522 with one or more external devices 520 (shown in this example as a smartphone, though any external processor may be used). In FIG. 5, the controller (including control circuitry) may be housed within a housing 501. In some variations this housing may be configured or adapted to fit into, on and/or over a patient's ear (generically referred to as on the patient's ear).
When a feature or element is herein referred to as being “on” another feature or element, it can be directly on the other feature or element or intervening features and/or elements may also be present. In contrast, when a feature or element is referred to as being “directly on” another feature or element, there are no intervening features or elements present. It will also be understood that, when a feature or element is referred to as being “connected”, “attached” or “coupled” to another feature or element, it can be directly connected, attached or coupled to the other feature or element or intervening features or elements may be present. In contrast, when a feature or element is referred to as being “directly connected”, “directly attached” or “directly coupled” to another feature or element, there are no intervening features or elements present. Although described or shown with respect to one embodiment, the features and elements so described or shown can apply to other embodiments. It will also be appreciated by those of skill in the art that references to a structure or feature that is disposed “adjacent” another feature may have portions that overlap or underlie the adjacent feature. Terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
For example, as used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items and may be abbreviated as “/”. Spatially relative terms, such as “under”, “below”, “lower”, “over”, “upper” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if a device in the figures is inverted, elements described as “under” or “beneath” other elements or features would then be oriented “over” the other elements or features. Thus, the exemplary term “under” can encompass both an orientation of over and under. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly. Similarly, the terms “upwardly”, “downwardly”, “vertical”, “horizontal” and the like are used herein for the purpose of explanation only unless specifically indicated otherwise. Although the terms “first” and “second” may be used herein to describe various features/elements (including steps), these features/elements should not be limited by these terms, unless the context indicates otherwise. These terms may be used to distinguish one feature/element from another feature/element. Thus, a first feature/element discussed below could be termed a second feature/element, and similarly, a second feature/element discussed below could be termed a first feature/element without departing from the teachings of the present invention. Throughout this specification and the claims which follow, unless the context requires otherwise, the word “comprise”, and variations such as “comprises” and “comprising”, mean that various components can be co-jointly employed in the methods and articles (e.g., compositions and apparatuses including devices and methods). For example, the term “comprising” will be understood to imply the inclusion of any stated elements or steps but not the exclusion of any other elements or steps. In general, any of the apparatuses and methods described herein should be understood to be inclusive, but all or a sub-set of the components and/or steps may alternatively be exclusive, and may be expressed as “consisting of” or alternatively “consisting essentially of” the various components, steps, sub-components or sub-steps. As used herein in the specification and claims, including as used in the examples and unless otherwise expressly specified, all numbers may be read as if prefaced by the word “about” or “approximately,” even if the term does not expressly appear. The phrase “about” or “approximately” may be used when describing magnitude and/or position to indicate that the value and/or position described is within a reasonable expected range of values and/or positions.
For example, a numeric value may have a value that is +/−0.1% of the stated value (or range of values), +/−1% of the stated value (or range of values), +/−2% of the stated value (or range of values), +/−5% of the stated value (or range of values), +/−10% of the stated value (or range of values), etc. Any numerical values given herein should also be understood to include about or approximately that value, unless the context indicates otherwise. For example, if the value “10” is disclosed, then “about 10” is also disclosed. Any numerical range recited herein is intended to include all sub-ranges subsumed therein. It is also understood that when a value is disclosed, “less than or equal to” the value, “greater than or equal to” the value, and possible ranges between values are also disclosed, as appropriately understood by the skilled artisan. For example, if the value “X” is disclosed, then “less than or equal to X” as well as “greater than or equal to X” (e.g., where X is a numerical value) is also disclosed. It is also understood that, throughout the application, data is provided in a number of different formats, and that this data represents endpoints and starting points, and ranges for any combination of the data points. For example, if a particular data point “10” and a particular data point “15” are disclosed, it is understood that greater than, greater than or equal to, less than, less than or equal to, and equal to 10 and 15 are considered disclosed, as well as between 10 and 15. It is also understood that each unit between two particular units is also disclosed. For example, if 10 and 15 are disclosed, then 11, 12, 13, and 14 are also disclosed. Although various illustrative embodiments are described above, any of a number of changes may be made to various embodiments without departing from the scope of the invention as described by the claims. For example, the order in which various described method steps are performed may often be changed in alternative embodiments, and in other alternative embodiments one or more method steps may be skipped altogether. Optional features of various device and system embodiments may be included in some embodiments and not in others. Therefore, the foregoing description is provided primarily for exemplary purposes and should not be interpreted to limit the scope of the invention as it is set forth in the claims. The examples and illustrations included herein show, by way of illustration and not of limitation, specific embodiments in which the subject matter may be practiced. As mentioned, other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. Such embodiments of the inventive subject matter may be referred to herein individually or collectively by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept, if more than one is, in fact, disclosed. Thus, although specific embodiments have been illustrated and described herein, any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description.
U.S. Patent No. 11,857,789
DETAILED DESCRIPTION
While multiple embodiments are described, still other embodiments of the described subject matter will become apparent to those skilled in the art from the following detailed description and drawings, which show and describe illustrative embodiments of the disclosed inventive subject matter. As will be realized, the inventive subject matter is capable of modifications in various aspects, all without departing from the spirit and scope of the described subject matter. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not restrictive. Embodiments herein describe a neurostimulation (NS) system configured to deliver NS therapy to a target region within a patient. The NS therapy is defined by one or more stimulation parameters. The NS therapy is delivered proximate to neural tissue of interest that is associated with a target region. The term “stimulation parameters” refers to electrical characteristics of the NS therapy. The stimulation parameters may represent a pulse width, a frequency, an amplitude, a duty cycle, an NS therapy type, and/or the like. The NS therapy type can represent a characteristic of the NS therapy delivered by the NS system. The characteristic may correspond to stimulation and/or pulse patterns of the NS therapy. The pulse patterns may be a burst stimulation waveform or a tonic stimulation waveform of the NS therapy. The tonic stimulation waveform represents a pulse repeated at a rate defined by the duty cycle. The burst stimulation waveform represents a series of pulses grouped to form a pulse train. The pulse train may be repeated at a cycle rate defined by the duty cycle. The term “active,” when referring to an electrode, shall mean a stimulation electrode that is utilized to deliver stimulation in connection with one or more types of therapy for the present patient. The term “inactive,” when referring to an electrode, shall mean an unused, non-stimulation electrode that is not used to deliver stimulation in connection with any type of therapy for the present patient. The inactive electrode may also be referred to as an unused or non-stimulation electrode, as no therapy is delivered through the electrode. As explained herein, one or more inactive electrodes are used as part of a feedback control loop in connection with substantially minimizing MRI/EMI-induced stimulation interference. The terms “electromagnetic interference” and “EMI” shall mean interference experienced by an NS system when exposed to electromagnetic fields. One non-limiting example is when an NS system is in the presence of a magnetic resonance imaging (MRI) field, the NS system will experience EMI. The terms “actively emulated passive discharge profile” and “AEPD profile” refer to a shape of a curve plotting charge, voltage and/or current over time while discharging a residual voltage built up across a load and/or between anode and cathode electrodes of an NS system. The terms “non-electrode wire” and “dummy wire” shall mean a conductor provided within a stimulation lead, or routed with insulation substantially alongside the outside of a stimulation lead, that is coupled (at a proximal end) to a current regulator circuit (as described herein) and is not coupled to an electrode at the distal end. By way of example, the non-electrode wire may have a distal end that is allowed to electrically float relative to the human tissue, and a proximal end which connects to a current regulator circuit in an NS system.
The non-electrode wire may be located at an intermediate and/or distal portion of a body of the lead, alongside other conductive wires that are coupled to the electrodes utilized by the NS system. The non-electrode wire may be enclosed within the lead or routed with insulation substantially alongside the outside of the lead body to avoid direct electrical contact with human tissue.
Overview
In accordance with embodiments herein, methods and systems implement a current regulator (CR) circuit that exhibits very efficient performance while utilizing a limited circuit area within the NS system, and that provides a relatively non-complex electronic control circuit. The CR circuit provides an improved and optimized control architecture for active emulation of passive discharge in the presence of EMI. Embodiments herein provide a compact and efficient current regulator circuit that affords an advantageous imitation scheme for achieving numerous advancements in connection with implementing an actively emulated passive discharge. Embodiments herein implement the actively emulated passive discharge even while in the presence of MRI scans and other EMI events, thereby enabling continuous delivery of deep brain stimulation therapy, such as in connection with patients experiencing debilitating motion disorders and other brain disorders. Embodiments herein build upon and highly optimize actively emulated passive discharge configurations. Among other things, methods and systems herein utilize a self-contained current regulator circuit architecture for controlling an exponentially decreasing discharge current. Embodiments herein control the discharge current in a manner that alleviates the need for any kind of model extraction of an IPG load. Embodiments herein further alleviate the need to determine an initial discharge current, or to determine discharge control parameters that might otherwise be appropriate for an actively emulated passive discharge control circuit programmed specifically for use while in the presence of a particular type of MRI field strength or scan type. An EMI antenna can be utilized to sense and mitigate interference voltages induced by EMI. By way of example, the EMI antenna may include one or more Kelvin connect electrodes or unused electrodes in a lead that are not being used to deliver stimulation therapy to the patient. The unused electrode can operate as the EMI antenna to sense and mitigate interference voltages induced by EMI. Additionally or alternatively, the EMI antenna may be constructed as a “dummy” wire (also referred to as a non-electrode wire) provided within the lead, or routed with insulation substantially alongside the outside of the lead, and arranged to extend alongside other stimulation wires in the lead. The dummy wire may not electrically conduct with human tissue, and thus may not be considered to be an “electrode.” Among other things, embodiments herein utilize the insight that, during an MRI scan or other type of EMI event (collectively, EMI), the interference voltages induced on each electrode of a DBS lead are very similar (e.g., nearly identical) and/or exhibit a common mode characteristic across all electrodes. More specifically, the EMI induces similar voltage variations at each of the electrodes at any given instant in time. Embodiments herein utilize the foregoing point by designating an inactive or unused electrode of a neural stimulation lead to provide a feedback control signal to the current regulator circuit.
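For context only, a passive discharge through a resistive-capacitive load decays exponentially, which is the shape an actively emulated passive discharge tracks. The short Python sketch below simply tabulates an assumed exponential current profile i(t) = i0·exp(−t/τ); it does not model the current regulator circuit or the feedback control described herein, and the initial current and time constant are arbitrary examples (the embodiments above specifically avoid having to precompute such values from an IPG load model).

```python
import math

def aepd_current_samples(i0_ma: float, tau_ms: float, step_ms: float, n_steps: int):
    """Target discharge currents for an exponentially decaying (passive-like) discharge.

    A passive discharge through an RC load decays exponentially; this returns the
    sampled current i(t) = i0 * exp(-t / tau) that such a discharge would follow.
    """
    return [i0_ma * math.exp(-(k * step_ms) / tau_ms) for k in range(n_steps)]

# Example: 2 mA initial discharge current, 1 ms time constant, sampled every 0.1 ms.
for k, i in enumerate(aepd_current_samples(i0_ma=2.0, tau_ms=1.0, step_ms=0.1, n_steps=5)):
    print(f"t={k * 0.1:.1f} ms  i={i:.3f} mA")
```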
As a nonlimiting example, the unused electrode may be configured as a “Kelvin connection” electrode. The feedback control signal is utilized by the current regulator circuit for simple, effective and efficient control of an actively emulated passive discharge. The feedback control, via a Kelvin connection electrode, is highly effective at canceling out interference voltages induced by EMI, as well as greatly simplifying an implementation of the AEPD operation and eliminating the need for numerous other structures, such as an IPG load calculation, excess memory storage for EMI-related discharge parameters or settings, a complex discharge control state machine and/or extensive computations for the imitation and control of the AEPD operation while in the presence of EMI.
FIG. 1 depicts a schematic block diagram of an embodiment of a neurostimulation (NS) system 100. The NS system 100 is configured to generate electrical pulses (e.g., excitation pulses) for application to neural tissue of the patient according to one embodiment. For example, the NS system 100 may be adapted to stimulate spinal cord tissue, dorsal root, dorsal root ganglion (DRG), peripheral nerve tissue, deep brain tissue, cortical tissue, cardiac tissue, digestive tissue, pelvic floor tissue, and/or any other suitable neural tissue of interest within a body of a patient. The NS system 100 includes an implantable pulse generator (IPG) 150 that is adapted to generate electrical pulses for application to tissue of a patient. The IPG 150 typically comprises a metallic housing or can 158 that encloses a controller circuit 151, pulse generating circuitry 152, a charging coil 153, a battery 154, a communication circuit 155, battery charging circuitry 156, switching circuitry 157, memory 161, and/or the like. The communication circuit 155 may represent hardware that is used to transmit and/or receive data along a uni-directional communication link and/or bi-directional communication link (e.g., with an external device 160). The controller circuit 151 is configured to control the operation of the IPG 150. The controller circuit 151 may include one or more processors, a central processing unit (CPU), one or more microprocessors, or any other electronic component capable of processing input data according to program instructions. Optionally, the controller circuit 151 may include and/or represent one or more hardware circuits or circuitry that include, are connected with, or that both include and are connected with one or more processors, controllers, and/or other hardware logic-based devices. Additionally or alternatively, the controller circuit 151 may execute instructions stored on a tangible and non-transitory computer readable medium (e.g., the memory 161). The IPG 150 may include a separate or an attached extension component 170. The extension component 170 may be a separate component. For example, the extension component 170 may connect with a “header” portion of the IPG 150, as is known in the art. If the extension component 170 is integrated with the IPG 150, internal electrical connections may be made through respective conductive components. Within the IPG 150, electrical pulses are generated by the pulse generating circuitry 152 and are provided to the switching circuitry 157. The switching circuitry 157 connects to outputs of the IPG 150. Electrical connectors (e.g., “Bal-Seal” connectors) within the connector portion 171 of the extension component 170 or within the IPG header may be employed to conduct various stimulation pulses.
The terminals of one or more leads 110 are inserted within the connector portion 171 or within the IPG header for electrical connection with respective connectors. The pulses originating from the IPG 150 are provided to the one or more leads 110. The pulses are then conducted through the conductors of the lead 110 and applied to tissue of a patient via an electrode array 111. Any suitable known or later developed design may be employed for the connector portion 171. The electrode array 111 may be positioned on a paddle structure of the lead 110, for example, in a planar formation on a paddle structure as disclosed in U.S. Provisional Application No. 61/791,288, entitled “PADDLE LEADS FOR NEUROSTIMULATION AND METHOD OF DELIVERYING THE SAME,” which is expressly incorporated herein by reference. The electrode array 111 includes a plurality of electrodes 112 aligned along corresponding rows and columns. Each of the electrodes 112 is separated by non-conducting portions of the paddle structure, which electrically isolate each electrode 112 from an adjacent electrode 112. The non-conducting portions may include one or more insulative materials and/or biocompatible materials to allow the lead 110 to be implantable within the patient. Non-limiting examples of such materials include polyimide, polyetheretherketone (PEEK), polyethylene terephthalate (PET) film (also known as polyester or Mylar), polytetrafluoroethylene (PTFE) (e.g., Teflon), parylene coating, polyether block amides, or polyurethane. The electrodes 112 may be configured to emit pulses in an outward direction. Optionally, the IPG 150 may have one or more leads 110 connected via the connector portion 171 of the extension component 170 or within the IPG header, for example, a DRG stimulator, a steerable percutaneous lead, and/or the like. Additionally or alternatively, the electrodes 112 of each lead 110 may be configured separately to emit excitation pulses.
Leads
FIGS. 2A-2I, respectively, depict stimulation portions 200-208 for inclusion at the distal end of the lead 110. For example, the stimulation portions 200-208 depict a conventional stimulation portion of a “percutaneous” lead with multiple electrodes 112. The stimulation portions 200-208 depict a stimulation portion including several segmented electrodes 112. Example fabrication processes are disclosed in U.S. patent application Ser. No. 12/895,096, entitled “METHOD OF FABRICATING STIMULATION LEAD FOR APPLYING ELECTRICAL STIMULATION TO TISSUE OF A PATIENT,” which is incorporated herein by reference. Stimulation portions 204-208 include multiple electrodes 112 on paddle structures alternative to that shown in FIG. 1. In connection with FIG. 1, the lead 110 may include a lead body 172 of insulative material about a plurality of conductors within the material that extend from a proximal end of the lead 110, proximate to the IPG 150, to its distal end. The conductors electrically couple a plurality of the electrodes 112 to a plurality of terminals (not shown) of the lead 110. The terminals are adapted to receive electrical pulses and the electrodes 112 are adapted to apply the pulses to the stimulation target of the patient. It should be noted that although the lead 110 is depicted with twenty electrodes 112, the lead 110 may include any suitable number of electrodes 112 (e.g., less than twenty, more than twenty) as well as terminals and internal conductors.
Although not required for all embodiments, the lead body 172 of the lead 110 may be fabricated to flex and elongate upon implantation or advancement within the tissue (e.g., nervous tissue) of the patient towards the stimulation target, and with movements of the patient during or after implantation. By fabricating the lead body 172 in this manner, according to some embodiments, the lead body 172 or a portion thereof is capable of elastic elongation under relatively low stretching forces. Also, after removal of the stretching force, the lead body 172 may be capable of resuming its original length and profile. For example, the lead body may stretch 10%, 20%, 25%, 35%, or even up to or above 50% at forces of about 0.5, 1.0, and/or 2.0 pounds of stretching force. Fabrication techniques and material characteristics for “body compliant” leads are disclosed in greater detail in U.S. Provisional Patent Application No. 60/788,518, entitled “Lead Body Manufacturing,” which is expressly incorporated herein by reference. For implementation of the components within the IPG 150, a processor and associated charge control circuitry for an IPG are described in U.S. Pat. No. 7,571,007, entitled “SYSTEMS AND METHODS FOR USE IN PULSE GENERATION,” which is expressly incorporated herein by reference. Circuitry for recharging a rechargeable battery (e.g., battery charging circuitry 156) of an IPG using inductive coupling and external charging circuits are described in U.S. Pat. No. 7,212,110, entitled “IMPLANTABLE DEVICE AND SYSTEM FOR WIRELESS COMMUNICATION,” which is expressly incorporated herein by reference. An example and discussion of “constant current” pulse generating circuitry (e.g., pulse generating circuitry 152) is provided in U.S. Patent Publication No. 2006/0170486, entitled “PULSE GENERATOR HAVING AN EFFICIENT FRACTIONAL VOLTAGE CONVERTER AND METHOD OF USE,” which is expressly incorporated herein by reference. One or multiple sets of such circuitry may be provided within the IPG 150. Different pulses on different electrodes 112 may be generated using a single set of the pulse generating circuitry 152 using consecutively generated pulses according to a “multi-stimset program,” as is known in the art. Complex stimulation parameters may be employed such as those described in U.S. Pat. No. 7,228,179, entitled “Method and apparatus for providing complex tissue stimulation patterns,” and International Patent Publication Number WO 2001/093953 A1, entitled “NEUROMODULATION THERAPY SYSTEM,” which are expressly incorporated herein by reference. Alternatively, multiple sets of such circuitry may be employed to provide pulse patterns (e.g., the tonic stimulation waveform, the burst stimulation waveform) that include generated and delivered stimulation pulses through various electrodes 112 of the one or more leads 110, as is also known in the art. Various sets of stimulation parameters may define the characteristics and timing for the pulses applied to the various electrodes 112, as is known in the art. Although constant current pulse generating circuitry is contemplated for some embodiments, any other suitable type of pulse generating circuitry may be employed, such as constant voltage pulse generating circuitry.
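As an illustrative aside, the “multi-stimset program” concept referenced above (a single set of pulse generating circuitry delivering consecutively generated pulses with different parameters on different electrodes) can be pictured as a simple round-robin over parameter sets. The Python sketch below is hypothetical, using made-up electrode numbers and parameter values; it is not the multi-stimset implementation referenced in the incorporated documents.

```python
from itertools import cycle

def multi_stimset_schedule(stimsets, n_pulses: int):
    """Illustrative 'multi-stimset' scheduling: a single pulse generator delivers
    consecutively generated pulses, cycling through per-electrode parameter sets."""
    order = cycle(stimsets)
    return [next(order) for _ in range(n_pulses)]

# Hypothetical stimsets: electrode number, amplitude in mA, pulse width in microseconds.
stimsets = [
    {"electrode": 1, "amplitude_ma": 2.0, "pulse_width_us": 90},
    {"electrode": 4, "amplitude_ma": 1.5, "pulse_width_us": 60},
    {"electrode": 7, "amplitude_ma": 2.5, "pulse_width_us": 120},
]
for pulse in multi_stimset_schedule(stimsets, n_pulses=6):
    print(pulse)
```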
The external device160may be implemented to charge/recharge the battery154of the IPG150(although a separate recharging device could alternatively be employed), to access the memory161, to program the IPG150when implanted within the patient, to communicate triggering events to the NS system100, and/or the like.FIG.3depicts a schematic block diagram of an embodiment of the external device160. The external device160may be a workstation, a portable computer, an NS system programmer, a PDA, a cell phone, a smart phone, a tablet, and/or the like. FIG.3illustrates a block diagram of an external device formed in accordance with embodiments herein. The external device160includes an internal bus that connects/interfaces with a Central Processing Unit (CPU)302, ROM304, RAM306, a hard drive308, a speaker310, a printer312, a CD-ROM drive314, a floppy drive316, a parallel I/O circuit318, a serial I/O circuit320, a display322, a touch screen324, a standard keyboard connection326, custom keys328, and a radio frequency (RF) subsystem330. The internal bus is an address/data bus that transfers information between the various components described herein. The hard drive308may store operational programs as well as data, such as waveform templates and detection thresholds. The CPU302is configured to control the operation of the external device160. The CPU302may include one or more processors. Optionally, the CPU302may include one or more microprocessors, a graphics processing unit (GPU), or any other electronic component capable of processing inputted data according to specific logical instructions. Optionally, the CPU302may include and/or represent one or more hardware circuits or circuitry that include, are connected with, or that both include and are connected with one or more processors, controllers, and/or other hardware logic-based devices. Additionally or alternatively, the CPU302may execute instructions stored on a tangible and non-transitory computer readable medium (e.g., the ROM304, the RAM306, hard drive308). Optionally, the CPU302may include RAM or ROM memory, logic and timing circuitry, state machine circuitry, and/or I/O circuitry to interface with the NS system100. The display322may be connected to a video display332. The touch screen324may display graphic information relating to the NS system100. The display322displays various information related to the processes described herein. The touch screen324accepts a user's touch input334when selections are made. The keyboard326(e.g., a typewriter keyboard336) allows the user to enter data to the displayed fields, as well as interface with the RF subsystem330. The touch screen324and/or the keyboard326is configured to allow the user to operate the NS system100. The external device160may be controlled by the user (e.g., doctor, clinician, patient) through the touch screen324and/or the keyboard326allowing the user to interact with the NS system100. The touch screen324and/or the keyboard326may permit the user to move electrical stimulation along and/or across one or more of the lead(s)110using different electrode112combinations, for example, as described in U.S. Patent Application Publication No. 2009/0326608, entitled “METHOD OF ELECTRICALLY STIMULATING TISSUE OF A PATIENT BY SHIFTING A LOCUS OF STIMULATION AND SYSTEM EMPLOYING THE SAME,” which is expressly incorporated herein by reference. 
Optionally, the touch screen324and/or the keyboard326may permit the user to designate which electrodes112are to stimulate (e.g., emit excitation pulses, in an anode state, in a cathode state) the stimulation target. Custom keys328turn on/off338the external device160. The printer312prints copies of reports340for a physician to review or to be placed in a patient file, and the speaker310provides an audible warning (e.g., sounds and tones342) to the clinician and/or patient. The parallel I/O circuit318interfaces with a parallel port344. The serial I/O circuit320interfaces with a serial port346. The floppy drive316accepts diskettes348. Optionally, the floppy drive316may include a USB port or other interface capable of communicating with a USB device such as a memory stick. The CD-ROM drive314accepts CD ROMs350. The RF subsystem330includes a central processing unit (CPU)352in electrical communication with an RF circuit354. The RF subsystem330is configured to receive and/or transmit information with the NS system100. The RF subsystem330may represent hardware that is used to transmit and/or receive data along a uni-directional and/or bi-directional communication link. The RF subsystem330may include a transceiver, receiver, transceiver and/or the like and associated circuitry (e.g., antennas) for wirelessly communicating (e.g., transmitting and/or receiving) with the NS system100. For example, protocol firmware for transmitting and/or receiving data along the uni-directional and/or bi-directional communication link may be stored in the memory (e.g., the ROM304, the RAM306, the hard drive308), which is accessed by the CPU352. The protocol firmware provides the network protocol syntax for the CPU352to assemble data packets, establish and/or partition data received along the uni-directional and/or bi-directional communication links, and/or the like. The uni-directional and/or bi-directional communication link can represent a wireless communication (e.g., utilizing radio frequency (RF)) link for exchanging data (e.g., data packets) between the NS system100and the external device160. The uni-directional and/or bi-directional communication link may be based on a customized communication protocol and/or a standard communication protocol, such as Bluetooth, NFC, RFID, GSM, infrared wireless LANs, HIPERLAN, 3G, LTE, and/or the like. Additionally or alternatively, the RF subsystem330may be operably coupled to a “wand”165(FIG.1). The wand165may be electrically connected to a telemetry component166(e.g., inductor coil, RF transceiver) at the distal end of wand165through respective wires (not shown) allowing bi-directional communication with the NS system100. For example, the user may initiate communication with the NS system100by placing the wand165proximate to the NS system100. Preferably, the placement of the wand165allows the telemetry system of the wand165to be aligned with the communication circuit155. Also, the external device160may permit operation of the IPG150according to one or more NS programs or therapies to treat the patient. For example, the NS program corresponds to the NS therapy and/or executed by the IPG150. Each NS program may include one or more sets of stimulation parameters of the pulses including pulse amplitude, stimulation level, pulse width, pulse frequency or inter-pulse period, pulse repetition parameter (e.g., number of times for a given pulse to be repeated for respective stimset during execution of program), biphasic pulses, monophasic pulses, etc. 
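The paragraph above lists the kinds of parameters an NS program may carry. The sketch below is a hypothetical illustration (field names, limits, and the JSON serialization are assumptions, not the device's actual data format) of how a programmer application might represent and validate such a parameter set before sending it over the wireless link.

```python
# Illustrative parameter set for an NS program: amplitude, pulse width,
# frequency or inter-pulse period, a repetition count, and a pulse shape flag.
from dataclasses import dataclass, asdict
import json

@dataclass
class NSProgram:
    amplitude_ma: float        # pulse amplitude (constant current)
    pulse_width_us: int        # pulse width
    frequency_hz: float        # pulse frequency (inter-pulse period = 1/frequency)
    repeats_per_stimset: int   # times a given pulse repeats within its stimset
    biphasic: bool             # True = charge-balanced biphasic, False = monophasic

    def validate(self):
        # Hypothetical guard rails; real limits come from the device labeling.
        assert 0.0 < self.amplitude_ma <= 25.5
        assert 10 <= self.pulse_width_us <= 1000
        assert 2.0 <= self.frequency_hz <= 1200.0

program = NSProgram(amplitude_ma=3.5, pulse_width_us=120,
                    frequency_hz=130.0, repeats_per_stimset=1, biphasic=True)
program.validate()
# A programmer application could serialize the program before sending it
# over the RF link to the IPG.
print(json.dumps(asdict(program)))
```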
The IPG150may modify its internal parameters in response to the control signals from the external device160to vary the stimulation characteristics of the stimulation pulses transmitted through the lead110to the tissue of the patient. NS systems, stimsets, and multi-stimset programs are discussed in PCT Publication No. WO 01/93953, entitled “NEUROMODULATION THERAPY SYSTEM,” and U.S. Pat. No. 7,228,179, entitled “METHOD AND APPARATUS FOR PROVIDING COMPLEX TISSUE STIMULATION PATTERNS,” which are expressly incorporated herein by reference. Directing attention toFIG.4, stimulation system400is adapted according to an embodiment and is shown in a high-level functional block diagram. In operation, stimulation system400generates and applies a stimulus to tissue or a certain location of a body. Stimulation system400of the illustrated embodiment includes a generator portion, shown as implantable pulse generator (IPG)410, providing a stimulation or energy source, stimulation portion, shown as lead430, for application of the stimulus pulse(s), and an optional external controller, shown as programmer/controller440, to program and/or control implantable pulse generator410via a wireless communications link. IPG410may be implanted within a living body (not shown) for providing electrical stimulation from IPG410to a selected area of the body via lead430, perhaps under control of external programmer/controller440. It should be appreciated that, although lead430is illustrated to provide a stimulation portion of stimulation system400configured provide stimulation remotely with respect to the generator portion of stimulation system400, a lead as described herein is intended to encompass a variety of stimulation portion configurations. For example, lead430may comprise a microstimulator electrode disposed adjacent to a generator portion. Furthermore, a lead configuration may include more (e.g., 8, 16, 32, etc.) or fewer (e.g., 1, 2, etc.) electrodes than those represented in the illustrations. As explained herein, the lead430may include an EMI antenna configured to provide an EMI feedback signal indicative of an amount of interference voltage induced by the EMI. The EMI antenna may be implemented in various manners, such as utilizing an inactive electrode (e.g., one of electrodes432-435) and/or utilizing a non-electrode segment of wire460provided within the lead or routed with insulation substantially alongside the outside of the lead430. The wire460is not connected to any of the electrodes432-435and may be held within a body of the lead430to prevent the wire460from contacting human tissue. Alternatively, the wire460may be fully insulated and routed substantially alongside the outside of the lead430. IPG410may comprise a self-contained implantable pulse generator having an implanted power source such as a long-lasting or rechargeable battery. Alternatively, IPG410may comprise an externally-powered implantable pulse generator receiving at least some of the required operating power from an external power transmitter, preferably in the form of a wireless signal, which may be radio frequency (RF), inductive, etc. IPG410of the illustrated embodiment includes voltage regulator411, power supply412, receiver413, microcontroller (or microprocessor)414, output driver circuitry415, and clock416, as are described in further detail below. 
Power supply412provides a source of power, such as from battery421(battery421may comprise a non-rechargeable (e.g., single use) battery, a rechargeable battery, a capacitor, and/or like power sources), to other components of IPG410, as may be regulated by voltage regulator411. Charge control422of embodiments provides management with respect to battery421. Receiver413of embodiments provides data communication between microcontroller414and controller442of external programmer/controller440, via transmitter441. It should be appreciated that although receiver413is shown as a receiver, a transmitter and/or transceiver may be provided in addition to or in the alternative to receiver413, depending upon the communication links desired. Receiver413of embodiments, in addition to or in the alternative to providing data communication, provides a conduit for delivering energy to power supply412, such as where RF or inductive recharging of battery421is implemented. Microcontroller414provides control with respect to the operation of IPG410, such as in accordance with a program provided thereto by external programmer/controller440. Output driver circuitry415generates and delivers pulses to selected ones of electrodes432-435under control of microcontroller414. For example, voltage multiplier451and voltage/current control452may be controlled to deliver a constant current pulse of a desired magnitude, duration, and frequency to a load present with respect to particular ones of electrodes432-435. Clock416preferably provides system timing information, such as may be used by microcontroller414in controlling system operation, as may be used by voltage multiplier451in generating a desired voltage, etc. Lead430of the illustrated embodiment includes lead body431, preferably incarcerating a plurality of internal conductors coupled to lead connectors (not shown) to interface with lead connectors453of IPG410. Lead430further includes electrodes432-435, which are preferably coupled to the aforementioned internal conductors. The internal conductors provide electrical connection from individual lead connectors to each of a corresponding one of electrodes432-435. In the exemplary embodiment the lead430is generally configured to transmit one or more electrical signals from IPG410for application at, or proximate to, a spinal nerve or peripheral nerve, brain matter, muscle, or other tissue via electrodes432-435. IPG410is capable of controlling the electrical signals by varying signal parameters such as intensity, duration and/or frequency in order to deliver a desired therapy or otherwise provide operation as described herein. Although the embodiment illustrated inFIG.4includes 4 electrodes, it should be appreciated that any number of electrodes, and corresponding conductors, may be utilized according to some embodiments. Moreover, various types, configurations and shapes of electrodes (and lead connectors) may be used according to some embodiments. An optional lumen (not shown) may extend through the lead430, such as for use in delivery of chemicals or drugs or to accept a stylet during placement of the lead within the body. Additionally or alternatively, the lead (stimulation portion) and IPG (generator portion) may comprise a unitary construction, such as that of a microstimulator configuration. 
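To make the output driver's role concrete, the sketch below generates samples of one constant-current pulse of programmable magnitude and duration; a charge-balanced biphasic shape is used here because biphasic pulses are among the pulse types contemplated earlier in this description. The timing values and function names are illustrative assumptions.

```python
# Sketch of a constant-current, charge-balanced biphasic pulse sampled as a
# simple waveform for inspection. All values are illustrative only.
def biphasic_pulse(amplitude_ma, pulse_width_us, interphase_us=20, dt_us=10):
    """Return (time_us, current_ma) samples for one charge-balanced pulse."""
    samples = []
    t = 0
    for _ in range(pulse_width_us // dt_us):      # cathodic (stimulating) phase
        samples.append((t, -amplitude_ma))
        t += dt_us
    for _ in range(interphase_us // dt_us):       # interphase gap
        samples.append((t, 0.0))
        t += dt_us
    for _ in range(pulse_width_us // dt_us):      # anodic (charge-recovery) phase
        samples.append((t, +amplitude_ma))
        t += dt_us
    return samples

for t_us, i_ma in biphasic_pulse(amplitude_ma=3.0, pulse_width_us=60):
    print(f"t={t_us:3d} us  i={i_ma:+.1f} mA")

# Charge balance check: the two phases carry equal and opposite charge, so
# the net injected charge is approximately zero.
net_charge = sum(i for _, i in biphasic_pulse(3.0, 60))
print("net charge (arbitrary units):", net_charge)
```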
As mentioned above, external programmer/controller440of embodiments provides data communication with IPG410, such as to provide control (e.g., adjust stimulation settings), provide programming (e.g., alter the electrodes to which stimulation pulses are delivered), etc. Accordingly, external programmer/controller440of the illustrated embodiment includes transmitter441, for establishing a wireless link with IPG410, and controller442, to provide control with respect to programmer/controller414and IPG410. Additionally or alternatively, external programmer/controller440may provide power to IPG410, such as via RF transmission by transmitter441. Optionally, however, a separate power controller may be provided for charging the power source within IPG410. Additional detail with respect to pulse generation systems and the delivery of stimulation pulses may be found in U.S. Pat. No. 6,609,031, entitled “MULTIPROGRAMMABLE TISSUE STIMULATOR AND METHOD,” the disclosure of which is hereby incorporated herein by reference. Similarly; additional detail with respect to pulse generation systems and the delivery of stimulation pulses may be found in the above referenced patent application entitled “MULTI-PROGRAMMABLE TRIAL STIMULATOR.” Having generally described stimulation system400above, the discussion which follows provides detail with respect to various functional aspects of stimulation system400according to some embodiments. Although the below embodiments are described with reference to stimulation system400, and IPG410thereof, it should be appreciated that the inventive concepts described herein are not limited to application to the exemplary system and may be used in a wide variety of medical devices. Voltage Multiplier Output Voltage Directing attention toFIG.5, detail with respect to an embodiment of voltage/current control452ofFIG.4for providing voltage multiplier voltage control is shown. Voltage/current control452of the illustrated embodiment provides automatic and manual voltage control, allowing incrementing and decrementing of the output voltage, with respect to voltage multiplier451. In a manual mode of one embodiment, the output voltage setting is controlled by microcontroller414providing a set control signal to voltage/current control452. Accordingly, in this manual mode, microcontroller414is involved in the changes to the output voltage of voltage multiplier451in terms of incrementing or decrementing the values. However, in an automatic mode of one embodiment, voltage/current control452controls the changes to the output voltage of voltage multiplier451, and thus there need not be any processing overhead on the part of microcontroller414to determine the optimal value for the output voltage of voltage multiplier451. Voltage multiplier451utilized according to some embodiments preferably comprises a fractional voltage multiplier, such as may provide output voltages in fractional multiples of a supply voltage. Additional detail with respect to fractional voltage multipliers as may be utilized according to some embodiments is provided in U.S. Pat. No. 7,180,760 entitled “FRACTIONAL VOLTAGE CONVERTER”, filed Apr. 12, 2005, the complete subject matter of which is expressly incorporated herein by reference. In operation of IPG410according to some embodiments, a goal is to provide a power source to deliver a particular amount of current to load501(such as may comprise a portion of a human body into which lead430is implanted) via selected ones of electrodes432-435. 
It should be appreciated that, as set forth in Ohm's law, a particular amount of voltage provided by voltage multiplier 451 will be needed to deliver a desired level of current through load 501. However, providing a voltage level substantially in excess of the voltage needed to deliver the desired current may be undesirable. For example, voltage in excess of that needed for delivery of the desired current may be dissipated as heat or otherwise sunk, thereby resulting in inefficient use of energy from battery 421. Moreover, if the output voltage provided by voltage multiplier 451 were not set to a limit somewhat near that needed to deliver the desired current, a change in load 501 (such as by movement of lead 430 within the patient) could result in overstimulation or other undesired results. As explained herein, the lead 430 includes an EMI antenna that is utilized to sense and mitigate interference voltages induced by EMI. By way of example, the EMI antenna may include one or more Kelvin connect electrodes or unused electrodes (e.g., any one or more of the electrodes) that are not being used to deliver stimulation therapy to the patient. Additionally or alternatively, the EMI antenna may be constructed as a "dummy" wire provided within the lead or routed with insulation substantially alongside the outside of the lead and arranged to extend alongside other stimulation wires in the lead. The dummy wire may not touch human tissue, and thus may not be considered to be an "electrode." Accordingly, voltage multiplier 451 and voltage/current control 452 of some embodiments cooperate to provide a voltage-limited, constant current source. In providing the foregoing, voltage/current control 452 of the illustrated embodiment comprises detector 551 that monitors voltages as provided by voltage multiplier 451. When it is determined that the output voltage of voltage multiplier 451 is in excess (perhaps by a predetermined amount, such as a fractional voltage step amount) of what is needed to provide a desired current, detector 551 can provide a control signal to voltage set 552 to decrement the voltage. Voltage set 552 may, in turn, provide a control signal to voltage multiplier 451 to select an appropriate, lower, voltage (perhaps in one or more decremental steps). Similarly, when it is determined that the output voltage of voltage multiplier 451 is below what is needed to provide a desired current, detector 551 can provide a control signal to voltage set 552 to increment the voltage. Voltage set 552 may, in turn, provide a control signal to voltage multiplier 451 to select an appropriate, higher, voltage (perhaps in one or more incremental steps). Feedback circuit 520 provides detail with respect to providing information to detector 551 useful in making voltage increment/decrement determinations. The voltage limit 553 sets a limit beyond which voltage/current control 452 cannot, by itself, increment the output voltage. Accordingly, when a voltage limit set by voltage limit 553 is reached, voltage/current control 452 may provide a control signal to the microcontroller, such as to notify an operator of the limit being reached, for a determination with respect to whether the limit should be adjusted, etc. Additionally, the microcontroller, a clinician, or other user may manually provide voltage selection with respect to voltage multiplier 451, such as during trial stimulation, etc. Accordingly, a voltage set control signal may be provided to voltage set 552, such as by the microcontroller, to override voltage selection as provided by detector 551, if desired. 
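The increment/decrement behavior described above can be summarized as a simple control rule: keep the multiplier output just above the Ohm's-law voltage required for the programmed current, and never exceed the configured limit. The following Python sketch models that rule with invented step sizes, headroom, and load values; it is a conceptual aid, not the detector or voltage-set circuitry itself.

```python
# Sketch of a voltage-limited, constant-current rule: step the multiplier
# output up or down in fractional increments so it stays just above the
# Ohm's-law voltage needed for the programmed current, never past the limit.
def regulate_multiplier(v_out, i_target_ma, load_ohm, v_step=0.3, v_limit=12.0, headroom=0.5):
    """Return the next multiplier output voltage (one increment/decrement step)."""
    v_needed = (i_target_ma / 1000.0) * load_ohm      # Ohm's law: V = I * R
    if v_out < v_needed + headroom:
        return min(v_out + v_step, v_limit)           # increment toward compliance
    if v_out > v_needed + headroom + v_step:
        return max(v_out - v_step, 0.0)               # decrement to avoid wasted energy
    return v_out                                      # already near the optimum

# Example: the load changes from 800 ohm to 1200 ohm while delivering 5 mA.
v = 4.0
for load in [800] * 5 + [1200] * 8:
    v = regulate_multiplier(v, i_target_ma=5.0, load_ohm=load)
    print(f"load={load:4d} ohm  multiplier output={v:4.1f} V")
```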
Current Regulation Circuit
FIG. 6A illustrates a block diagram of therapy and discharge control circuits of an NS system utilized in accordance with embodiments herein. The neurostimulation (NS) system 650 comprises an array of electrodes 652 configured to be implanted within a patient and positioned proximate to neural tissue of interest that is associated with the target region. As a nonlimiting example, the NS system may be configured for use with deep brain stimulation, with the array of electrodes positioned within the brain proximate to neural tissue of interest. The array of electrodes 652 includes one or more active electrodes E1. The one or more active electrodes E1 represent a cathode. The NS system 650 also includes a Case electrode which may be configured to be an anode electrode. Optionally, an electrode from the array of electrodes 652 may be configured to operate as the anode electrode. While the examples herein are described in connection with a single electrode E1 as the active electrode used as a cathode, it is recognized that in many embodiments, two or more active electrodes may be utilized. When two or more active electrodes E1 are utilized, embodiments herein may implement discharge operations in the presence of EMI events in a common discharge operation, and/or as separate discharge operations. For example, all active electrodes E1 may be connected to one another during the discharge operation in a common manner to collectively and jointly discharge any residual voltage. As another example, separate subsets of the group of active electrodes E1 may be connected to separate current regulator circuits to have residual voltages discharged separately. The array of electrodes may include one or more inactive electrodes E0. One or more of the inactive electrodes E0 may be utilized as the EMI antenna to sense and mitigate interference voltages induced by EMI. By way of example, the EMI antenna may include one or more Kelvin connect electrodes in the NS lead that are not being used to deliver stimulation therapy to the patient. The NS system 650 includes a control circuit 654 that is configured to control delivery of an NS therapy during therapy delivery intervals between the active cathode electrode and the anode electrode. The NS therapy is delivered through the active cathode electrode E1 proximate to neural tissue of interest that is associated with a target region. The array of electrodes 652 develops a residual voltage (e.g., an accumulated charge) over the therapy delivery intervals. The residual voltage is induced/developed between the anode electrode (e.g., CASE) and the active cathode electrode(s) over the course of the NS therapy. A current regulator (CR) circuit 656 is connected to the active cathode electrode E1. The CR circuit 656 is configured to control current flow through the cathode electrode E1. The control circuit 654 is coupled to the CR circuit 656 and, during the discharge operation, the control circuit 654 is configured to manage the CR circuit 656 to control the discharge current flow over the discharge operation to discharge the residual voltage in a manner that follows the actively emulated passive discharge (AEPD) profile between the therapy delivery intervals. While embodiments herein are described in connection with the use of a single AEPD profile, it is recognized that more than one AEPD profile may be utilized. For example, when an NS therapy delivers different types of stimulation to different combinations of electrodes, different residual voltages may build up upon the corresponding combinations of electrodes. 
Accordingly, a separate AEPD profile may be assigned in connection with each combination of electrodes for the corresponding residual voltage. By way of example, the AEPD profile may be managed in accordance with the methods and systems described in a co-pending application Ser. No. 16/364,975 filed Mar. 26, 2019, entitled “EMULATING PASSIVE DISCHARGE OF ELECTRODES USING A PROGRAMMABLE EXPONENTIALLY DECREASING AMPLITUDE DISCHARGE CURRENT”, the complete subject matter of which is expressly incorporated herein by reference in its entirety. During the discharge operation, the CR circuit656is connected to an EMI antenna657. The CR circuit656receives, as a first input, an electromagnetic interference (EMI) feedback signal from the EMI antenna657and regulates the discharge current flow through the active electrodes based on the EMI feedback signal to maintain the AEPD profile over the discharge operation even while in the presence of an EMI event. The EMI feedback signal is indicative of a voltage gradient created at the EMI antenna657based at least in part on an electromagnetic field surrounding the NS system. As explained hereafter, the CR circuit656may comprise an error amplifier and a transistor. The error amplifier is configured to hold the EMI feedback signal at a reference voltage and provide an output based thereon, while the transistor is configured to regulate the discharge current flow through the active electrodes based on the output of the error amplifier to maintain the AEPD profile while in the presence of the EMI event. The NS system650further comprises a reference voltage source658that is configured to supply a reference voltage signal as a second input to the CR circuit656. The CR circuit656regulates the current flow through the cathode electrode (E1) based on a difference between the EMI feedback signal and the reference voltage signal. A voltage multiplier660is connected to, and controlled by, the control circuit654. The voltage multiplier660defines an output voltage of the NS system650when delivering the NS therapy. The control circuit654, CR circuit656, reference voltage source658and voltage multiplier660are hermetically sealed within the housing of an IPG. The EMI antenna657and the active cathode electrode E1may be configured to have substantially similar electrical properties. The signal from the EMI antenna657is supplied to the CR circuit656to allow the CR circuit656to manage discharge of residual voltage built up across the load of the IPG after stimulation. The CR circuit656manages discharge of the residual voltage such that the AEPD profile of the discharge is substantially similar to the discharge profile exhibited during a passive discharge operation. While the present example is described in connection with delivery of a single series of stimulation pulses, followed by a single discharge interval, it is understood that the control circuit654is configured to deliver the NS therapy repeatedly over successive therapy delivery intervals that are separated by corresponding successive discharge operations while in the presence of the EMI event. The CR circuit656is configured to modulate the discharge current flow over one or more of the discharge operations, based on the EMI feedback signal, in order to follow a common or multiple AEPD profiles to compensate for voltage fluctuation caused by the EMI event. 
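The AEPD profile referred to above is, in essence, a discharge current that decays the way a passive discharge through the electrode/tissue RC load would. The sketch below only computes the shape of such a target profile for assumed load values; it does not represent how the analog feedback loop achieves that shape in hardware.

```python
# Hedged sketch of an "actively emulated passive discharge" (AEPD) profile:
# an exponentially decaying discharge current mimicking a passive RC
# discharge. The residual voltage and load values below are assumptions.
import math

def aepd_current_profile(v_residual, r_load_ohm, c_farad, dt_s, duration_s):
    """Return (time, current) samples of an exponentially decaying discharge."""
    tau = r_load_ohm * c_farad                 # passive RC time constant being emulated
    samples = []
    t = 0.0
    while t <= duration_s:
        i = (v_residual / r_load_ohm) * math.exp(-t / tau)
        samples.append((t, i))
        t += dt_s
    return samples

# Example: 5 V residual voltage across a 1 kohm / 0.1 uF electrode interface.
for t, i in aepd_current_profile(v_residual=5.0, r_load_ohm=1000.0,
                                 c_farad=0.1e-6, dt_s=25e-6, duration_s=200e-6):
    print(f"t={t*1e6:5.0f} us  i_AEPD={i*1e3:6.3f} mA")
```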
In accordance with embodiments herein, the NS system650does not need to be programmed with a particular discharge profile and does not require any prior knowledge of the level of the residual voltage across the load after stimulation. Also, the NS system650does not need model parameters for the load to effectively and efficiently discharge the load with an exponential decreasing discharge current that closely emulates or mimics the discharge current during a passive discharge operation. The NS system650maintains a high impedance electrical loop between the IPG case and the active stimulation electrode(s) during a patient MRI scan and/or when subject to other types of EMI, to minimize stimulation interference and other concerns. By achieving a high impedance electrical loop behavior during discharge, the CR circuit656is able to manage an AEPD operation after stimulation while avoiding degradation of patient therapy from EMI. The CR circuit656also mitigates patient safety concerns while allowing stimulation therapy to be continuously delivered during an MRI scan and/or in the presence of other EMI events. Other benefits of the embodiments herein include: 1) alleviating a need for a large amount of IPG memory, which would otherwise be necessary to store numerous digital parameters or values (e.g., the digital representation of the amplitude settings for the CR circuit) to control an exponentially decreasing discharge current; 2) alleviating a need for a complex digital state machine, which would otherwise be necessary to control the timing and reading of parameters for controlling the discharge current; 3) alleviating a need for extracting a model for the IPG load, which would otherwise be necessary to determine the control parameters for emulating passive discharge; 4) eliminating the effects of model errors which could introduce undesirable stimulation artifacts or could further degrade stimulation efficiency or efficacy; and 5) alleviating the need for an extensive number of calculations needed for computing the control parameters required for the CR circuit to otherwise emulate passive discharge. FIG.6Billustrates a more detailed schematic diagram of the block diagram ofFIG.6Aformed in accordance with embodiments herein. The CR circuit602is shown as configured for post-stimulation discharge to control an actively emulated passive discharge (AEPD) of charge buildup at two or more electrodes while the system is in a presence of MRI/EMI interference. The CR circuit602includes an error amplifier620, an output of which drives the gate of a MOSFET transistor622. The error amplifier620is configured to hold the EMI feedback signal at a reference voltage and provide an output based thereon, while the transistor622is configured to regulate the discharge current flow through the active electrodes based on the output of the error amplifier to maintain the AEPD profile while in the presence of the EMI event. The error amplifier620may be implemented as an operational amplifier or other equivalent circuit. The error amplifier620is configured to handle moderately large residual voltages that may build up after stimulation. By way of example, a residual voltage of 5-10 V may build up across the load of the NS system after DBS therapy delivery. The transistor622is coupled between the case electrode and a variable resistor624. The transistor622regulates discharge current flow from the active electrodes, through the drain and source of the transistor622and the variable resistor624. 
The transistor622regulates the discharge current flow based on the output of the error amplifier620. The output of the error amplifier620varies based on the voltage difference between the input terminals thereof. During an AEPD operation, one input terminal (e.g., the positive terminal) of the error amplifier620is coupled to the EMI antenna657through a high voltage (HV) level shift capacitor616, while the other input terminal (e.g., the negative terminal) of the error amplifier620is connected to a common mode voltage reference source618. The voltage reference source618is controlled by the microprocessor or other control circuit to maintain a common mode reference voltage at the input terminal of the error amplifier620during the AEPD operation. By way of example, the microprocessor or other control circuit may selectively decrease the reference voltage maintained at the reference source618(e.g., through a series of downward voltage steps) over the duration of the AEPD operation. The HV level shift capacitor616is configured to absorb large voltages that may be introduced into the circuit. For example, the HV level shift capacitor616is configured to offset/absorb a difference between the multiplier voltage VM (which was introduced at the active anode electrode case during stimulation) and the common mode voltage VCM (provided to the negative input terminal of the error amplifier620). The HV level shift capacitor616is coupled to the positive input terminal of the error amplifier620at the end of a stimulation phase, which corresponds to the beginning of a discharge phase. Before the HV level shift capacitor616is connected to the positive input terminal of the error amplifier620, the capacitor616has already been charged with a desired voltage corresponding to the difference of the multiplier voltage VM and common mode voltage VCM. At the end of the discharge phase, the capacitor616is removed from the connection to the positive input terminal of the error amplifier620, before the next stimulation phase. The CR circuit602is connected to one or more active electrodes E1, the EMI antenna657and an IPG housing electrode denoted “Case”. The “active” electrodes E1represent electrodes utilized to deliver stimulation in one or more types of therapy. The EMI antenna657is configured to deliver feedback to the error amplifier620(e.g., an operational amplifier) regarding MRI/EMI interference experienced at the EMI antenna657. The feedback may be provided in the form of a voltage fluctuation over time that is caused by the interference. The EMI antenna657may have substantially similar dimensions and electrical characteristics as the active electrode E1such that the EMI antenna657is configured to provide substantially the same common mode interference as experienced at the active electrode E1. The EMI antenna657may be configured as a “Kelvin connect” electrode for use during AEPD. It should be recognized that the EMI antenna657may not “perfectly” cancel out the interference caused by EMI events. However, the EMI antenna657will assist in minimizing the deleterious effects of EMI artifacts for DBS therapy applications which utilize the Case as a stimulation electrode, commonly referred to as a monopolar stimulation configuration. 
Any remaining minor stimulation interference during AEPD caused by mismatched EMI artifacts on different electrodes is typically within tolerance ranges of conventional systems, without the need for the burdensome bipolar configuration IPG programming solutions that are commonly necessary to maintain DBS therapy delivery during an MRI scan. In the example ofFIG.6B, the IPG is exposed to MRI and/or EMI interference, and accordingly, the schematic diagram also models MRI/EMI interference at electrode E1, and at the EMI antenna657of the IPG. An interference source623is modeled as a voltage source that is introduced at the EMI antenna657when the EMI antenna657is exposed to MRI/EMI interference. An interference source632is similarly modeled as a voltage source or a current source that is introduced at the active electrode E1when the active electrode E1is exposed to the MRI/EMI interference. The magnitude of the voltage introduced by the interference sources623and632fluctuates over time in a substantially similar manner (although not identically) at the active electrode E1and the EMI antenna657. The EMI antenna657and active electrode E1may exhibit certain similar capacitive and resistive characteristics that are also modeled as shown inFIG.6B. For example, the active electrode E1may exhibit a resistance628of RL1and a capacitance630of 0.1 uF. The EMI antenna657may exhibit a similar resistance and a similar capacitance. Optionally, the EMI antenna657may be implemented as an “inactive” or unused electrode. E0represents electrodes that are not used for stimulation in connection with any type of therapy for the present patient. The electrode E0may also be referred to as a non-stimulation electrode as no therapy is delivered through the electrode E0. The inactive electrode E0is configured to deliver feedback to the error amplifier620(e.g., an operational amplifier) regarding MRI/EMI interference experienced at the inactive electrode E0. The feedback may be provided in the form of a voltage fluctuation over time that is caused by the interference. The inactive electrode E0has substantially similar dimensions and characteristics as the active electrode E1such that the inactive electrode E0is configured to provide substantially the same common mode interference as experienced at the active electrode E1. The inactive electrode E0may be configured as a “Kelvin connect” electrode for use during AEPD. It should be recognized that the Kelvin connect electrode may not “perfectly” cancel out the interference caused by EMI events. However, the Kelvin connect electrode will assist in minimizing the deleterious effects of EMI artifacts for DBS therapy applications which utilize the Case as a stimulation electrode, commonly referred to as a monopolar stimulation configuration. Any remaining minor stimulation interference during AEPD caused by mismatched EMI artifacts on different electrodes is typically within tolerance ranges of conventional NS systems, without the need for the burdensome bipolar configuration IPG programming solutions that are commonly necessary to maintain DBS therapy delivery during an MRI scan. Optionally, when a non-electrode wire is used as the EMI antenna, the wire may also be configured to have similar capacitive and resistive characteristics as the active electrodes E1, E2, such that resistances are substantially similar and the capacitances are substantially similar. 
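The matching described above is what allows the interference to cancel as a common-mode term. The toy model below (assumed sinusoidal interference and an assumed 2% pickup mismatch, not measured behavior) shows that when the EMI antenna and the active electrode pick up nearly the same voltage, the difference seen by the feedback path is small.

```python
# Toy model of common-mode rejection: matched pickup at the EMI antenna and
# the active electrode leaves only a small residual for the feedback loop.
import math

def interference(t_s, amplitude_v=0.8, freq_hz=1e5):
    """Stand-in for an EMI-induced voltage; waveform and values are assumed."""
    return amplitude_v * math.sin(2 * math.pi * freq_hz * t_s)

mismatch = 0.02     # assumed 2% gain mismatch between antenna and electrode pickup

for step in range(6):
    t = step * 1e-6                               # sample every microsecond
    v_electrode = interference(t)                 # EMI-induced voltage at active electrode E1
    v_antenna = (1 - mismatch) * interference(t)  # nearly identical pickup at EMI antenna
    residual = v_electrode - v_antenna            # what the discharge feedback loop actually sees
    print(f"t={step} us  common-mode={v_electrode:+.3f} V  residual={residual:+.4f} V")
```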
In the example of FIG. 6B, the EMI antenna 657 is implemented as a non-electrode wire that is connected between the Case node and the positive input terminal of the error amplifier 620 through the high voltage level shift capacitor 616. The feedback control loop 642 extends in series through the interference source 623 and high-voltage level shift capacitor 616. Alternatively, if the EMI antenna 657 is implemented as an inactive electrode (e.g., E0), the feedback control loop 642 may be connected between the COMMON node 640 and the high-voltage level shift capacitor 616. The feedback control loop is largely immune to electrical characteristics of the active electrode and the EMI antenna. FIG. 6B also illustrates a DC blocking capacitor 626 that is modeled in series with the case electrode and a COMMON node 640. The DC blocking capacitor 626 is configured to prevent DC current flow through the Case electrode. During the AEPD operation, as charge is drawn from the Case electrode, a current iAEPD flows from the Case electrode to ground through the transistor 622 and variable resistor 624. The magnitude of the discharge current iAEPD varies over the duration of the AEPD operation in a manner that is controlled by the transistor 622. The active electrode E1 is coupled to a voltage multiplier 634 which is also used during a therapy phase in which stimulation is delivered, during which the voltage multiplier 634 delivers a multiplied voltage VM. The VM voltage may be different during the stimulation therapy and discharge phases. FIG. 7 illustrates a schematic diagram of a portion of the NS system when switched to the stimulation mode. In the embodiment of FIG. 7, the EMI antenna is implemented as an inactive electrode E0. Once the discharge operation is completed, the NS system switches back to the stimulation mode in which the next stimulus in this therapy may be delivered. In FIG. 7, inputs to the error amplifier 620 are reconfigured for stimulation. The first/positive input is coupled to a digital-to-analog converter (DAC) 710 that is managed by the control circuit to deliver a series of reference voltages corresponding to the NS therapy. The second/negative input of the error amplifier 620 is connected to node 712 which has the same voltage as the source terminal 714 of the transistor 622. The node 712 is located at the connection of the source terminal 714 of the transistor 622 and the variable resistor 624. The active electrode E1 is connected as a cathode to the drain terminal 716 of the transistor 622, while the electrode CASE is connected as an anode to a voltage multiplier 718. The EMI antenna 657, shown in FIG. 7 implemented as an inactive electrode E0, is disconnected from the error amplifier 620 and is allowed to electrically float, but the EMI antenna 657 can be connected to other circuitry (not shown) for monitoring the interference voltages caused by MRI/EMI. The voltage sources, capacitances and resistances illustrated between the electrode E1, CASE, EMI antenna, and the node denoted COMMON are merely presented to model the electrical characteristics experienced at the electrode E1, CASE, EMI antenna and COMMON node during a stimulation operation. FIG. 8A illustrates an example of voltage potentials experienced at select points in an NS system formed in accordance with embodiments herein before, during and after delivery of an NS therapy in the presence of EMI. 
The example of FIG. 8A assumes that the EMI antenna is implemented as an inactive electrode E0. By way of example, the timing diagram plots voltage along the vertical axis at various points within the NS system and time along the horizontal axis. In the timing diagram, the stimulation time period 806 corresponds to an interval in which a stimulation current pulse 802 is delivered as part of an NS therapy, while the discharge intervals 804 and 808 correspond to time periods before and after the stimulation current pulse 802. In the present example, the stimulation current pulse is approximately 90 μs in duration. FIG. 8B illustrates an enlarged view of the voltage potentials within the portion of FIG. 8A immediately before, during and immediately after delivery of the stimulation pulse 802. The NS system operates with internal circuitry in the therapy delivery configuration of FIG. 7 during the stimulation time period 806 and operates with the discharge configuration of FIG. 6B during the discharge intervals 804, 808. The NS system alternately operates in the configurations as shown in FIGS. 6B and 7 when it changes between the stimulation interval and discharge intervals. With reference to FIGS. 6B and 8A, during the discharge intervals 804 and 808, voltage source 634 is fixed at a predetermined voltage and thus maintains the predetermined voltage at electrode E1. In the present example, the voltage source 634 is set to a voltage of 3.6 V and therefore a voltage of approximately 3.6 V is maintained at E1 (as denoted by reference voltage 812) throughout the discharge interval/operation. The interference sources 610 and 632 represent models of voltages introduced by the EMI to cause an EMI-induced voltage 810 to vary in a sinusoidal manner at the nodes denoted COMMON, CASE and CATHODE during the discharge intervals 804 and 808. The COMMON node 640 and CASE node are on opposite sides of DC blocking capacitor 626, while the CATHODE node corresponds to the drain terminal of the MOSFET transistor 622 during stimulation. Outside of stimulation, the EMI causes the voltage at the COMMON node 640 to shift upward and downward to follow the EMI-induced voltage 810, centered at the reference voltage 812. Similarly, outside of stimulation, the voltage at the CASE electrode/node moves upward and downward to follow the EMI-induced voltage 810, centered at the reference voltage 812. During discharge, electrode E0 also maintains a voltage substantially corresponding to the reference voltage at electrode E1, as defined by the voltage source 634. Except for a short time period immediately after stimulation when the discharge current first starts to flow, electrodes E0 and E1 are substantially maintained at the same voltage, corresponding to the voltage source 634, given that the electrodes E0 and E1 experience substantially similar interference voltages (e.g., in phase, frequency and amplitude) which are induced by the EMI, as denoted by substantially similar interference sources 610, 632. To better understand why the nodes E0 and E1 substantially maintain the same voltage, consider the following example. The voltage source 634 defines the voltage at E1. EMI creates a voltage interference, modeled as the interference source 632, which causes the voltage at the COMMON node 640 to move up and down. The voltage at node E0 does not move up and down with the voltage at the COMMON node 640, because the EMI also creates a voltage interference modeled as the interference source 610. The interference source 610 effectively cancels out the voltage fluctuations at the COMMON node 640. 
It should be noted that the interference sources610and632are oriented such that a common polarity (e.g., the positive polarity) of both is directed towards the COMMON node640, while the opposite interference source polarity (e.g., the negative polarity) is directed toward nodes E0and E1. By utilizing the unused electrode E0for feedback to the positive input of error amplifier620during the discharge operation, the unused electrode E0substantially cancels out EMI induced current flow during the discharge phase. Continuing with the signal diagrams ofFIG.8A, a voltage814during discharge phase804is maintained at nodes denoted VFDBK (e.g., feedback voltage), VDAC and VSCALE immediately prior to stimulation. The VFDBK node corresponds to the positive input terminal of the error amplifier620, the VDAC node corresponds to the output of the voltage reference source618, and the VSCALE node corresponds to the voltage across the resistor624. During discharge phase808immediately after stimulation, the voltages at nodes VDAC and VFDBK remain substantially the same as during the discharge phase804, but node VSCALE exhibits an exponentially decreasing voltage characteristic which is consonant with the flow of the intended AEPD current. Turning toFIG.8B, the voltage signals during the discharge intervals804and808correspond to the signals described in connection withFIG.8A. During a stimulation time period806, the voltages at the various nodes (e.g., CASE, COMMON, E0, E1, CATHODE, VFDBK, VSCALE, and VDAC) differ as noted inFIG.8B. During delivery of a stimulation pulse802, the COMMON node640is substantially held to a constant voltage noted at816. At the beginning of a stimulation pulse, the electrode E0has a voltage substantially similar to the voltage at the COMMON node640but decreases over time along an inverse ramp818over the duration of the stimulation pulse. By way of example, the rate at which the voltage at E0decreases may generally represent an inverse of a rate of increase in the EMI-induced voltage. During the stimulation phase806, the voltage at node E1begins at a voltage level below the voltage at node COMMON which corresponds to the IR voltage drop across E1electrode resistance RL1, then the voltage at node E1decreases over the course of the stimulation pulse as current is delivered (due to the build-up of charge on the 0.1 uF electrode/tissue interface capacitance associated with electrode E1). The methods and systems described herein are utilized generally in connection with monopolar stimulation techniques, in which the IPG CASE is utilized as an electrode. Closing It may be noted that the various embodiments may be implemented in hardware, software or a combination thereof. The various embodiments and/or components, for example, the modules, or components and controllers therein, also may be implemented as part of one or more computers or processors. The computer or processor may include a computing device, an input device, a display unit and an interface, for example, for accessing the Internet. The computer or processor may include a microprocessor. The microprocessor may be connected to a communication bus. The computer or processor may also include a memory. The memory may include Random Access Memory (RAM) and Read Only Memory (ROM). The computer or processor further may include a storage device, which may be a hard disk drive or a removable storage drive such as a solid-state drive, optical disk drive, and the like. 
The storage device may also be other similar means for loading computer programs or other instructions into the computer or processor. As used herein, the term “computer,” “subsystem,” “controller circuit,” “circuit,” or “module” may include any processor-based or microprocessor-based system including systems using microcontrollers, reduced instruction set computers (RISC), ASICs, logic circuits, and any other circuit or processor capable of executing the functions described herein. The above examples are exemplary only, and are thus not intended to limit in any way the definition and/or meaning of the term “controller circuit”. The computer, subsystem, controller circuit, circuit execute a set of instructions that are stored in one or more storage elements, in order to process input data. The storage elements may also store data or other information as desired or needed. The storage element may be in the form of an information source or a physical memory element within a processing machine. The set of instructions may include various commands that instruct the computer, subsystem, controller circuit, and/or circuit to perform specific operations such as the methods and processes of the various embodiments. The set of instructions may be in the form of a software program. The software may be in various forms such as system software or application software and which may be embodied as a tangible and non-transitory computer readable medium. Further, the software may be in the form of a collection of separate programs or modules, a program module within a larger program or a portion of a program module. The software also may include modular programming in the form of object-oriented programming. The processing of input data by the processing machine may be in response to operator commands, or in response to results of previous processing, or in response to a request made by another processing machine. As used herein, the terms “software” and “firmware” are interchangeable, and include any computer program stored in memory for execution by a computer, including RAM memory, ROM memory, EPROM memory, EEPROM memory, and non-volatile RAM (NVRAM) memory. The above memory types are exemplary only, and are thus not limiting as to the types of memory usable for storage of a computer program. It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, the above-described embodiments (and/or aspects thereof) may be used in combination with each other. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from its scope. While the dimensions, types of materials and coatings described herein are intended to define the parameters of the invention, they are by no means limiting and are exemplary embodiments. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements on their objects. 
Further, the limitations of the following claims are not written in means-plus-function format and are not intended to be interpreted based on 35 U.S.C. § 112(f), unless and until such claim limitations expressly use the phrase "means for" followed by a statement of function void of further structure.
69,164
11857790
Like reference characters refer to like elements throughout the figures and description.
DETAILED DESCRIPTION
Systems, devices, and techniques for modulating electrical stimulation delivered to a patient are described. Specifically, a stimulation generator may generate and deliver, via a medical lead, different phases of electrical stimulation to a patient. Such stimulation may be directed to certain regions of the brain of the patient during Deep Brain Stimulation (DBS), but similar stimulation may also be delivered as spinal cord stimulation (SCS), pelvic stimulation, peripheral nerve stimulation, muscle stimulation, etc., in other examples. In general, DBS therapy may involve monolithic electrical therapy that follows preset stimulation parameters, such as preset frequencies, amplitudes, or pulse width parameters. In this manner, a single pulse frequency, pulse width, and amplitude may define pulses delivered in an open loop configuration. Such electrical stimulation, however, may not provide therapy in an efficient manner, for example, because the delivery of such electrical stimulation is often applied indiscriminately to certain regions of the brain and without considering patient-specific conditions, conservation of electrical power, stimulation time, etc. before achieving a sought-after therapeutic result. In addition, such electrical stimulation may take longer to achieve a therapeutic result if the electrical activity of the brain does not have uniform or known parameters. As such, the electrical stimulation may be less effective for the patient, consume more electrical power, or otherwise reduce overall system performance in treating the patient. In other examples, electrical stimulators may use bursts of electrical energy (e.g., bursts of pulses) in an effort to influence electrical activity into exhibiting a particular behavior. Such bursts of electrical stimulation, however, may be applied indiscriminately in such a way that the electrical stimulation therapy may not effectively decouple and target smaller, more local activation regions within an entrained volume during brief sessions of electrical stimulation. In this way, such electrical stimulation may also consume extraneous electrical power in the process of providing therapy. The aforementioned issues, among others, may be addressed by the disclosed electrical stimulation modulation techniques by delivering entrainment stimulation pulses (e.g., priming phase pulses), followed by, or in parallel with, one or more desynchronization stimulation pulse(s) (e.g., a set of desynchronization phase pulses). Specifically, the stimulation generator may alternate between entrainment stimulation pulses configured to place at least a portion of the brain into a known electrical state and desynchronization stimulation pulses configured to disrupt at least a portion of the entrained electrical activity. In an example involving DBS, the stimulation generator may generate and deliver electrical stimulation therapy that includes delivery of priming phase pulses delivered at a first frequency and delivery of desynchronization phase pulse(s) delivered at a second frequency. In such examples, the entrainment stimulation pulses delivered by the stimulation generator may first cause entrainment of a volume of electrical activity, such as neuronal activity, using entrainment stimulation parameters. 
Subsequently, the stimulation generator may generate and deliver the desynchronization pulse(s) to target specific portions (e.g., neuronal subpopulations or a smaller volume) of the entrained population of neurons. Generally, entrainment occurs when the frequency of a bioelectrical signal aligns with an input frequency, such as the input frequency of electrical stimulation or of another stimulus (e.g., audible stimuli, etc.). For example, entrained bioelectrical signals may align with a temporal structure of the input stimuli. For example, the entrained bioelectrical signals may begin to transmit at a rhythm or frequency that matches or is approximately equal to a rhythm or frequency of the input stimuli. In another example of alignment, the entrained bioelectrical signals may synchronize with the input stimuli, such that the entrained bioelectrical signals transmit at a rhythm or frequency that is in synchrony with the input stimuli without necessarily matching or equaling the rhythm or frequency of the input stimuli. For example, the entrained bioelectrical signals may transmit out of phase with the input stimuli or at a different rate while aligning or maintaining alignment with a temporal structure of the input stimuli (e.g., frequency, rhythm, etc.). In the case of electrical stimulation, entrainment of bioelectrical signals, such as those in the brain, may occur when the waveform frequency of a bioelectrical signal aligns, or at least begins to align, with the frequency oscillation of the electrical stimulation. That is, entrainment generally refers to the phase alignment of brain oscillations with external stimuli. In some instances, the entrainment stimulation pulses may also provide therapy to the patient, such as alpha, delta, theta, beta, and/or gamma entrainment therapy, whereas in other examples, the entrainment stimulation pulses are not configured to provide therapy. In any case, the entrainment stimulation pulses are delivered according to parameters configured to at least entrain electrical activity (e.g., neuronal activity, cellular activity, etc.) in the patient. The stimulation generator may then generate and deliver desynchronization stimulation pulse(s) according to a set of stimulation parameters that is at least partially different from the set of stimulation parameters that defines the entrainment stimulation pulses (e.g., at least one of a different frequency, different electrode configuration, different amplitude, different pulse width, etc.). For example, the frequency of the desynchronization stimulation pulse(s) may be higher than the frequency of the entrainment stimulation pulses that entrained the electrical activity in the patient. In addition, or alternatively, the amplitude of the desynchronization stimulation pulse(s) may be lower than the amplitude of the entrainment stimulation pulses. As such, the electrical stimulation of the desynchronization pulse(s) may be configured to recruit a smaller and/or more local volume of activation (VOA) relative to a VOA of the entrained electrical activity. In this way, the stimulation generator may provide therapeutic pulses configured to disrupt specific portions of the entrained electrical activity. This disruption of the entrained electrical activity may promote a reduction of symptoms related to movement disorders, such as a reduction in tremor. 
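The two-phase pattern described above, a priming (entrainment) phase at one frequency followed by a desynchronization phase at a higher frequency and lower amplitude, can be sketched as a simple pulse-time generator. The parameter values below are illustrative assumptions only, not values taken from the disclosure.

```python
# Hedged sketch of a priming phase followed by a desynchronization phase
# delivered at a higher frequency and lower amplitude. Values are assumed.
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    frequency_hz: float
    amplitude_ma: float
    pulse_width_us: int
    duration_s: float

def pulse_times(phases):
    """Yield (time_s, phase) for every pulse in the concatenated phases."""
    t0 = 0.0
    for phase in phases:
        period = 1.0 / phase.frequency_hz
        n = int(phase.duration_s * phase.frequency_hz)
        for k in range(n):
            yield t0 + k * period, phase
        t0 += phase.duration_s

pattern = [
    Phase("priming (entrainment)",   frequency_hz=20.0,  amplitude_ma=3.0, pulse_width_us=90, duration_s=0.5),
    Phase("desynchronization burst", frequency_hz=130.0, amplitude_ma=1.5, pulse_width_us=60, duration_s=0.1),
]

for t, phase in list(pulse_times(pattern))[:12]:
    print(f"t={t*1000:7.2f} ms  {phase.name:24s} {phase.amplitude_ma} mA @ {phase.frequency_hz} Hz")
```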
In some examples, the stimulation generator may generate and deliver the desynchronization stimulation pulse(s) to one or more of the same electrodes selected to deliver the entrainment stimulation. In another example, the stimulation generator may generate and deliver the desynchronization stimulation pulse(s) to one or more different electrodes, or, in some instances, to only some of the same electrodes used to deliver the entrainment stimulation along with a different combination of electrodes as well. The desynchronization stimulation pulse(s) may be used to disrupt the electrical activity entrained by the entrainment stimulation pulses, or at least a portion of the entrained electrical activity, in order to provide therapy to the patient, such as for a patient suffering from a neurological disorder (e.g., Parkinson's disease, essential tremor, epilepsy, etc.). For example, the desynchronization stimulation pulse(s) may be used to destructively interfere with the entrained electrical activity. In any case, providing electrical stimulation in accordance with the various techniques of this disclosure may allow a medical device to provide improved electrical stimulation therapy by first entraining the electrical activity and then disrupting the entrained activity for therapeutic purposes, such as to destroy patient-specific network synchrony and/or to allow for the natural evolution of neuro-population resynchronization to occur. For example, the stimulation generator may provide desynchronization pulse(s) that cause a local neuronal region to spatiotemporally decouple from a larger network recruited through delivery of the entrainment stimulation pulses, at which point the stimulation generator may continue to provide patient-tailored desynchronization pulse(s) and/or entrainment pulses throughout the duration of a given therapy session. In some examples, stimulation parameters for either the entrainment stimulation pulses and/or the desynchronization stimulation pulse(s) may be based on patient-specific biomarkers, such as measured frequencies in the local field potential (LFP) of the patient associated with symptoms (e.g., tremor), electrophysiological markers, physical patient movement sensors (e.g., one or more accelerometers), other electrical brain signals, etc. These biomarkers may thus be indicative of features, characteristics, or other aspects of a physiological signal sensed by one or more devices. In some instances, the physiological signals and/or patient-specific biomarkers may be received from an external device, such as a wearable device. For example, processing circuitry, such as that of the stimulation generator, may determine stimulation parameters for the stimulation pulses from sensed physiological signals, biomarkers of the patient, and/or from the specifically targeted region of the patient. The processing circuitry may then adjust one or more stimulation parameters (e.g., adjust a value of one or more respective stimulation parameters) accordingly to deliver the stimulation pulse(s). The stimulation generator may use the biomarkers to establish initial stimulation parameters with which stimulation can proceed in an open loop configuration. 
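One way to picture the biomarker-driven selection of initial, open-loop stimulation parameters described above is a simple lookup keyed on a measured biomarker, such as the dominant LFP frequency. The thresholds, parameter names, and returned values in this sketch are purely hypothetical and are not taken from this disclosure.

```python
def initial_parameters_from_biomarker(lfp_peak_hz: float) -> dict:
    """Map a patient-specific biomarker (here, the dominant LFP frequency) to an
    initial set of entrainment and desynchronization parameters for open-loop
    operation. Thresholds and values are illustrative placeholders."""
    if 13.0 <= lfp_peak_hz <= 30.0:    # beta-band activity, often associated with motor symptoms
        return {"entrain_hz": 130.0, "desync_hz": 180.0, "amplitude_ma": 2.0}
    if 4.0 <= lfp_peak_hz < 13.0:      # theta/alpha-band activity
        return {"entrain_hz": 60.0, "desync_hz": 100.0, "amplitude_ma": 1.5}
    return {"entrain_hz": 100.0, "desync_hz": 150.0, "amplitude_ma": 1.0}

print(initial_parameters_from_biomarker(lfp_peak_hz=20.0))
```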
The processing circuitry of the stimulation generator may tailor the alternating pulse pattern (e.g., pulse frequency and/or duration), and transitions between pulse patterns, based on the patient-specific biomarkers, but the processing circuitry may not necessarily modulate those parameters during stimulation based on the biomarkers or any other feedback indicating efficacy of the stimulation. In some examples, the stimulation generator may interleave a rest phase between the entrainment stimulation pulses and the desynchronization pulses. In such examples, parameters for the rest phase (e.g., duration of the rest phase) may also be based on one or more biomarkers of the patient. In a closed loop configuration, the system may additionally or alternatively tailor at least one stimulation parameter that at least partially defines the entrainment stimulation pulses, desynchronization stimulation pulses, and/or any other aspect of the pattern of pulses based on one or more biomarkers and/or feedback received regarding the efficacy of the stimulation pulses. The biomarkers may be associated with brain signals, movement sensors, or any other physiological signal. As another example, the feedback may be based on an indication as to the effectiveness of the desynchronization pulses in activating specific regions of the brain or decoupling specific VOAs from the entrained VOA. Feedback may be achieved through the use of a stimulation lead that is also capable of providing sensing capabilities. In some examples, electrical stimulation may be delivered by a medical device to the brain of the patient to manage or otherwise treat one or more symptoms of a patient disorder. The brain of the patient may exhibit brain signals across a broad frequency spectrum. However, in some examples, oscillation of bioelectrical brain signals at a particular frequency or in a frequency band or range may be associated with one or more symptoms or brain states of a patient disorder. An example brain state may include a sleep state of a patient. For example, bioelectrical brain signals oscillating in a particular frequency range may be associated with one or more symptoms of a patient disorder in the sense that such symptoms frequently occur or manifest themselves when the bioelectrical brain signals oscillate at such a frequency range. Such occurrences may be a result of the brain signal oscillations within one or more regions of the brain of a patient interfering with the normal function of that region of the brain. As used herein, a frequency or range of frequencies may be referred to as a pathological frequency or pathological frequency range when oscillations of brain signals at such frequency or frequencies are associated in such a manner with one or more symptoms of a patient disorder. Similarly, bioelectrical brain signals oscillating at one or more pathological frequencies may be referred to as pathological brain signals. As one example, in the case of Parkinson's disease, beta frequency oscillations (e.g., between approximately 13 Hertz to approximately 30 Hertz) in the subthalamic nucleus (STN), globus pallidus interna (GPi), globus pallidus externa (GPe), and/or other areas of the basal ganglia may be associated with one or more motor symptoms including, e.g., rigidity, akinesia, bradykinesia, dyskinesia, and/or resting tremor. 
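The optional rest phase interleaved between the entrainment pulses and the desynchronization pulses, with its duration derived from a patient biomarker, can be sketched as an open-loop phase sequence as follows. The biomarker-to-duration mapping and all numeric values are illustrative assumptions only.

```python
from typing import List, Tuple

def compose_phase_sequence(beta_power: float,
                           entrain_s: float = 1.0,
                           desync_s: float = 0.5) -> List[Tuple[str, float]]:
    """Build an open-loop sequence of (phase name, duration in seconds). A rest
    phase is interleaved between entrainment and desynchronization; its duration
    shrinks as the (hypothetical) beta-band biomarker grows, so that more of the
    cycle is spent actively stimulating."""
    rest_s = max(0.1, 0.5 - 0.01 * beta_power)  # illustrative mapping only
    return [("entrainment", entrain_s), ("rest", rest_s), ("desynchronization", desync_s)]

for phase, duration in compose_phase_sequence(beta_power=25.0):
    print(f"{phase:16s} {duration:.2f} s")
```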
In the case of epilepsy, beta frequency oscillations may occur within one or more sites within the Circuit of Papez, including, e.g., anterior nucleus, internal capsule, cingulate, entorhinal cortex, hippocampus, fornix, mammillary bodies, or mammillothalamic tract (MMT). These motor symptoms may be associated with bioelectrical brain signals oscillating in the beta frequency range in the sense that such symptoms frequently occur when the bioelectrical brain signals oscillate within the beta frequency range. For example, persistence of high amplitude, long duration oscillation in the beta frequency range may result in oscillatory “interference” with normal low amplitude, short duration beta oscillations within the brain. Such interference may limit the normal functions of the above-mentioned regions of the brain. The high amplitude, long duration oscillations of the bioelectrical brain signals may be at a lower frequency than other higher frequency intrinsic signals within the bioelectrical brain signals. Networks of oscillating signals in neurons may be synchronized by electrical and chemical signals that cause the activity of the network to phase lock and resonate at some frequency. In some examples, the symptoms of Parkinson's disease or epilepsy may generally manifest themselves in conjunction with the presence of high amplitude, long duration beta frequency range oscillations. In some examples, the frequency of symptom manifestations may increase in conjunction with the presence of high amplitude, long duration beta frequency range oscillations. In further examples, gamma oscillations (e.g., oscillations comprising a frequency of about 35 Hertz to 200 Hertz) may occur in the hippocampus. Such gamma oscillations may also be associated with one or more symptoms of a patient disorder. In further examples, other high frequency oscillations comprising a frequency within a range of 100 Hertz to 500 Hertz may be associated with one or more symptoms of a patient disorder. As described herein, the desynchronization stimulation pulses delivered with different pulse frequency, pulse width, and/or amplitude may disrupt the entrained electrical signal and the oscillations associated with patient symptoms. FIG.1is a conceptual diagram illustrating an example therapy system10in accordance with examples of the disclosure. InFIG.1, example therapy system10may deliver electrical stimulation therapy to treat or otherwise manage a patient condition, such as, e.g., a movement disorder of patient12. One example of a movement disorder treated by the delivery of DBS via system10may include Parkinson's disease or epilepsy. Patient12ordinarily will be a human patient. In some cases, however, therapy system10may be applied to other mammalian or non-mammalian non-human patients. For ease of illustration, examples of the disclosure will primarily be described with regard to the treatment of movement disorders and, in particular, the treatment of Parkinson's disease, e.g., by reducing or preventing the manifestation of symptoms exhibited by patients suffering from Parkinson's disease. As noted above, such symptoms may include rigidity, akinesia, bradykinesia, dyskinesia, and/or resting tremor. However, the treatment of one or more patient disorders other than that of Parkinson's disease by employing the techniques described herein is contemplated. 
For example, the described techniques may be employed to manage or otherwise treat symptoms of other patient disorders, such as, but not limited to, epilepsy, psychological disorders, mood disorders, seizure disorders or other neurodegenerative impairment. In one example, such techniques may be employed to provide therapy to a patient to manage Alzheimer's disease. Therapy system10includes medical device programmer14, implantable medical device (IMD)16, lead extension18, and one or more leads20A and20B (collectively “leads20”) with respective sets of electrodes24,26. IMD16includes stimulation therapy circuitry that includes a stimulation generator that generates and delivers electrical stimulation therapy to one or more regions of brain28of patient12via a subset of electrodes24,26of leads20A and20B, respectively. In the example shown inFIG.1, therapy system10may be referred to as a deep brain stimulation (DBS) system because IMD16provides electrical stimulation therapy directly to tissue within brain28, e.g., a tissue site under the dura mater of brain28. In other examples, leads20may be positioned to deliver therapy to a surface of brain28(e.g., the cortical surface of brain28). In some examples, delivery of stimulation to one or more regions of brain28, such as an anterior nucleus (AN), thalamus or cortex of brain28, provides an effective treatment to manage a disorder of patient12. In some examples, IMD16may provide cortical stimulation therapy to patient12, e.g., by delivering electrical stimulation to one or more tissue sites in the cortex of brain28. In cases in which IMD16delivers electrical stimulation to brain28to treat Parkinson's disease by disrupting entrained brain signals, target stimulation sites may include one or more basal ganglia sites, including, e.g., subthalamic nucleus (STN), globus pallidus interna (GPi), globus pallidus externa (GPe), pedunculopontine nucleus (PPN), thalamus, substantia nigra pars reticulata (SNr), internal capsule, and/or motor cortex. In cases in which IMD16delivers electrical stimulation to brain28to treat epilepsy by disrupting entrained brain signals, target stimulation sites may include one or more sites within the Circuit of Papez, including, e.g., anterior nucleus, internal capsule, cingulate, entorhinal cortex, hippocampus, fornix, mammillary bodies, or MMT. Brain signals with oscillations in the beta frequency range may be considered pathological brain signals. As will be described below, IMD16may deliver electrical stimulation pulses configured to entrain electrical activity and then disrupt the entrained electrical activity based on the frequency of the pathological brain signal. In an illustrative example involving a particular disease (e.g., Parkinson's disease), IMD16may entrain, via DBS, brain oscillations at 130 Hertz stimulation (e.g., F_stim). The entrained brain oscillation may be found in the STN and cortex of patient12. As such, the entrained brain oscillations in patient12may be observed in the STN and cortex at half of the entrainment stimulation frequency (e.g., 65 Hertz in this particular example). In such instances, IMD16may then disrupt the entrained brain oscillations (e.g., the entrained oscillations comprising half of the F_stim) by delivering a set of desynchronization pulses as described herein. In another example, the pathological frequency range is a beta frequency range of about 11 Hertz to about 35 Hertz. 
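The 130 Hertz / 65 Hertz example above suggests a simple relationship in which the entrained oscillation is observed at half of the stimulation frequency F_stim. The short sketch below encodes that relationship as a check against an observed LFP peak; the function names and the tolerance value are illustrative assumptions.

```python
def expected_entrained_hz(f_stim_hz: float) -> float:
    """Per the example above, the entrained oscillation appears at half of F_stim."""
    return f_stim_hz / 2.0

def is_entrained(observed_peak_hz: float, f_stim_hz: float,
                 tolerance_hz: float = 2.0) -> bool:
    """Return True if the observed LFP peak sits near the expected subharmonic."""
    return abs(observed_peak_hz - expected_entrained_hz(f_stim_hz)) <= tolerance_hz

print(expected_entrained_hz(130.0))                            # 65.0
print(is_entrained(observed_peak_hz=64.2, f_stim_hz=130.0))    # True
```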
For examples in which IMD16senses the bioelectrical brain signals at one or more sites of brain28to receive feedback from the patient and/or tailor the stimulation pulses based on patient-specific biomarkers, the target stimulation site(s) for electrical stimulation delivered to brain28of patient12may be the same as and/or different from the sensing site. In the example shown inFIG.1, IMD16may be implanted within a subcutaneous pocket above the clavicle of patient12. In other examples, IMD16may be implanted within other regions of patient12, such as a subcutaneous pocket in the abdomen or buttocks of patient12or proximate the cranium of patient12. Implanted lead extension18is coupled to IMD16via connector block30. In some examples, electrical contacts may electrically couple the electrodes24,26carried by leads20to IMD16. Lead extension18traverses from the implant site of IMD16and through the cranium of patient12to access brain28. IMD16may comprise a hermetic housing17to substantially enclose components, such as processing circuitry, sensing circuitry, memory, etc. Leads20A and20B may be implanted within the right and left hemispheres, respectively, of brain28in order to deliver electrical stimulation to one or more regions of brain28, which may be selected based on many factors, such as the type of patient condition that therapy system10is implemented to manage. Other implant sites for leads20and IMD16are contemplated. For example, IMD16may be implanted on or within cranium32or leads20may be implanted within the same hemisphere or IMD16may be coupled to a single lead that is implanted in one or both hemispheres of brain28. Leads20may be positioned to deliver electrical stimulation to one or more target tissue sites within brain28to manage patient symptoms associated with a disorder of patient12. Leads20may be implanted to position electrodes24,26at desired locations of brain28through respective holes in cranium32. Leads20may be placed at any location within brain28such that electrodes24,26are capable of providing electrical stimulation to target tissue sites within brain28during treatment. For example, in the case of Parkinson's disease, leads20may be implanted to deliver electrical stimulation to one or more basal ganglia sites, including, e.g., subthalamic nucleus (STN), globus pallidus interna (GPi), globus pallidus externa (GPe), pedunculopontine nucleus (PPN), thalamus, substantia nigra pars reticulata (SNr), internal capsule, and/or motor cortex. As another example, in the case of epilepsy, leads20may be implanted to deliver electrical stimulation to one or more sites within the Circuit of Papez, including, e.g., anterior nucleus, internal capsule, cingulate, entorhinal cortex, hippocampus, fornix, mammillary bodies, or MMT. Although leads20are shown inFIG.1as being coupled to a common lead extension18, in other examples, leads20may be coupled to IMD16via separate lead extensions or directly coupled to IMD16. Moreover, althoughFIG.1illustrates system10as including two leads20A and20B coupled to IMD16via lead extension18, in some examples, system10may include one lead or more than two leads. Leads20may deliver electrical stimulation to treat any number of neurological disorders or diseases in addition to movement disorders, such as seizure disorders or psychiatric disorders. 
Examples of movement disorders include a reduction in muscle control, motion impairment or other movement problems, such as rigidity, bradykinesia, rhythmic hyperkinesia, nonrhythmic hyperkinesia, dystonia, tremor, and akinesia. Movement disorders may be associated with patient disease states, such as Parkinson's disease, Huntington's disease, or epilepsy. Examples of psychiatric disorders include MDD, bipolar disorder, anxiety disorders, post-traumatic stress disorder, dysthymic disorder, and OCD. As described above, while examples of the disclosure are primarily described with regard to treating Parkinson's disease, treatment of other patient disorders via delivery of electrical stimulation to patient12is contemplated. Leads20may be implanted within a desired location of brain28via any suitable technique, such as through respective burr holes or through a common burr hole in the cranium32of patient12. Leads20may be placed at any location within brain28such that electrodes24,26of leads20are capable of providing electrical stimulation to targeted tissue during treatment. Electrical stimulation generated from the stimulation generator (not shown) of IMD16may help prevent the onset of events associated with the patient's disorder or mitigate symptoms of the disorder. For example, a first electrical stimulation pulse train delivered by IMD16to brain28may have a frequency (and/or other stimulation parameter values) configured to entrain electrical activity, whereas a second desynchronization electrical stimulation pulse or pulse train may have a frequency configured to disrupt the entrained bioelectrical brain signals. In the examples shown inFIG.1, electrodes24,26of leads20are shown as ring electrodes. Ring electrodes may deliver an electrical field to any tissue adjacent to leads20. In other examples, electrodes24,26of leads20may have different configurations. For example, electrodes24,26of leads20may have a complex electrode array geometry that is capable of producing shaped electrical fields. The complex electrode array geometry may include multiple electrodes (e.g., partial ring or segmented electrodes) around the perimeter of each lead20, rather than a ring electrode. In this manner, electrical stimulation may be directed to a specific direction from leads20to enhance therapy efficacy, target specific pathological regions using the desynchronization pulse(s), and/or target specific regions for entrainment using the entrainment stimulation pulses. In some examples, a lead may include one or more ring electrodes together with one or more rings of segmented electrodes. In addition, housing17of IMD16may include one or more stimulation and/or sensing electrodes. Furthermore, leads20may be paddle leads, spherical leads, cylindrical leads, bendable leads, or any other type of shape effective in treating patient12. IMD16may generate and/or deliver electrical stimulation therapy to brain28of patient12according to one or more stimulation parameters or parameter values that define entrainment stimulation pulses and one or more stimulation parameters or parameter values that define desynchronization stimulation pulse(s). Where IMD16delivers electrical stimulation in the form of electrical pulses, for example, the stimulation may be characterized by selected pulse parameters, such as pulse amplitude, pulse rate or frequency, pulse width, or number of pulses. 
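The pulse parameters and electrode configuration named above can be grouped, for illustration only, into a simple stimulation-program container. The field names, units, and example values below are hypothetical and not a definition of any program format used by the devices described in this disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class StimulationProgram:
    """Container for pulse amplitude, rate, and width, plus an electrode
    configuration mapping electrode index -> polarity ('+', '-', or 'off')."""
    pulse_amplitude_v: float
    pulse_rate_hz: float
    pulse_width_us: float
    electrode_polarity: Dict[int, str] = field(default_factory=dict)

# Hypothetical entrainment and desynchronization programs on a 4-electrode lead.
entrainment_program = StimulationProgram(3.0, 130.0, 90.0, {0: "-", 1: "+", 2: "off", 3: "off"})
desync_program = StimulationProgram(1.5, 180.0, 60.0, {2: "-", 3: "+", 0: "off", 1: "off"})
print(entrainment_program)
print(desync_program)
```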
Where IMD16delivers electrical stimulation in the form of a sinusoidal wave, for example, the stimulation may be characterized by selected sinusoidal parameters, such as amplitude or cycle frequency. In some examples as used herein, a “stimulation pulse” or pulse(s) may generally refer to either a digital signal that causes an analog waveform, such as the aforementioned sinusoidal wave, or may refer to electrical pulses, depending on the context. In some examples, when different electrodes are available for delivery of stimulation, the program may be further characterized by different electrode combinations, which can include selected electrodes and their respective polarities. The exact parameter values of the electrical stimulation may be specific for the particular target stimulation site (e.g., the region of the brain) involved, as well as the particular patient and patient condition. In addition to delivering electrical stimulation to manage a disorder of patient12, therapy system10monitors one or more bioelectrical brain signals of patient12. For example, IMD16may include sensing circuitry that senses bioelectrical brain signals within one or more regions of brain28. In the example shown inFIG.1, the signals generated by electrodes24,26are conducted to IMD16via conductors. As described in further detail below, in some examples, processing circuitry of IMD16may sense the bioelectrical signals within brain28of patient12and control generation or delivery of entrainment stimulation pulses and desynchronization pulse(s) to brain28via electrodes24,26. In some examples, the sensing circuitry of IMD16may receive the bioelectrical signals from electrodes24,26or other electrodes positioned to monitor brain signals of patient12. Electrodes24,26may also be used to deliver electrical stimulation to target sites within brain28as well as sense brain signals within brain28. However, IMD16can also use separate sensing electrodes to sense the bioelectrical brain signals. In some examples, the sensing circuitry of IMD16may sense bioelectrical brain signals via one or more of the electrodes24,26that are also used to deliver electrical stimulation to brain28. In other examples, one or more of electrodes24,26may be used to sense bioelectrical brain signals while one or more different electrodes24,26may be used to deliver electrical stimulation. In another example, system10may include an external device34, such as a wearable device (not shown) or external monitoring device, that receives patient-specific biomarkers, or in some cases, bioelectrical signals, from patient12. External device34may then transmit biomarker information and/or bioelectrical signal information, via telemetry circuitry, to programmer14, IMD16, and/or an external server (not shown) for further processing. As such, IMD16, external device34, and programmer14may interface with an external server via a network connection in order to perform the various techniques of this disclosure. In any case, the external device may include a wearable device worn on the wrist or ankle of patient12, a headpiece or earpiece worn on or proximate the head of patient12, a portable or mobile device configured to obtain biomarkers, or other external sensor devices (e.g., a smart phone having a sensor), etc. Depending on the particular stimulation electrodes and sense electrodes used by IMD16, IMD16may monitor brain signals and deliver electrical stimulation toward the same region of brain28or different regions of brain28. 
In some examples, the electrodes used to sense bioelectrical brain signals may be located on the same lead used to deliver electrical stimulation, while in other examples, the electrodes used to sense bioelectrical brain signals may be located on a different lead than the electrodes used to deliver electrical stimulation. In some examples, a brain signal of patient12may be monitored with external electrodes, e.g., scalp electrodes of external device34. Moreover, in some examples, the sensing circuitry that senses bioelectrical brain signals of brain28(e.g., the sensing circuitry that generates an electrical signal indicative of the activity within brain28) is in a physically separate housing from housing17of IMD16. However, in the example shown inFIG.1and the example primarily referred to herein for ease of description, the various circuitry of IMD16is enclosed within a common outer housing17. The physiological signals (e.g., bioelectrical brain signals) monitored by IMD16may reflect changes in electrical current produced by the sum of electrical potential differences across brain tissue. Examples of the monitored bioelectrical brain signals include, but are not limited to, an electroencephalogram (EEG) signal, an electrocorticogram (ECoG) signal, a local field potential (LFP) sensed from within one or more regions of brain28of patient12, action potentials from single cells within the patient's brain, and/or microelectrode recordings (MER) of single cells within brain28of patient12. These example bioelectrical brain signals, among other signals, may be used to identify one or more biomarkers. For example, processing circuitry40may identify one or more biomarkers within a raw or, in some cases, a filtered bioelectrical signal, physiological signal, etc. The one or more biomarkers may then be used to determine parameters of the entrainment stimulation pulses and/or the desynchronization stimulation pulse(s). In one example, processing circuitry40may identify a frequency of a physiological signal as a biomarker indicative of a disease or other problem of patient12. That is, biomarkers may include features or characteristics that processing circuitry, such as that of IMD16, may identify from one or more physiological signals. Programmer14wirelessly communicates with IMD16as needed to provide or retrieve electrical stimulation information. Programmer14is an external computing device that the user, e.g., the clinician and/or patient12, may use to communicate with IMD16. For example, programmer14may be a clinician programmer that the clinician uses to communicate with IMD16and program one or more electrical stimulation programs for IMD16. In some examples, programmer14may be a patient programmer that allows patient12to select programs and/or view and modify electrical stimulation parameters. Programmer14may be a hand-held computing device with a display viewable by the user and an interface for providing input to programmer14(i.e., a user input mechanism). For example, programmer14may include a display screen (e.g., a touch screen display) that presents information to the user. In addition, programmer14may include a touch screen, keypad, buttons, a peripheral pointing device or another input mechanism that allows the user to navigate through the user interface of programmer14and provide input. 
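A frequency-based biomarker of the kind described above, such as the dominant frequency of a sensed LFP segment, can be estimated with a standard spectral peak search. The sketch below runs on a synthetic 20 Hertz test signal; the sampling rate, window length, and noise level are arbitrary choices made only for illustration.

```python
import numpy as np

def dominant_frequency_hz(signal: np.ndarray, fs_hz: float) -> float:
    """Return the frequency of the largest spectral peak (mean/DC removed first)."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs_hz)
    return float(freqs[np.argmax(spectrum)])

fs = 250.0                                    # hypothetical sensing rate
t = np.arange(0, 2.0, 1.0 / fs)
lfp = np.sin(2 * np.pi * 20.0 * t) + 0.3 * np.random.randn(t.size)  # synthetic beta-band LFP
print(f"dominant frequency ~ {dominant_frequency_hz(lfp, fs):.1f} Hz")
```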
In some examples, programmer14may be configured to obtain physiological signals from patient12and identify one or more biomarkers for patient12from those signals, similar to how IMD16may obtain physiological signals from patient12and identify one or more biomarkers for patient12. In another example, programmer14or IMD16may be configured to receive biomarker information from an external device, such as from external device34. In other examples, programmer14may be a larger workstation or a separate application within another multi-function device, rather than a dedicated computing device. For example, the multi-function device may be a notebook computer, tablet computer, workstation, cellular phone, personal digital assistant or another computing device that may run an application that enables the computing device to operate as programmer14. A wireless adapter coupled to the computing device may enable communication between the computing device and IMD16. When programmer14is configured for use by the clinician, programmer14may be used to transmit initial programming information to IMD16. This initial information may include hardware information, such as the type of leads20, the arrangement of electrodes24,26on leads20, the position of leads20within brain28, programs defining electrical stimulation parameter values, and any other information that may be useful for programming into IMD16. The clinician may also store programs within IMD16with the aid of programmer14. During a programming session, the clinician may determine one or more electrical stimulation parameters that may provide efficacious therapy to patient12to address patient symptoms. For example, the clinician may select one or more electrode combinations to which entrainment stimulation pulses are delivered and/or one or more electrode combinations to which desynchronization stimulation pulse(s) are delivered. During the programming session, patient12may provide feedback to the clinician as to the efficacy of the specific electrical stimulation being evaluated. In other examples, the clinician may evaluate the efficacy based on one or more physiological parameters of patient12, such as heart rate, respiratory rate, muscle activity, perfusion indices, LFP signals, EEG signals, ECoG signals, etc. Programmer14may also provide an indication to patient12when electrical stimulation is being delivered, such as when entrainment stimulation pulses are being delivered (and the stimulation parameters corresponding to the entrainment stimulation pulses), when desynchronization stimulation pulse(s) are being delivered (and the stimulation parameters corresponding to the desynchronization stimulation pulse(s)), and when neither is being delivered, such as during a rest phase. In some examples, programmer14may include an ability to toggle the electrical stimulation from a closed loop configuration to an open loop configuration, and vice versa. Whether programmer14is configured for clinician or patient use, programmer14is configured to communicate with IMD16and, optionally, another device, via wireless communication. Programmer14, for example, may communicate via wireless communication with IMD16. 
Programmer14may also communicate with another programmer or external device via a wired or wireless connection using any of a variety of local wireless communication techniques, such as RF communication according to the 802.11 or Bluetooth® specification sets, infrared (IR) communication according to the IRDA specification set, or other standard or proprietary telemetry protocols. Programmer14may also communicate with other programming devices or external device34via exchange of removable media, such as magnetic or optical disks, memory cards or memory sticks. Further, programmer14may communicate with IMD16and another programmer via remote telemetry techniques known in the art, communicating via a local area network (LAN), wide area network (WAN), public switched telephone network (PSTN), or cellular telephone network, for example. Therapy system10may be implemented to provide chronic stimulation therapy to patient12over the course of several months or years. However, system10may also be employed on a trial basis to evaluate therapy before committing to full implantation. If implemented temporarily, some components of system10may not be implanted within patient12. For example, patient12may be fitted with an external medical device, such as a trial stimulator, rather than IMD16. The external medical device may be coupled to percutaneous leads or to implanted leads via a percutaneous extension. If the trial stimulator indicates DBS system10provides effective treatment to patient12, the clinician may implant a chronic stimulator within patient12for relatively long-term treatment. In accordance with the techniques of the disclosure, IMD16senses, via electrodes24,26disposed along leads20, one or more bioelectrical brain signals of brain28of patient12. In some examples, IMD16senses one or more oscillations of the bioelectrical brain signals oscillating at a frequency associated with a pathological disease. In some examples, the one or more oscillations are within a beta frequency range of about 11 Hertz to about 35 Hertz. In other examples, the one or more oscillations are within a Theta frequency band of about 4 Hertz to about 12 Hertz. In other examples, the one or more oscillations are within a gamma frequency band of between about 35 Hertz to about 200 Hertz. In some examples, the one or more oscillations are associated with one or more symptoms of Parkinson's disease, such as tremor, rigidity, or bradykinesia, etc. In some examples, the one or more oscillations are associated with one or more symptoms of another disease, such as dystonia, essential tremor, Tourette's syndrome, obsessive compulsive disorder, epilepsy, or depression. In some examples, IMD16may perform a stimulation program that aims to initially prime a large neuronal circuit and then deliver local disruptive, therapeutic stimulation or interference. For example, IMD16may deliver an initial priming stimulation at frequencies, pulse width and amplitudes configured to entrain brain regions and recruit a network of the brain. For example, IMD16may deliver an initial priming stimulation configured to entrain the STN region of brain28. In such examples, IMD16may recruit a brain network, such as the basal ganglia brain network. IMD16may then deliver desynchronization pulse(s) (e.g., therapeutic pulse, desynchronization phase pulse train) at a different combination of frequency, amplitude, or pulse width, to recruit a small, more local neuronal volume in order to target the most effective volume with directionality. 
For example, segmented electrode leads20may be used to provide directionality for electrical field generation. In some examples, in order for the desynchronization stimulation pulse(s) to cause destructive interference with the entrained electrical activity (e.g., entrained oscillations of the bioelectrical brain signals of brain28), the desynchronization stimulation pulse(s) may be out of phase with the one or more entrained oscillations by a phase amount greater than 120 degrees and less than 240 degrees (e.g., about 180 degrees), or by a phase amount greater than 2π/3 radians and less than 4π/3 radians (e.g., about π radians). Further, it is noted that delivering the desynchronization stimulation pulse(s) in phase with the one or more oscillations of the bioelectrical brain signals of brain28(e.g., a phase amount in a range from about 0 degrees to 120 degrees, a range from about 240 degrees to about 360 degrees, a range from about 0 radians to about 2π/3 radians, or a range of about 4π/3 radians to 2π radians) may cause constructive interference with the one or more oscillations and may be avoided if the goal is to suppress pathologic oscillations, or may be favored if the goal is to promote desirable oscillations. In any case, constructive interference may, in some instances, constitute a form of disruption of entrained electrical activity. In some examples, IMD16delivers electrical stimulation therapy comprising desynchronization pulse(s) selected based on biomarkers of patient12. However, in other examples, instead of electrical stimulation, IMD16may deliver other types of therapy. For example, IMD16may deliver light pulses (e.g., optogenetic therapy) comprising a frequency selected based on one or more biomarkers of patient12. In still further examples, IMD16may deliver ultrasound waves comprising a frequency selected based on one or more biomarkers of patient12. In any case, the desynchronization pulse(s) may be configured to disrupt, or interfere with, at least a portion of the entrained electrical activity of patient12. FIG.2is a functional block diagram illustrating components of an example IMD16. In the example shown inFIG.2, IMD16includes memory42, processing circuitry40, stimulation generator44, sensing circuitry46, switch circuitry48, telemetry circuitry50, and power source52. Stimulation generator44and processing circuitry40may be contained within housing17, along with the other circuitry and modules shown in the example ofFIG.2. Processing circuitry40may include any one or more microprocessors, controllers, digital signal processors (DSPs), application specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic circuitry, or other processing circuitry. The functions attributed to processors described herein, including processing circuitry40, may be provided by a hardware device and embodied as software, firmware, hardware, or any combination thereof. In the example shown inFIG.2, sensing circuitry46may be configured to sense bioelectrical brain signals of patient12via select combinations of electrodes24,26. Sensing circuitry46may include circuitry that measures the electrical activity of a particular region, e.g., an anterior nucleus, thalamus or cortex of brain28via select electrodes24,26. 
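The phase windows given above (roughly 120 to 240 degrees for destructive interference, with the remaining offsets producing constructive interference) translate directly into a small classification helper. This is a minimal sketch; the function name and the hard boundaries are illustrative only.

```python
def interference_type(phase_offset_deg: float) -> str:
    """Classify a desynchronization pulse's phase offset relative to the entrained
    oscillation, per the ranges described above: greater than 120 and less than
    240 degrees -> destructive; otherwise constructive."""
    offset = phase_offset_deg % 360.0
    return "destructive" if 120.0 < offset < 240.0 else "constructive"

for offset in (0.0, 90.0, 180.0, 300.0):
    print(offset, interference_type(offset))
```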
For treatment of Parkinson's disease, sensing circuitry46may be configured to measure the electrical activity of the subthalamic nucleus (STN), globus pallidus interna (GPi), globus pallidus externa (GPe), and/or other areas of the basal ganglia. For treatment of epilepsy, sensing circuitry46may be configured to measure the electrical activity of the one or more sites within the Circuit of Papez, including, e.g., anterior nucleus, internal capsule, cingulate, entorhinal cortex, hippocampus, fornix, mammillary bodies, or MMT. Sensing circuitry46may sample the physiological signals substantially continuously or at regular intervals, such as, but not limited to, a frequency of about 1 Hertz to about 1000 Hertz, such as about 250 Hertz to about 1000 Hertz or about 500 Hertz to about 1000 Hertz. Sensing circuitry46includes circuitry for determining a voltage difference between two electrodes24,26, which generally indicates the electrical activity within the particular region of brain28. One of the electrodes24,26may act as a reference electrode, and, if sensing circuitry46is implanted within patient12, housing17of IMD16or the sensing circuitry in examples in which sensing circuitry46is separate from IMD16, may include one or more electrodes that may be used to sense physiological signals. The output of sensing circuitry46may be received by processing circuitry40. In some cases, processing circuitry40may apply additional processing to the physiological signals, e.g., convert the output to digital values for processing and/or amplify the physiological signals. In addition, in some examples, sensing circuitry46or processing circuitry40may filter the signal from the selected electrodes24,26in order to remove undesirable artifacts from the signal, such as noise from cardiac signals generated within the body of patient12. Although sensing circuitry46is incorporated into a common outer housing17with stimulation generator44and processing circuitry40inFIG.2, in other examples, sensing circuitry46is in a separate housing and communicates with processing circuitry40via wired or wireless communication techniques. In some examples, physiological signals may be sensed via external electrodes (e.g., scalp electrodes). In some examples, sensing circuitry46may include circuitry to tune to and extract a power level of a particular frequency band of a sensed physiological signal. Thus, the power level of a particular frequency band of a sensed physiological signal may be extracted prior to digitization of the signal by processing circuitry40. By tuning to and extracting the power level of a particular frequency band before the signal is digitized, it may be possible to run frequency domain analysis algorithms at a relatively slower rate compared to systems that do not include a circuit to extract a power level of a particular frequency band of a sensed physiological signal prior to digitization of the signal. In some examples, sensing circuitry46may include more than one channel to monitor simultaneous activity in different frequency bands, i.e., to extract the power level of more than one frequency band of a sensed physiological signal. These frequency bands may include an alpha frequency band (e.g., 8 Hertz to 12 Hertz), beta frequency band (e.g., approximately 12 Hertz to approximately 35 Hertz), gamma frequency band (e.g., between approximately 35 Hertz to approximately 200 Hertz), or other frequency bands. 
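As a rough software analogue of the differential sensing and band monitoring described above, the sketch below subtracts a reference electrode's signal and estimates power in one of the listed frequency bands with a crude FFT mask. A hardware implementation would typically filter and extract band power before digitization, as noted above; the sampling rate, synthetic signals, and band edges here are illustrative assumptions only.

```python
import numpy as np

def differential_signal(v_electrode_a: np.ndarray, v_electrode_b: np.ndarray) -> np.ndarray:
    """Voltage difference between two sensing electrodes (one acting as reference)."""
    return v_electrode_a - v_electrode_b

def band_power(signal: np.ndarray, fs_hz: float, lo_hz: float, hi_hz: float) -> float:
    """Crude band-power estimate via an FFT mask over the chosen frequency band."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs_hz)
    mask = (freqs >= lo_hz) & (freqs <= hi_hz)
    return float(spectrum[mask].sum())

fs = 500.0                                     # hypothetical sampling rate within the 1-1000 Hz range
t = np.arange(0, 1.0, 1.0 / fs)
a = np.sin(2 * np.pi * 22.0 * t) + 0.05 * np.random.randn(t.size)  # electrode with beta activity
b = 0.05 * np.random.randn(t.size)             # reference electrode
diff = differential_signal(a, b)
print(f"beta-band (13-30 Hz) power: {band_power(diff, fs, 13.0, 30.0):.1f}")
```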
In some examples, sensing circuitry46may include an architecture that merges chopper-stabilization with heterodyne signal processing to support a low-noise amplifier. In some examples, sensing circuitry46may include a frequency selective signal monitor that includes a chopper-stabilized superheterodyne instrumentation amplifier and a signal analysis unit. Example amplifiers that may be included in the frequency selective signal monitor are described in further detail in commonly-assigned U.S. Patent Publication No. 2009/0082691 to Denison et al., entitled, “FREQUENCY SELECTIVE MONITORING OF PHYSIOLOGICAL SIGNALS” and filed on Sep. 25, 2008. U.S. Patent Publication No. 2009/0082691 to Denison et al. is incorporated herein by reference in its entirety. As described in U.S. Patent Publication No. 2009/0082691 to Denison et al., the frequency selective signal monitor may utilize a heterodyning, chopper-stabilized amplifier architecture to convert a selected frequency band of a physiological signal to a baseband for analysis. The physiological signal may include a bioelectrical brain signal, which may be analyzed in one or more selected frequency bands to detect physiological signals oscillating at a pathological frequency and, in response, processing circuitry40may deliver electrical stimulation to entrain and disrupt the entrained electrical activity in accordance with some of the techniques described herein. In some examples, sensing circuitry46may sense brain signals substantially at the same time that IMD16delivers therapy to patient12. In other examples, sensing circuitry46may sense brain signals and IMD16may deliver electrical stimulation at different times. In some examples, sensing circuitry46may monitor additional physiological signals. Suitable patient physiological signals may include, but are not limited to, muscle tone (e.g., as sensed via electromyography (EMG)), eye movement (e.g., as sensed via electrooculography (EOG) or EEG), and body temperature. In some examples, patient movement may be monitored via actigraphy. In one example, processing circuitry40may monitor an EMG signal reflective of the muscle tone of patient12to identify physical movement of the patient as a biomarker. In some examples, processing circuitry40may monitor the physical movement of a patient via one or more motion sensors that are included in IMD16and/or external to IMD16and transmit information to IMD16via telemetry circuitry50. In some examples, sensing circuitry46may monitor biomarkers that are indicative of symptoms of a disease, such as Parkinson's disease or epilepsy. For example, sensing circuitry46may monitor one or more parameters indicative of muscle stiffness or movement (slow movement, tremor, and lack of movement) which may correspond to one or more symptoms of Parkinson's disease. Such parameters may be detected by EMG signals, actigraphy, accelerometer signals, and/or other suitable signals. In some examples, in response to the detection of one or more symptoms of Parkinson's disease based on the monitoring of such parameter(s), processing circuitry40may control stimulation generator44to generate electrical stimulation selected to entrain brain signals to oscillate at a particular frequency, and then adjust the frequency to disrupt portions of the entrained electrical activity that were or are oscillating at the particular entrainment frequency. 
Memory42may include any volatile or non-volatile media, such as a random access memory (RAM), read only memory (ROM), non-volatile RAM (NVRAM), electrically erasable programmable ROM (EEPROM), flash memory, and the like. Memory42may store computer-readable instructions that, when executed by processing circuitry40, cause IMD16to perform various functions described herein. Memory42may be considered, in some examples, a non-transitory computer-readable storage medium comprising instructions that cause one or more processors, such as, e.g., processing circuitry40, to implement one or more of the example techniques described in this disclosure. The term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term “non-transitory” should not be interpreted to mean that memory42is non-movable. As one example, memory42may be removed from IMD16, and moved to another device. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in RAM). In the example shown inFIG.2, the set of electrodes24of lead20A includes four electrodes, and the set of electrodes26of lead20B includes four electrodes. Processing circuitry40controls switch circuitry48to sense physiological signals with selected combinations of electrodes24,26. In particular, switch circuitry48may create or cut off electrical connections between sensing circuitry46and selected electrodes24,26in order to selectively sense physiological signals, e.g., in particular portions of brain28of patient12. Processing circuitry40may also control switch circuitry48to apply stimulation signals generated by stimulation generator44to selected combinations of electrodes24,26. For example, processing circuitry40may control stimulation generator44to generate stimulation pulses according to stimulation parameters. Processing circuitry40may then cause stimulation generator44to deliver the stimulation pulses to one or more electrodes24,26. In particular, switch circuitry48may couple stimulation signals to selected conductors within leads20, which, in turn, deliver the stimulation signals across selected electrodes24,26. Switch circuitry48may be a switch array, switch matrix, multiplexer, or any other type of switching circuitry configured to selectively couple stimulation energy to selected electrodes24,26and to selectively sense bioelectrical brain signals with selected electrodes24,26. Hence, stimulation generator44is coupled to electrodes24,26via switch circuitry48and conductors within leads20. In some examples, however, IMD16does not include switch circuitry48. In some examples, IMD16may include separate current sources and sinks for each individual electrode such that switch circuitry48may not be used. Stimulation generator44may be a single channel or multi-channel stimulation generator. For example, stimulation generator44may be capable of delivering a single stimulation pulse, multiple stimulation pulses, or a continuous signal at a given time via a single electrode combination or multiple stimulation pulses at a given time via multiple electrode combinations. In some examples, however, stimulation generator44and switch circuitry48may be configured to deliver multiple channels on a time-interleaved basis. 
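Time-interleaved delivery of multiple programs through a single stimulation output can be pictured, under simplifying assumptions, as round-robin assignment of fixed time slots to electrode combinations, a rough software analogue of routing the generator output through switch circuitry. The slot length, program names, and electrode indices below are hypothetical.

```python
from itertools import cycle
from typing import Dict, List, Tuple

def interleave_programs(programs: Dict[str, List[int]],
                        slot_ms: float, total_ms: float) -> List[Tuple[float, str, List[int]]]:
    """Time-divide a single stimulation output across electrode combinations,
    one program per time slot, in round-robin fashion."""
    schedule: List[Tuple[float, str, List[int]]] = []
    t = 0.0
    for name in cycle(programs):   # iterate program names repeatedly
        if t >= total_ms:
            break
        schedule.append((t, name, programs[name]))
        t += slot_ms
    return schedule

# Hypothetical electrode combinations for two channels/programs.
progs = {"entrainment": [0, 1], "desynchronization": [2, 3]}
for start_ms, name, electrodes in interleave_programs(progs, slot_ms=10.0, total_ms=50.0):
    print(f"t={start_ms:5.1f} ms  {name:16s} electrodes {electrodes}")
```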
For example, switch circuitry48may serve to time divide the output of stimulation generator44across different electrode combinations at different times to deliver multiple programs or channels of stimulation energy (e.g., entrainment stimulation pulses and desynchronization stimulation pulse(s)). Telemetry circuitry50may support wireless communication between IMD16and programmer14or an external device34under the control of processing circuitry40. In some instances, one of external device34may include an external data server (e.g., a remote server). For example, processing circuitry40of IMD16may transmit physiological signals, biomarkers, seizure probability metrics for particular sleep stages, a seizure probability profile for patient12, etc. via telemetry circuitry50to telemetry circuitry within programmer14or external device34. Accordingly, telemetry circuitry50may send information to programmer14on a continuous basis, at periodic intervals, or upon request from IMD16or programmer14. Power source52delivers operating power to various components of IMD16. Power source52may include a rechargeable or non-rechargeable battery and in some cases, a power generation circuit. In some examples, power requirements may be small enough to allow IMD16to utilize patient motion and implement a kinetic energy-scavenging device to trickle charge a rechargeable battery. In accordance with one or more examples of the disclosure, processing circuitry40and/or processing circuitry of another device (e.g., processing circuitry of programmer14) may control sensing circuitry46to sense, via electrodes24,26, one or more oscillations of a physiological signal (e.g., a brain signal) associated with a pathological disease of patient12. In some examples, the one or more oscillations are within a beta frequency range of about 11 Hertz to about 35 Hertz. In other examples, the one or more oscillations are within a Theta frequency band of about 4 Hertz to about 12 Hertz. In some examples, the one or more oscillations are associated with one or more symptoms of Parkinson's disease, such as tremor, rigidity, or bradykinesia, etc. In some examples, the one or more oscillations are associated with one or more symptoms of another disease, such as dystonia, essential tremor, Tourette's syndrome, obsessive compulsive disorder, epilepsy, or depression. In some examples, processing circuitry40may determine a first set of stimulation parameters that define entrainment stimulation pulses configured to entrain electrical activity in brain28of patient12. For example, processing circuitry40may receive stimulation parameters from another device via telemetry circuitry50and determine the stimulation parameters are to serve as the first plurality of stimulation parameters. In another example, processing circuitry40may perform various algorithms to determine the stimulation parameters to use to define the entrainment stimulation pulses. As such, processing circuitry40may cause stimulation generator44to deliver the entrainment stimulation pulses to at least one of electrodes24,26according to the first plurality of stimulation parameters. In addition, processing circuitry40may determine a second set of stimulation parameters that define at least one desynchronization stimulation pulse configured to disrupt at least a portion of electrical activity entrained by the entrainment stimulation pulses. 
In such instances, processing circuitry40may then cause stimulation generator44to deliver the at least one desynchronization stimulation pulse according to the second plurality of stimulation parameters. In some examples, processing circuitry40may cause the entrainment stimulation pulses to cease during delivery of the desynchronization pulses. In another example, processing circuitry40may cause the delivery of desynchronization pulses while the entrainment stimulation pulses are still being delivered (e.g., in parallel). In some examples, in response to sensing the one or more oscillations of the physiological signals of patient12, processing circuitry40and/or processing circuitry of another device (e.g., processing circuitry of external programmer14) may determine the stimulation parameters for either the entrainment stimulation pulses, the desynchronization pulses, or both based on identified biomarkers. In one example, one or more external sensors may transmit physiological signals, via telemetry circuitry, to IMD16, programmer14, etc. For example, external sensors may be worn by patient12or may be external sensors that are otherwise configured to obtain physiological data from patient12. For example, external sensors may sense movement information of patient12and transmit such information via telemetry circuitry. Processing circuitry40of IMD16may then utilize the physiological signal information in order to identify one or more biomarkers of patient12. Processing circuitry40may use the identified biomarker information in order to determine patient-tailored stimulation parameters for the entrainment stimulation pulses, the desynchronization pulses, or both. In an example involving Parkinson's disease treatment, processing circuitry40may determine parameters of the desynchronization pulses so as to cause a decrease in the disease biomarker. As discussed herein, processing circuitry40may adjust these parameters over time depending on a status of the one or more disease biomarkers (e.g., a prevalent biomarker, a decreasing biomarker, etc.) as may be indicated by physiological signal information. It will be understood that processing circuitry40may use such signals at the outset to determine initial stimulation parameters, as well as use such signals as feedback signals to further tailor the stimulation parameters over time. FIG.3is a conceptual block diagram of an example external medical device programmer14, which includes processing circuitry60, memory62, telemetry circuitry64, user interface66, and power source68. Processing circuitry60controls user interface66and telemetry circuitry64, and stores information and instructions to memory62and retrieves information and instructions from memory62. Programmer14may be configured for use as a clinician programmer or a patient programmer. Processing circuitry60may comprise any combination of one or more processors including one or more microprocessors, DSPs, ASICs, FPGAs, or other equivalent integrated or discrete logic circuitry. Accordingly, processing circuitry60may include any suitable structure, whether in hardware, software, firmware, or any combination thereof, to perform the functions ascribed herein to processing circuitry60. A user, such as a clinician or patient12, may interact with programmer14through user interface66. 
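A simple way to picture the biomarker-driven tailoring described above is a feedback rule that nudges one desynchronization parameter when the disease biomarker fails to decrease. This is a minimal sketch under assumed names and values; the step size and the safety cap are illustrative, not limits recited in this disclosure.

```python
def adjust_desync_amplitude(current_ma: float,
                            biomarker_before: float,
                            biomarker_after: float,
                            step_ma: float = 0.1,
                            max_ma: float = 5.0) -> float:
    """Feedback rule: if the disease biomarker (e.g., beta-band power) did not
    decrease after the desynchronization pulses, nudge the amplitude upward
    within a safety cap; otherwise keep the current value."""
    if biomarker_after >= biomarker_before:
        return min(current_ma + step_ma, max_ma)
    return current_ma

print(adjust_desync_amplitude(2.0, biomarker_before=40.0, biomarker_after=42.0))  # 2.1
print(adjust_desync_amplitude(2.0, biomarker_before=40.0, biomarker_after=30.0))  # 2.0
```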
User interface66includes a display (not shown), such as a LCD or LED display, touch screen, or other type of screen, to present information related to electrical stimulation, such as for a neuromodulation system or other type of electrical stimulation system. User interface66may also include an input mechanism to receive input, such as touch input, from the user. The input mechanisms may include, for example, buttons, a keypad (e.g., an alphanumeric keypad), a peripheral pointing device or another input mechanism that allows the user to navigate through user interfaces presented by processing circuitry60of programmer14and provide user input. Memory62may include instructions for operating user interface66and telemetry circuitry64, and for managing power source68. Memory62may also store any therapy data received from IMD16, such as biomarker information, physiological parameters (e.g., EMG signals, brain signals, etc.), etc. Memory62may further store stimulation parameters received from IMD16or delivered to IMD16, such as during the course of therapy or electrical stimulation modulation. Memory62may include any volatile or nonvolatile memory, such as RAM, ROM, EEPROM or flash memory. In some examples, memory62may also include a removable memory portion. Memory62may be considered, in some examples, a non-transitory computer-readable storage medium comprising instructions that cause one or more processors, such as, e.g., processing circuitry60, to implement one or more of the example techniques described in this disclosure. The term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term “non-transitory” should not be interpreted to mean that memory62is non-movable. As one example, memory62may be removed from programmer14, and moved to another device. In certain examples, a non-transitory storage medium may store data that can change (e.g., in RAM). Wireless telemetry in programmer14may be accomplished by RF communication or proximal inductive interaction of external programmer14with IMD16. This wireless communication is possible through the use of telemetry circuitry64. Telemetry circuitry64may be similar to the telemetry circuitry of IMD16. In some examples, programmer14may be configured to communicate through a wired connection. In this manner, other external devices, such as IMD16, may be configured to communicate with programmer14through a wired and/or wireless connection. Power source68may deliver operating power to the components of programmer14. Power source68may include a battery and in some instances, a power generation circuit. In some examples, power source68may be rechargeable. In some examples, a user, such as one or more of a clinician or patient12, may access and configure IMD16via user interface66of programmer14. For example, a clinician may program, via user interface66of programmer14, one or more electrical stimulation parameters that define entrainment stimulation pulses and/or that define desynchronization stimulation pulse(s). Programmer14may deliver, via telemetry circuitry64, the programmed electrical stimulation parameters to IMD16. Further, the clinician may adjust the one or more electrical stimulation parameters of electrical stimulation delivered by IMD16. In some examples, programmer14or IMD16may automatically adjust the electrical stimulation parameters. 
In one example, processing circuitry60of programmer14may alter, based on feedback received from IMD16, the amount of time defined for one or more desynchronization stimulation pulse(s), the number of desynchronization stimulation pulse(s) scheduled for delivery to electrodes24,26, a frequency and/or amplitude of the desynchronization stimulation pulse(s), a duty cycle of the desynchronization stimulation pulse(s), or one or more other stimulation parameters. Processing circuitry60may then deliver the one or more altered stimulation parameters to IMD16for execution. In some examples, processing circuitry60may receive user input that the electrical stimulation therapy is to operate in either an open loop configuration, a closed loop configuration, or an open loop configuration that progresses to a closed loop configuration. For example, a user may select via user interface66an option to implement any one of these configurations for delivery of electrical stimulation to patient12. FIG.4is a flow diagram illustrating an example operation for delivering electrical stimulation to the brain of a patient. For ease of illustration, the example ofFIG.4is described with reference to therapy system10ofFIG.1. However, the techniques of this disclosure are not so limited, and may be employed in other suitable systems or devices configured to deliver electrical stimulation to one or more regions of patient12, including spinal cord regions, muscle tissue regions, etc. In addition, while described with reference to processing circuitry40of IMD16as performing the techniques ofFIG.4, the techniques of this disclosure are not so limited, and in some instances, the techniques of this disclosure may be performed by processing circuitry of another suitable device, such as processing circuitry60of programmer14or processing circuitry of a remote server. For example, processing circuitry60may transmit electrical stimulation parameters (e.g., amplitudes, durations, electrode polarity, etc.) to IMD16via telemetry circuitry64. Likewise, processing circuitry40may transmit information to programmer14via telemetry circuitry50, such as by communicating information relating to biomarkers, feedback signals, stimulation parameters, pathology information, physiological signals, etc. In any event, a person skilled in the art will understand that various examples are used for illustration purposes and that other implementation examples may be achieved within the scope of this disclosure. In one example, processing circuitry, such as that of IMD16or programmer14, may determine a first set of stimulation parameters that define entrainment stimulation pulses (402). For example, processing circuitry40may determine a first set of stimulation parameters that define entrainment stimulation pulses (e.g., an entrainment stimulation pulse train). The entrainment stimulation may be configured to entrain electrical activity in patient12(e.g., in the brain of patient12). The stimulation parameters may include a frequency parameter, a pulse width parameter, a voltage amplitude parameter, a current amplitude parameter, a duration parameter, etc. For example, processing circuitry40may determine a length of time for delivering a first entrainment stimulation pulse train to one or more electrode(s), such as to at least one of electrode(s)24,26. In some examples, the first set of stimulation parameters that define the entrainment stimulation pulses include frequency ranges that are less than or equal to 100 Hertz. 
For example, the entrainment stimulation pulses may be delivered at a frequency of 5 Hertz to 80 Hertz. The stimulation pulses may be biphasic. In addition, stimulation parameters that define the entrainment stimulation pulses may include pulse width ranges. For example, the pulse width ranges for the entrainment stimulation pulses may include a pulse width of 30 microseconds to 300 microseconds. In addition, stimulation parameters that define the entrainment stimulation pulses may include amplitude ranges. For example, the amplitude ranges for the entrainment stimulation pulses may include an amplitude of 0.1 to 10 Volts or 0.1 to 25 milliamps. For example, the amplitude ranges for the entrainment stimulation pulses may include an amplitude of 3.5 Volts. In one example, the entrainment stimulation pulses may include a frequency selected from a range of about 2 Hertz to about 150 Hertz, and a pulse width selected from about 30 microseconds to about 300 microseconds. In one example of a current-controlled system, the entrainment stimulation pulses may include a current amplitude selected from about 0.2 milliamps to about 10 milliamps and a pulse width selected from about 30 microseconds to about 300 microseconds. In another example, the entrainment stimulation pulses may include a current amplitude selected from about 0.1 milliamps to about 25 milliamps and a pulse width selected from about 30 microseconds to about 300 microseconds. In such examples, processing circuitry40may deliver the entrainment stimulation pulses at a frequency selected from a range of about 2 Hertz (e.g., ±1 Hertz) to about 150 Hertz (e.g., ±10 Hertz). In some examples, processing circuitry40may control stimulation generator44to generate the entrainment stimulation pulses (404). For example, processing circuitry40may control stimulation generator44to generate the entrainment stimulation pulses according to the first set of stimulation parameters determined to define the entrainment stimulation pulses. In some examples, processing circuitry40may cause stimulation generator44to deliver the entrainment stimulation pulses to at least one electrode, such as one of electrode(s)24,26or combination of electrode(s)24,26. In some examples, the stimulation parameters that define the entrainment stimulation pulses may be configured to both entrain electrical activity and cause destructive interference within one or more regions of brain28of patient12. That is, the entrainment stimulation pulses may also provide some degree of therapy to patient12. In some examples, processing circuitry40may determine a second set of stimulation parameters for one or more desynchronization stimulation pulse(s) (e.g., a desynchronization stimulation pulse train) (406). In some instances, the stimulation parameters that define the desynchronization stimulation pulse(s) may be patient tailored. In any case, the desynchronization stimulation pulse(s) may be configured to disrupt at least a portion of electrical activity of the brain entrained by the entrainment stimulation pulses. In some examples, the desynchronization stimulation pulse(s) may cause neurons or cells to transmit electrical signals at irregular intervals relative to regularity inherent in entrained electrical activity or otherwise, out of synchrony with the entrained electrical activity. 
In some examples, the second set of stimulation parameters defining the desynchronization stimulation pulse(s) may be different from the first set of stimulation parameters defining the entrainment stimulation pulses. For example, the first set of stimulation parameters that define the entrainment stimulation pulses may include a first pulse frequency below approximately 100 Hertz (e.g., 100 Hertz±5 Hz). The second set of stimulation parameters that define the desynchronization stimulation pulse(s) may include a second pulse frequency of above or equal to approximately 100 Hertz. For example, the desynchronization stimulation pulses may be generated at a second pulse frequency between approximately 30 Hertz and 125 Hertz higher than the first pulse frequency. In some examples, the desynchronization stimulation pulse(s) may include a frequency selected from a range of about 2 Hertz (e.g., ±1.9 Hertz) to about 200 Hertz (e.g., ±5 Hertz) or from a range of about 100 Hertz (e.g., ±5 Hertz) to about 200 Hertz (e.g., ±5 Hertz). The stimulation pulses may be biphasic. In addition, stimulation parameters that define the desynchronization stimulation pulse(s) may include pulse width ranges. In one non-limiting example, the pulse width that defines a set of desynchronization stimulation pulse(s) may include a pulse width selected from a range of between 20 microseconds (μs) and 450 μs. For example, the pulse width that defines a set of desynchronization stimulation pulse(s) may include a pulse width selected from a range of between 20 μs and 60 μs, between 20 μs and 90 μs, between 20 μs and 120 μs, between 60 μs and 90 μs, between 60 μs and 120 μs, between 60 μs and 450 μs, between 90 μs and 120 μs, between 90 μs and 450 μs, or between 120 μs and 450 μs. In some examples, the pulse width ranges may be selected from ranges that include pulse widths that are less than 20 μs or are greater than 450 μs. In one example, a pulse width that defines a set of desynchronization stimulation pulse(s) may be 75 μs or 85 μs, and may have been selected from one or more of the example ranges above, such as from the range of between 20 μs and 450 μs, between 70 μs and 90 μs, or between 60 μs and 120 μs, etc. In addition, stimulation parameters that define the desynchronization stimulation pulse(s) may include amplitude ranges. For example, the amplitude ranges for the desynchronization stimulation pulse(s) may include an amplitude of 0.1 to 10 Volts or 0.1 to 25 milliAmps. In one example, the desynchronization stimulation pulse(s) may include a frequency selected from a range of about 2 Hertz to about 200 Hertz, and a pulse width selected from about 30 microseconds to about 300 microseconds. In one example of a current-controlled system, the desynchronization stimulation pulse(s) may include a current amplitude selected from a range of about 0.2 milliamps (e.g., ±0.1 milliamps) to about 10 milliamps (e.g., ±3 milliamps). Subsequent to generating the entrainment stimulation pulses, processing circuitry40may control stimulation generator44to generate the at least one desynchronization stimulation pulse according to the second set of stimulation parameters (408). In one example, processing circuitry40may cause stimulation generator44to deliver the desynchronization stimulation pulse(s) to the same electrode or combination of electrodes used for the entrainment stimulation pulses. 
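For purposes of illustration only, the following Python sketch (which is not part of the disclosed embodiments; the data structure, function names, and specific numeric values are hypothetical choices drawn from the example ranges above) shows one way the first (entrainment) and second (desynchronization) stimulation parameter sets might be represented, along with a check that the desynchronization frequency exceeds the entrainment frequency by roughly 30 to 125 Hertz, as in one example above.

```python
# Hypothetical sketch: representing the two stimulation parameter sets described
# above and checking the example frequency relationship between them.
from dataclasses import dataclass


@dataclass
class StimParams:
    frequency_hz: float    # pulse repetition frequency
    pulse_width_us: float  # pulse width in microseconds
    amplitude_ma: float    # current amplitude (current-controlled example)
    duration_ms: float     # length of the pulse train / phase


# Example entrainment parameter set: frequency at or below about 100 Hz,
# pulse width 30-300 microseconds, current amplitude 0.1-25 milliamps.
entrainment = StimParams(frequency_hz=80.0, pulse_width_us=120.0,
                         amplitude_ma=3.0, duration_ms=40.0)

# Example desynchronization parameter set: frequency roughly 30-125 Hz above
# the entrainment frequency (here 80 Hz + 50 Hz = 130 Hz).
desynchronization = StimParams(frequency_hz=130.0, pulse_width_us=75.0,
                               amplitude_ma=3.0, duration_ms=10.0)


def check_offset(entrain: StimParams, desync: StimParams) -> bool:
    """Return True if the desynchronization frequency exceeds the entrainment
    frequency by roughly 30-125 Hz, as in one example above."""
    offset = desync.frequency_hz - entrain.frequency_hz
    return 30.0 <= offset <= 125.0


assert check_offset(entrainment, desynchronization)
```

In practice, such parameter sets could be predefined by a clinician, retrieved from memory, or tailored to a particular patient, as discussed elsewhere in this disclosure; the values above are illustrative only.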
In some examples, processing circuitry40may cause stimulation generator44to deliver the at least one desynchronization stimulation pulse to a different one of electrode(s)24,26or combination of electrode(s)24,26relative to the electrode(s) used for the entrainment stimulation pulses. In some examples, processing circuitry40may transition between the entrainment stimulation pulses and the desynchronization stimulation pulse(s) by varying a duty cycle, pulse width, frequency, and/or amplitude of stimulation generator44. For example, processing circuitry40may transition between the entrainment pulses and the desynchronization stimulation pulse(s) by varying a duty cycle (e.g., ratio of ‘on’ time to ‘off’ time) of stimulation generator44. In such examples, processing circuitry40may vary the duty cycle for a digital signal (e.g., the described pulses), rather than an analog waveform. In another example, processing circuitry40may vary an analog waveform when transitioning between the entrainment pulses and the at least one desynchronization stimulation pulse. In either case, the second set of stimulation parameters defining the desynchronization stimulation pulse(s) may include at least one parameter that is varied from at least one corresponding parameter included with the first set of stimulation parameters defining the entrainment stimulation pulses. For example, the varied parameter may include one or more of a varied amplitude, pulse width, frequency, and/or in some cases, a varied duty cycle. In an illustrative example, the second set of stimulation parameters may include a second frequency that is varied or different from a first frequency included with a different set of stimulation parameters defining a different phase or set of stimulation pulses. In examples where the entrainment stimulation pulses continue during the desynchronization phase, processing circuitry40may control stimulation generator44to generate the desynchronization stimulation pulse(s) while simultaneously controlling stimulation generator44to generate the entrainment stimulation pulses, without causing stimulation generator44to transition between the entrainment stimulation pulses and the desynchronization stimulation pulse(s). In an illustrative example, processing circuitry40may cause the stimulation generator to deliver the entrainment stimulation pulses to a first one of electrodes24,26, then cause the stimulation generator to deliver the at least one desynchronization stimulation pulse to a second one of electrodes24,26. In this way, stimulation generator44via processing circuitry40may target a smaller volume within a larger volume entrained by the entrainment stimulation pulses. In some examples, the first set of stimulation parameters that define the entrainment stimulation pulses may be configured to entrain electrical activity of a first region of brain28. As such, the second set of stimulation parameters that define the desynchronization pulse(s) may be configured to cause destructive interference for at least a portion of the entrained electrical activity within a second region of brain28. The second region of brain28may be smaller than the first region of brain28. In addition, the second region of brain28may be found within at least a portion of the first region of brain28. In another example, processing circuitry40may deliver electrical stimulation based on feedback received regarding the entrained activity. 
For example, processing circuitry40may cause stimulation generator44to deliver the entrainment stimulation pulses to a first one of electrodes24,26. Processing circuitry40may then receive an indication that the entrainment stimulation pulses have resulted in electrical activity being entrained in patient12. Responsive to determining that the entrainment stimulation pulses have resulted in electrical activity being entrained in patient12, processing circuitry40may cause stimulation generator44to deliver the at least one desynchronization stimulation pulse to a second one of electrodes24,26. In addition, processing circuitry40may continue to alternate between the entrainment stimulation pulse phase and desynchronization pulse phase. In some examples, processing circuitry40may adjust, automatically or otherwise, parameters of either pulse phase as time progresses. While described with reference to processing circuitry40of IMD16, the techniques of this disclosure are not so limited, and the techniques of this disclosure may be implemented by other processing circuitry (alone or in combination with processing circuitry of another device), such as processing circuitry60of programmer14or processing circuitry of external device34or an external server. For example, processing circuitry60may determine stimulation parameters and transmit the stimulation parameters to IMD16, thereby causing IMD16(e.g., stimulation generator44) to deliver the electrical stimulation pulses. In addition, it will be understood that some techniques ofFIG.4may be combined or omitted altogether. For example, processing circuitry, such as that of IMD16, may determine stimulation parameters to define the entrainment stimulation pulses and the desynchronization stimulation pulse(s) in parallel, before or during the delivery of electrical stimulation to patient12. FIG.5is a flow diagram illustrating an example operation for delivering electrical stimulation to the brain of a patient by utilizing biomarkers of the patient. Biomarkers may be used to determine stimulation parameters for either entrainment stimulation pulses, desynchronization stimulation pulse(s), or other phases of the electrical stimulation (e.g., a rest phase duration). In an example where biomarkers are used to determine stimulation parameters that define desynchronization stimulation pulse(s), processing circuitry40may control stimulation generator44to generate entrainment stimulation pulses (with or without biomarker tailoring). In such examples, processing circuitry40may obtain a physiological signal from a patient (502). For example, the physiological signal may be a brain wave signal. In some examples, the physiological signal may be a tremor signal, such as a signal corresponding to movement of a limb. In some examples, the physiological signal may be a local field potential (LFP) signal originating from within one or more regions of the brain. Processing circuitry40may identify one or more biomarkers from the physiological signal for patient12(504). In some examples, processing circuitry40may be configured to receive the physiological signal or, in some cases, the one or more biomarkers, from an external device34or from an external server that stores biomarkers for patient12received from other devices, such as an external device34. In some examples, external device34may be external to and distinct from housing17. For example, external device34may be a wearable device. In other examples, the external device may be external to therapy system10ofFIG.1. 
In some examples, external device34may be a sensing device configured to detect physiological signals from patient12. In any case, external device34may be configured to obtain a physiological signal from patient12. External device34may then communicate, via telemetry circuitry of external device34, attributes of the physiological signal to IMD16or programmer14. In another example, external device34may identify one or more biomarkers from the physiological signal and communicate the one or more biomarkers, via telemetry circuitry of external device34, to IMD16or programmer14. In some instances, programmer14may communicate, via telemetry circuitry64, the physiological signal attributes, or the one or more biomarkers, to IMD16for further processing. In any case, processing circuitry40may be configured to obtain the physiological signal from patient12, either directly or indirectly, such as from external device34or programmer14. In one example, the physiological signal may be a local field potential (LFP) signal. In such examples, processing circuitry40may identify the one or more biomarkers from the LFP of patient12. As such, the one or more biomarkers may indicate a neural state of patient12(e.g., tremors). An example tremor may include physical manifestations of brain activity, such that the LFP biomarkers may indicate that a tremor in brain28of patient12is occurring when a power level within a particular frequency band, such as a beta frequency band, is high. In some examples, the biomarkers may be indicative of a neural state of patient12comprising a frequency between approximately 0.1 Hertz and 500 Hertz. In such examples, such frequencies may span the above range or in some cases, beyond the range, as high frequency oscillations (HFOs) may also indicate the neural state of patient12. It has been observed that particular LFP signals lie in the range of 0.1 Hertz to 500 Hertz. For example, low frequencies (e.g., 5-10 Hertz) may indicate tremor conditions, beta frequencies may indicate Parkinson's disease rigidity, gamma frequencies may correlate to dyskinesia, and so forth. In an illustrative example, processing circuitry40may receive one or more tremor signals from within one or more regions of brain28of patient12. The one or more tremor signals may be configured to indicate tremors in a local field potential (LFP) of patient12. As mentioned above, the one or more tremor signals include frequencies between 0.1 Hertz and 500 Hertz. As such, processing circuitry40may identify a biomarker between 0.1 Hertz and 500 Hertz relating to tremors in the LFP of patient12. Processing circuitry40may use such a biomarker in order to determine the stimulation parameters for the one or more electrical stimulation pulses. In some examples, processing circuitry40may utilize an LFP biomarker to determine the direction and relative distance in which the greatest local synchrony occurs. Processing circuitry40may then tailor the desynchronization pulse and the directionality of the desynchronization pulse to include the highly synchronized region. While described with reference to LFP tremors, the techniques of this disclosure are not so limited, and various other biomarkers may be determined, such as beta biomarkers, EEG signal biomarkers, ECoG signal biomarkers, wearable input signal biomarkers, physical tremor biomarkers, brain electrical signal biomarkers, etc. 
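As an illustrative, non-limiting sketch (not the disclosed implementation), the following Python example shows one way a frequency-band power estimate, such as beta-band power, might be computed from a sampled LFP-like signal and used as a simple biomarker of the kind discussed above. The sampling rate, band edges, threshold, and synthetic signal are assumptions made for illustration only.

```python
# Hypothetical sketch: estimating beta-band (~13-30 Hz) power from a sampled
# signal as a simple biomarker, in the spirit of the LFP biomarkers discussed above.
import numpy as np
from scipy.signal import welch


def band_power(signal: np.ndarray, fs_hz: float, band=(13.0, 30.0)) -> float:
    """Return the mean power spectral density within the given frequency band."""
    freqs, psd = welch(signal, fs=fs_hz, nperseg=min(len(signal), 1024))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.mean(psd[mask]))


# Example: a synthetic "LFP" containing a 20 Hz component plus noise, sampled at 250 Hz.
fs = 250.0
t = np.arange(0.0, 2.0, 1.0 / fs)
lfp = np.sin(2.0 * np.pi * 20.0 * t) + 0.5 * np.random.randn(t.size)

beta_power = band_power(lfp, fs)
biomarker_present = beta_power > 0.01  # illustrative threshold only
```

A comparable band-power calculation could, under the same assumptions, be applied to other frequency bands (e.g., gamma) or other signal sources (e.g., EEG or ECoG) to derive the other biomarker types mentioned above.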
For example, processing circuitry40may determine stimulation parameters that define the desynchronization stimulation pulse(s) based at least in part on the one or more biomarkers for patient12(506). In such examples, processing circuitry40may control stimulation generator44to generate desynchronization stimulation pulse(s) (508). For example, processing circuitry40may cause stimulation generator44to deliver desynchronization stimulation pulse(s) to one of electrode(s)24,26according to the stimulation parameters determined based on the one or more biomarkers. In any case, processing circuitry40may use biomarker information to determine entrainment stimulation parameters and/or desynchronization stimulation parameters. For example, processing circuitry40may use biomarker information to determine only the desynchronization stimulation parameters, whereas the entrainment stimulation parameters may be predefined, such as by a physician or clinician, and/or retrieved from memory, such as from a database of preset or predefined stimulation parameters. While described with reference to processing circuitry40of IMD16, the techniques of this disclosure are not so limited, and the techniques of this disclosure may be implemented by other processing circuitry (alone or in combination with processing circuitry of another device), such as processing circuitry60of programmer14or processing circuitry of external device34or an external server. For example, processing circuitry60may receive and/or identify one or more biomarkers, determine one or more stimulation parameters based on the one or more biomarkers, and transmit stimulation parameters to IMD16, thereby causing IMD16(e.g., stimulation generator44) to deliver the electrical stimulation pulses. In addition, it will be understood that some techniques ofFIG.5may be combined or omitted altogether. For example, processing circuitry, such as that of IMD16, may determine stimulation parameters to define the entrainment stimulation pulses and the desynchronization stimulation pulse(s) in parallel, before or during the delivery of electrical stimulation to patient12, based on one or more biomarkers of patient12. That is, processing circuitry40may identify one or more biomarkers before or during the delivery of electrical stimulation pulses to patient12. FIG.6is a chart illustrating example modulation of electrical stimulation pulses602.FIG.6illustrates entrainment stimulation pulses604and desynchronization stimulation pulses606. Processing circuitry40may cause stimulation generator44to deliver the entrainment stimulation pulses604according to a first plurality of stimulation parameters including a first stimulation frequency. In addition, processing circuitry40may cause stimulation generator44to deliver the desynchronization stimulation pulses606according to a second plurality of stimulation parameters including a second stimulation frequency. In some examples, the first stimulation frequency is different from the second stimulation frequency. For example, the first stimulation frequency may be lower than the second stimulation frequency for the desynchronization stimulation pulses606. In some examples, the stimulation parameters may include a duration parameter or in some instances, a pulse count parameter. In such examples, processing circuitry40may determine one or more scaling factors. 
Processing circuitry40may apply the scaling factors to the patient-specific biomarkers to determine the stimulation parameters to define each stimulation phase. For example, processing circuitry40may determine scaling factors a, b, and c. These scaling factors may be all different scaling factors, whereas in some instances, some scaling factors may be the same. In such instances, processing circuitry40may determine stimulation parameters to define each stimulation phase using the following equations: f_1=a*f_patient (e.g., a duration component of pulses of multiple phases602), f_2=b*f_patient (e.g., a duration component of pulses604), and f_3=c*f_patient (e.g., a duration component of pulses606). In such examples, f_patient is the patient-specific physiological signal, such as a signal received or retrieved from one of external devices34or programmer14. In some examples, processing circuitry40may determine that a duration parameter has been satisfied by counting the number of pulses, in conjunction with a reference to the frequency of the pulses (e.g., how many pulses per second or per millisecond). In any case, the duration parameter or pulse count parameter for the entrainment stimulation pulses604, the desynchronization stimulation pulses606, or both entrainment stimulation pulses604and desynchronization stimulation pulses606may be tailored to each particular patient12(e.g., based on physiological signals obtained from patient12, based on patient history of patient12, etc.). Processing circuitry40, for each case, may select the duration parameter or pulse count parameter from a range having an upper limit and a lower limit. In a non-limiting example, processing circuitry40may cause the entrainment stimulation pulses604to be delivered between 1 ms and 40 ms or until processing circuitry40determines an entrainment indication, such as from a signal received from an external wearable device or other internal device. In such instances, processing circuitry40may tailor the duration or pulse count parameter to patient12such that the tailored parameters fall within an acceptable range for treatment of patient12. In such examples, processing circuitry40may be configured to increase the first stimulation frequency to reach the second stimulation frequency. For example, the first stimulation frequency may be at or below 100 Hertz. In such examples, processing circuitry40may increase the first stimulation frequency by a first amount in order to achieve a second stimulation frequency for the desynchronization stimulation pulses606. In a non-limiting example, the first amount may be between around 30 Hertz and 125 Hertz. In the example ofFIG.6, the amplitude and pulse width of entrainment stimulation pulses604are greater than the amplitude and pulse width of desynchronization stimulation pulses606. However, the amplitudes and/or pulse widths may be the same or higher for desynchronization stimulation pulses606in other examples. It will be understood that the example rising and falling edges of a square-like pulse as inFIGS.6,7,11,12, and13are shown for illustration purposes, and that aspects of this disclosure are not so limited. For example, the pulses may be configured as saw, triangle, or other non-sinusoidal pulses or non-sinusoidal waveforms. That is, processing circuitry40may cause stimulation generator44to deliver stimulation in the form of saw pulses, triangle pulses, square pulses (e.g., rectangular pulses), etc. 
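The scaling-factor relationships above can be restated as a short Python sketch. This sketch is illustrative only and not part of the disclosed embodiments; the function names, the scaling-factor values, and the 1-40 ms clamp range are assumptions taken from the non-limiting examples described above.

```python
# Hypothetical sketch of the scaling-factor relationships described above:
# each phase component is derived from a patient-specific value f_patient
# using scaling factors a, b, and c.
def phase_components(f_patient: float, a: float, b: float, c: float):
    """Return (f_1, f_2, f_3) as in the equations above."""
    f_1 = a * f_patient  # e.g., duration component of pulses of multiple phases
    f_2 = b * f_patient  # e.g., duration component of the entrainment pulses
    f_3 = c * f_patient  # e.g., duration component of the desynchronization pulses
    return f_1, f_2, f_3


def clamp_to_range(value: float, lower: float, upper: float) -> float:
    """Keep a patient-tailored parameter within an acceptable range having an
    upper limit and a lower limit, as described above."""
    return max(lower, min(upper, value))


# Example: an arbitrary patient-specific value with illustrative scaling factors;
# the entrainment duration component is clamped to the 1-40 ms example range.
f_1, f_2, f_3 = phase_components(5.0, a=2.0, b=4.0, c=1.5)
entrainment_duration_ms = clamp_to_range(f_2, 1.0, 40.0)
```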
FIG.7is a chart illustrating example modulation of electrical stimulation pulses that repeat over time. For example, processing circuitry40may cause stimulation generator44to deliver a first set of stimulation pulses702and repeat the stimulation pulses as a repeating set of stimulation pulses704. The first set and the repeating set of stimulation pulses may include entrainment stimulation pulses706and desynchronization stimulation pulses708, similar to entrainment stimulation pulses604and desynchronization stimulation pulses606fromFIG.6. As shown, the duration and/or pulse count of different phases or between phases may not be uniform throughout a repeating set of stimulation pulses704. FIG.8is a chart illustrating an example electrical stimulation waveform modulated to include different frequencies over time. The example ofFIG.8illustrates entrainment stimulation waveform802and desynchronization stimulation waveform804being delivered in an open loop configuration (e.g., not changing over time based on efficacy of stimulation therapy). That is, processing circuitry40may be configured to perform electrical stimulation modulation in an open loop configuration by interleaving entrainment stimulation waveform802and desynchronization stimulation waveform804. Waveforms802,804may be defined by different amplitude, frequency, or other stimulation parameters. In the examples shown inFIGS.8,9,14and15, the example waveforms are shown as stimulation signals for illustration purposes. It will be understood that the stimulation signals may be generated and delivered as pulses instead, for example, as shown inFIG.6. That is, the examples shown inFIGS.8,9,14and15are intended to illustrate a change in frequency, amplitude, etc., but the actual stimulation may not necessarily be delivered as a sinusoid signal. For example, the example waveform of stimulation signals inFIG.9may translate to a stimulation pulse train similar to that shown, for example, in certain portion ofFIG.7. That is, the example waveforms may be delivered as pulses, although shown as waves, in some instances, for illustration purposes. As shown, entrainment stimulation waveform802may have a first amplitude and frequency, whereas desynchronization stimulation waveform804may have different stimulation parameters (e.g., higher frequency, lower amplitude, etc.). In some examples, including in the example waveform shown inFIG.8, entrainment stimulation waveform802and desynchronization stimulation waveform804may have the same frequency with different amplitudes. That is, entrainment stimulation waveform802may have an amplitude of A1and desynchronization stimulation waveform804may have an amplitude of A3(not shown). In such examples, entrainment stimulation waveform802and the desynchronization stimulation waveform804may share a same or substantially the same frequency, such as within 5 to 10 Hz of one another. In some examples, A3may be greater than A1. A3may, in some instances however, be less than A1. The amplitude ofFIG.8may be in units of volts or amps depending on the configuration of the electrical stimulation therapy, such as based on whether the electrical stimulation is based on electrical current (e.g., mA) or voltage (e.g., V). The time axis ofFIG.8may be in units of milliseconds. In some examples, processing circuitry40may be configured to control stimulation generator44to generate the entrainment stimulation pulses802for a predefined duration of time. 
In some examples, processing circuitry40may be configured to control stimulation generator44to generate the desynchronization stimulation pulse(s)804with a predetermined number of pulses (e.g., 1, 10, 100, etc.). In another example, processing circuitry40may be configured to control stimulation generator44to generate the desynchronization stimulation pulse(s)804for a predefined duration of time. The duration of time for each phase may be different, but in some instances, processing circuitry40may determine the duration of time to be the same for two or more different phases. In such examples, responsive to generating the predetermined number of pulses of the one or more desynchronization stimulation pulses, processing circuitry40may control stimulation generator44to generate entrainment stimulation waveform806following desynchronization stimulation waveforms804. In one example, responsive to generating the one or more desynchronization stimulation pulses for the predefined duration of time, processing circuitry40may control stimulation generator44to generate entrainment stimulation waveform806following desynchronization stimulation waveforms804. Processing circuitry40may control stimulation generator44to generate entrainment stimulation waveform806again for a predefined duration of time. It should be noted that the predefined duration of time for entrainment stimulation waveform806may or may not be the same duration of time as defined for entrainment stimulation waveform802. FIG.9is a chart illustrating an example electrical stimulation waveform modulated to have different frequencies over time. The example ofFIG.9is similar toFIG.8except that the amplitude of the desynchronization stimulation waveforms904is the same as the entrainment stimulation waveforms902, whereas stimulation generator44has altered the frequency. In an illustrative example, the frequency may be increased from 80 Hertz to more than 100 Hertz, such as increased to a frequency of 130 Hertz. FIG.10is a flow diagram illustrating an example operation for delivering electrical stimulation to brain28of patient12by utilizing feedback signals. As such, processing circuitry40may cause stimulation generator44to deliver stimulation pulses in a closed loop configuration. That is, processing circuitry40may trigger changes in stimulation pulse phases based on patient feedback, such as based on data received from sensing circuitry46. In some examples, processing circuitry40may receive an indication that the entrainment stimulation pulses have resulted in entrained electrical activity (1002). For example, processing circuitry40may receive an indication that delivery of the entrainment stimulation pulses has resulted in electrical activity of patient12being entrained. In one example, one or more biomarkers may indicate that a particular VOA is following an entrained waveform pattern. In some examples, processing circuitry40may control stimulation generator44to generate the desynchronization stimulation pulse(s) (1004). For example, responsive to receiving the indication that delivery of the entrainment stimulation pulses has resulted in electrical activity of patient12being entrained, processing circuitry40may control stimulation generator44to generate the set of one or more desynchronization stimulation pulse(s). Processing circuitry40may control stimulation generator44to deliver the set of desynchronization stimulation pulse(s) to one or more of electrodes24,26. 
In some examples, processing circuitry40may receive one or more feedback signals indicating a degree to which the desynchronization stimulation pulse(s) has disrupted at least a portion of the entrained electrical activity (1006). For example, processing circuitry40may receive a feedback signal indicating a degree of efficacy of the set of desynchronization stimulation pulse(s) in disrupting the entrained electrical activity. In some examples, the feedback signals may indicate a degree of disruption by indicating a change in biomarker characteristics. For example, the feedback signals may include signals indicating biomarker characteristics of a particular disease, such as beta in Parkinson's disease (e.g., beta activity, beta oscillations, etc.). In some examples, the feedback signals may include indications of changes in biomarker characteristics. In the illustrative example of a Parkinson's disease treatment, the feedback signals may indicate beta changes indicating improved beta and/or longer lasting desired beta (e.g., “good” beta). Processing circuitry40may utilize such feedback signals to adjust parameters of the desynchronization phase. In a non-limiting example, processing circuitry40may shorten the desynchronization phase in response to such feedback signals indicating such improvements in patient12. In one example, processing circuitry40may receive the feedback signals from one of external devices34, such as from a wearable device. In some examples, the feedback signals may be the same signals used to determine parameters for one or both of the entrainment phase and the desynchronization phase. For example, processing circuitry40may determine duration parameters for the entrainment and desynchronization phases based on signals received from one of external devices34. In such examples, processing circuitry40may adjust the duration parameters as treatment progresses based on signals received from the same one of the external devices34, such as a wearable device configured to identify and track beta biomarker characteristics. It should be noted that the set of desynchronization stimulation pulses may include only one desynchronization stimulation pulse, in some instances, and likewise, the set of desynchronization stimulation pulses may include the at least one desynchronization stimulation pulse as part of the set. In some examples, processing circuitry40may adjust one or more parameters of the second set of stimulation parameters to define an adjusted set of stimulation parameters (1008). The adjusted set of stimulation parameters may define at least one adjusted desynchronization stimulation pulse. In other examples, the adjusted set of stimulation parameters may define adjusted entrainment stimulation pulses. In some examples, processing circuitry40may determine, based at least in part on the feedback signal, an adjusted plurality of stimulation parameters for adjusting the desynchronization stimulation pulse(s). For example, processing circuitry40may control stimulation generator44to increase or decrease the frequency of the desynchronization stimulation pulse(s), either in a next repeating phase or as part of the currently executing phase. Likewise, processing circuitry40may control stimulation generator44to increase or decrease the frequency of the entrainment stimulation pulses, either in a next repeating phase or as part of the currently executing phase. 
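As an illustrative, non-limiting sketch of the feedback-driven adjustment described above (not the disclosed implementation), the following Python example adjusts hypothetical desynchronization parameters based on a residual beta-power feedback value. The function name, thresholds, step sizes, and limits are assumptions chosen for illustration only.

```python
# Hypothetical sketch of the closed-loop adjustment described above: a feedback
# value (e.g., residual beta power after the desynchronization phase) is used to
# adjust the desynchronization parameters for the next phase.
def adjust_desync_params(freq_hz: float, duration_ms: float,
                         residual_beta: float, target_beta: float):
    """Return adjusted (frequency, duration) for the next desynchronization phase."""
    if residual_beta <= target_beta:
        # Entrained activity was disrupted effectively: shorten the phase.
        duration_ms = max(duration_ms * 0.8, 1.0)
    else:
        # Disruption insufficient: increase frequency within an illustrative limit.
        freq_hz = min(freq_hz + 10.0, 200.0)
    return freq_hz, duration_ms


# Example usage over several repeating phases with illustrative feedback values.
freq, dur = 130.0, 10.0
for beta in (0.9, 0.7, 0.4):
    freq, dur = adjust_desync_params(freq, dur, beta, target_beta=0.5)
```

A corresponding adjustment could, under the same assumptions, be applied to the entrainment parameters instead of, or in addition to, the desynchronization parameters, consistent with the alternatives described above.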
In some examples, processing circuitry40may control stimulation generator44to generate at least one adjusted desynchronization stimulation pulse(s) according to the adjusted set of stimulation parameters (1010). As such, processing circuitry40may control stimulation generator44to deliver adjusted desynchronization stimulation pulse(s), or adjusted entrainment stimulation pulses, according to the adjusted set of stimulation parameters. While described with reference to processing circuitry40of IMD16, the techniques of this disclosure are not so limited, and the techniques of this disclosure may be implemented by other processing circuitry (alone or in combination with processing circuitry of another device), such as processing circuitry60of programmer14or processing circuitry of external device34or an external server. For example, processing circuitry60may determine adjustments to stimulation parameters and transmit the adjusted stimulation parameters to IMD16, thereby causing IMD16(e.g., stimulation generator44) to deliver the electrical stimulation pulses. In addition, it will be understood that some techniques ofFIG.10may be combined or omitted altogether. For example, processing circuitry, such as that of IMD16, may determine adjusted stimulation parameters that define the desynchronization stimulation pulses prior to the delivery of the first set of desynchronization stimulation pulses to patient12. In such instances, the first set of desynchronization stimulation pulses may also be based on adjusted stimulation parameters. FIG.11is a chart illustrating example modulation of electrical stimulation pulses1102. In the example ofFIG.11, the electrical stimulation pulses1102include an example rest phase1104, example entrainment stimulation pulses1106,1110, and example desynchronization pulses1108. In some examples, processing circuitry40may control stimulation generator44to interleave a rest phase between generation of the entrainment stimulation pulses1110and the desynchronization stimulation pulse(s)1108, as shown. In such examples, a duration of the rest phase may be patient specific. For example, processing circuitry40may determine the duration of the rest phase using the following example equation: f_4=d*f_patient, where d is a scaling factor, f_4 is the rest phase duration or in particular instances, the rest phase frequency, and f_patient is the patient specific signal (e.g., physiological signal). In some examples, processing circuitry40may adjust a duration of the rest phase from a first rest phase duration to a second rest phase duration based at least in part on one or more biomarkers of patient12. For example, processing circuitry40may shorten the rest phase duration or lengthen the rest phase duration over time. For example, processing circuitry40may adjust the duration of the second rest phase to be shorter than the duration of the first rest phase. That is, processing circuitry40may decrease the duration of the rest phase over time, such that a subsequent rest phase has a shorter duration than a preceding rest phase duration. In one illustrative example, processing circuitry40may decrease the duration of the rest phase until the rest phase is removed (e.g., the subsequent rest phase has a duration of 0 seconds). FIG.12is a chart illustrating example modulation of electrical stimulation pulses1202and an example rest phase1204. Electrical stimulation pulses1202may include both entrainment stimulation pulses1208and desynchronization stimulation pulses1210. 
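The rest-phase relationship f_4=d*f_patient and the shortening of the rest phase over time, described above, can be sketched in Python as follows. This sketch is illustrative only and not part of the disclosed embodiments; the function names, the scaling-factor value, and the adjustment step are assumptions.

```python
# Hypothetical sketch of the rest-phase duration logic described above: an initial
# duration derived from the patient-specific value (f_4 = d * f_patient), then
# adjusted over time, possibly down to zero (rest phase removed).
def initial_rest_duration(f_patient: float, d: float) -> float:
    """Return f_4 per the equation above."""
    return d * f_patient


def next_rest_duration(current_ms: float, lengthen: bool) -> float:
    """Lengthen or shorten the rest phase based on biomarker-derived feedback;
    0 ms corresponds to the rest phase being removed."""
    if lengthen:
        return current_ms * 1.2
    return max(current_ms - 5.0, 0.0)


# Example: illustrative values only.
rest_ms = initial_rest_duration(f_patient=5.0, d=3.0)
rest_ms = next_rest_duration(rest_ms, lengthen=False)
```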
In the example ofFIG.12, a rest phase is interleaved between desynchronization pulses1210from stimulation pulses1202and desynchronization pulses1212from stimulation pulses1206. In some examples, a rest phase, such as rest phase1204, may only be used in the closed loop configuration. FIG.13is a chart illustrating example modulation of electrical stimulation pulses1302,1306,1310,1314and example rest phases1304,1308,1312. In such examples, inter-pulse periods of rest or transition pulses can be interleaved between delivery of entrainment stimulation pulses and the desynchronization stimulation pulse(s). The rest phases1304,1308,1312may be included in order to allow for the natural evolution of network population resynchronization to occur. Since patient symptoms may not be present before resynchronization occurs, the system may withhold stimulation during the rest phases1304,1308,1312in order to conserve power when stimulation is not necessary to treat the patient. As shown, in some examples, electrical stimulation pulses1302may include entrainment stimulation pulses1316, desynchronization stimulation pulses1318, or both, as in the example electrical stimulation pulses1310. FIG.14is a chart illustrating an example electrical stimulation waveform including rest phases1304. The electrical stimulation waveform includes entrainment stimulation pulses1302and desynchronization stimulation pulses1306in the form of waveforms. In the example ofFIG.14, the entrainment stimulation pulses1302and desynchronization stimulation pulses1306are separated by a rest phase1304. As inFIG.8or9, the amplitude ofFIG.14may be in units of V or mA depending on the configuration of the electrical stimulation therapy, and the time axis may be in units of milliseconds. In some examples, processing circuitry40may cause stimulation generator44to deliver the entrainment stimulation pulses and the at least one desynchronization stimulation pulse in an open loop configuration. In addition, processing circuitry40may progress to delivering the entrainment stimulation pulses and the desynchronization stimulation pulse(s) in a closed loop configuration involving feedback from the electrical stimulation system. In such examples, the closed loop configuration may have one or more rest phases interleaved between the entrainment stimulation pulses and the desynchronization stimulation pulse(s). Processing circuitry40may then determine a degree of synchrony obtained in a neuronal subpopulation of the brain. For example, Processing circuitry40may determine the degree of synchrony during a first rest phase. In addition, during delivery of the entrainment stimulation pulses and the at least one desynchronization stimulation pulse in the closed loop configuration, processing circuitry40may determine a duration for the second rest phase different from the first rest phase based on one or more biomarkers of patient12. For example, processing circuitry may determine the duration for the second rest phase based at least in part on the degree of network synchrony obtained from prior stimulation pulses. For example, the open loop configuration may or may not have a rest phase. The open loop configuration may then progress to form a closed loop. 
In the closed loop configuration, processing circuitry40may cause stimulation generator to introduce a rest phase, in cases where the open loop configuration did not have a rest phase, or in some cases, continue providing a rest phase, such as by causing stimulation generator44to cease delivering stimulation. The rest phase duration may be based on one or more biomarkers. The rest phase may increase in duration when the target neural network is in synchrony or is behaving according to an expected behavior determined based on the desynchronization and/or entrainment stimulation pulses. If the network starts misbehaving, however, then processing circuitry40may shorten the duration of the rest phase, such as by removing the rest phase altogether. FIG.15is a chart illustrating example electrical stimulation waveforms being delivered to multiple electrodes. The dashed lines indicate optional delivery of pulses. The stimulation includes entrainment stimulation pulses1502,1508, and optionally1506. The stimulation also includes rest phases1512and1510. The stimulation also includes desynchronization stimulation pulses1504. In such examples, the desynchronization stimulation pulses1504may be delivered at substantially the same time as entrainment stimulation pulses1506. As inFIG.8,9, or14, the amplitude ofFIG.15may be in units of V or mA depending on the configuration of the electrical stimulation therapy, and the time axis may be in units of milliseconds. In another example, processing circuitry40may determine a source of pathology in brain28of patient12in order to determine the stimulation parameters that define the desynchronization stimulation pulse(s). For example, processing circuitry40may receive physiological signals or biomarkers indicating the pathology source. As such, processing circuitry40may determine the source of pathology in brain28of patient12. In such examples, processing circuitry40may also identify a neuronal subpopulation of brain28of patient12relating to the source of the pathology. As such, processing circuitry40may determine the stimulation parameters that define the desynchronization stimulation pulse(s) based at least in part on the identified neuronal subpopulation relating to the source of the pathology. In particular, processing circuitry40may determine the stimulation parameters so as to target the neuronal subpopulation using the desynchronization stimulation pulse(s). In some examples, processing circuitry40may determine a change in pathology of a patient. In such examples, processing circuitry40may alter a directionality of the at least one desynchronization stimulation pulse from targeting a first subpopulation to targeting a second subpopulation based at least in part on the determined change in pathology. Regardless of how the pathological brain signals are identified, by disrupting the oscillations of the bioelectrical brain signals using desynchronization stimulation pulses within one or more pathological frequency regions, motor symptoms that manifest themselves when certain frequency oscillations are present may be reduced or substantially eliminated using the various examples of this disclosure. The techniques described in this disclosure may be implemented, at least in part, in hardware, software, firmware or any combination thereof. 
For example, various aspects of the described techniques may be implemented within one or more processors or processing circuitry, including one or more microprocessors, DSPs, ASICs, FPGAs, or any other equivalent integrated or discrete logic circuitry, as well as any combinations of such components. The term “processor” or “processing circuitry” may generally refer to any of the foregoing logic circuitry, alone or in combination with other logic circuitry, or any other equivalent circuitry. A control unit including hardware may also perform one or more of the techniques of this disclosure. Such hardware, software, and firmware may be implemented within the same device or within separate devices to support the various operations and functions described in this disclosure. In addition, any of the described units, circuits or components may be implemented together or separately as discrete but interoperable logic devices. Depiction of different features as circuits or units is intended to highlight different functional aspects and does not necessarily imply that such circuits or units must be realized by separate hardware or software components. Rather, functionality associated with one or more circuits or units may be performed by separate hardware or software components or integrated within common or separate hardware or software components. The techniques described in this disclosure may also be embodied or encoded in a computer-readable medium, such as a computer-readable storage medium, containing instructions that may be described as non-transitory media. Instructions embedded or encoded in a computer-readable storage medium may cause a programmable processor, or other processor, to perform the method, e.g., when the instructions are executed. Computer readable storage media may include RAM, ROM, PROM, EPROM, EEPROM, flash memory, a hard disk, a CD-ROM, a floppy disk, a cassette, magnetic media, optical media, or other computer readable media. In addition, it should be noted that the systems described herein may not be limited to treatment of a human patient. In alternative examples, these systems may be implemented in non-human patients, e.g., primates, canines, equines, pigs, and felines. These animals may undergo clinical or research therapies that may benefit from the subject matter of this disclosure. Various examples of the disclosure have been described. These and other examples are within the scope of the following claims.
114,948
11857791
DESCRIPTION OF EXEMPLARY EMBODIMENTS Reference will now be made in detail to exemplary embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. Embodiments of the present disclosure relate generally to devices for modulating a nerve through the delivery of energy. Nerve modulation, or neural modulation, includes inhibition (e.g. blockage), stimulation, modification, regulation, or therapeutic alteration of activity, electrical or chemical, in the central, peripheral, or autonomic nervous system. Nerve modulation may take the form of nerve stimulation, which may include providing energy to the nerve to create a voltage change sufficient for the nerve to activate, or propagate an electrical signal of its own. Nerve modulation may also take the form of nerve inhibition, which may include providing energy to the nerve sufficient to prevent the nerve from propagating electrical signals. Nerve inhibition may be performed through the constant application of energy, and may also be performed through the application of enough energy to inhibit the function of the nerve for some time after the application. Other forms of neural modulation may modify the function of a nerve, causing a heightened or lessened degree of sensitivity. As referred to herein, modulation of a nerve may include modulation of an entire nerve and/or modulation of a portion of a nerve. For example, modulation of a motor neuron may be performed to affect only those portions of the neuron that are distal of the location to which energy is applied. In patients that suffer from a sleep breathing disorder, for example, a primary target response of nerve stimulation may include contraction of a tongue muscle (e.g., the genioglossus muscle) in order to move the tongue to a position that does not block the patient's airway. In the treatment of migraine headaches, nerve inhibition may be used to reduce or eliminate the sensation of pain. In the treatment of hypertension, neural modulation may be used to increase, decrease, eliminate or otherwise modify nerve signals generated by the body to regulate blood pressure. While embodiments of the present disclosure may be disclosed for use in patients with specific conditions, the embodiments may be used in conjunction with any patient/portion of a body where nerve modulation may be desired. That is, in addition to use in patients with a sleep breathing disorder, migraine headaches, or hypertension, embodiments of the present disclosure may be used in many other areas, including, but not limited to: deep brain stimulation (e.g. treatment of epilepsy, Parkinson's and depression); cardiac pace-making; stomach muscle stimulation (e.g., treatment of obesity); back pain; incontinence; menstrual pain; and/or any other condition that may be affected by neural modulation. FIG.1illustrates an implant unit and external unit, according to an exemplary embodiment of the present disclosure. An implant unit110may be configured for implantation in a subject, in a location that permits it to modulate a nerve115. The implant unit110may be located in a subject such that intervening tissue111exists between the implant unit110and the nerve115. Intervening tissue may include muscle tissue, connective tissue, organ tissue, or any other type of biological tissue. Thus, location of implant unit110does not require contact with nerve115for effective neuromodulation. 
The implant unit110may also be located directly adjacent to nerve115, such that no intervening tissue111exists. In treating a sleep breathing disorder, implant unit110may be located on a genioglossus muscle of a patient. Such a location is suitable for modulation of the hypoglossal nerve, branches of which run inside the genioglossus muscle. Implant unit110may also be configured for placement in other locations. For example, migraine treatment may require subcutaneous implantation in the back of the neck, near the hairline of a subject, or behind the ear of a subject, to modulate the greater occipital nerve and/or the trigeminal nerve. Treating hypertension may require the implantation of a neuromodulation implant intravascularly inside the renal artery or renal vein (to modulate the parasympathetic renal nerves), either unilaterally or bilaterally, or inside the carotid artery or jugular vein (to modulate the glossopharyngeal nerve through the carotid baroreceptors). Alternatively or additionally, treating hypertension may require the implantation of a neuromodulation implant subcutaneously, behind the ear or in the neck, for example, to directly modulate the glossopharyngeal nerve. External unit120may be configured for location external to a patient, either directly contacting, or close to, the skin112of the patient. External unit120may be configured to be affixed to the patient, for example, by adhering to the skin112of the patient, or through a band or other device configured to hold external unit120in place. Adherence of external unit120to the skin may occur such that it is in the vicinity of the location of implant unit110. FIG.2illustrates an exemplary embodiment of a neuromodulation system for delivering energy in a patient100with a sleep breathing disorder. The system may include an external unit120that may be configured for location external to the patient. As illustrated inFIG.2, external unit120may be configured to be affixed to the patient100.FIG.2illustrates that, in a patient100with a sleep breathing disorder, the external unit120may be configured for placement underneath the patient's chin and/or on the front of the patient's neck. The suitability of placement locations may be determined by communication between external unit120and implant unit110, discussed in greater detail below. In alternate embodiments, for the treatment of conditions other than a sleep breathing disorder, the external unit may be configured to be affixed anywhere suitable on a patient, such as the back of a patient's neck, i.e. for communication with a migraine treatment implant unit, on the outer portion of a patient's abdomen, i.e. for communication with a stomach modulating implant unit, on a patient's back, i.e. for communication with a renal artery modulating implant unit, and/or on any other suitable external location on a patient's skin, depending on the requirements of a particular application. External unit120may further be configured to be affixed to an alternative location proximate to the patient. For example, in one embodiment, the external unit may be configured to fixedly or removably adhere to a strap or a band that may be configured to wrap around a part of a patient's body. Alternatively, or in addition, the external unit may be configured to remain in a desired location external to the patient's body without adhering to that location. The external unit120may include a housing. The housing may include any suitable container configured for retaining components. 
In addition, while the external unit is illustrated schematically inFIG.2, the housing may be any suitable size and/or shape and may be rigid or flexible. Non-limiting examples of housings for the external unit120include one or more of patches, buttons, or other receptacles having varying shapes and dimensions and constructed of any suitable material. In one embodiment, for example, the housing may include a flexible material such that the external unit may be configured to conform to a desired location. For example, as illustrated inFIG.2, the external unit may include a skin patch, which, in turn, may include a flexible substrate. The material of the flexible substrate may include, but is not limited to, plastic, silicone, woven natural fibers, and other suitable polymers, copolymers, and combinations thereof. Any portion of external unit120may be flexible or rigid, depending on the requirements of a particular application. As previously discussed, in some embodiments external unit120may be configured to adhere to a desired location. Accordingly, in some embodiments, at least one side of the housing may include an adhesive material. The adhesive material may include a biocompatible material and may allow for a patient to adhere the external unit to the desired location and remove the external unit upon completion of use. The adhesive may be configured for single or multiple uses of the external unit. Suitable adhesive materials may include, but are not limited to, biocompatible glues, starches, elastomers, thermoplastics, and emulsions. FIG.3schematically illustrates a system including external unit120and an implant unit110. In some embodiments, implant unit110may be configured as a unit to be implanted into the body of a patient, and external unit120may be configured to send signals to and/or receive signals from implant unit110. As shown inFIG.3, various components may be included within a housing of external unit120or otherwise associated with external unit120. As illustrated inFIG.3, at least one processor144may be associated with external unit120. For example, the at least one processor144may be located within the housing of external unit120. In alternative embodiments, the at least one processor may be configured for wired or wireless communication with the external unit from a location external to the housing. The at least one processor may include any electric circuit that may be configured to perform a logic operation on at least one input variable. The at least one processor may therefore include one or more integrated circuits, microchips, microcontrollers, and microprocessors, which may be all or part of a central processing unit (CPU), a digital signal processor (DSP), a field programmable gate array (FPGA), or any other circuit known to those skilled in the art that may be suitable for executing instructions or performing logic operations. FIG.3illustrates that the external unit120may further be associated with a power source140. The power source may be removably couplable to the external unit at an exterior location relative to the external unit. Alternatively, as shown inFIG.3, power source140may be permanently or removably coupled to a location within external unit120. The power source may further include any suitable source of power configured to be in electrical communication with the processor. In one embodiment, for example, the power source140may include a battery. The power source may be configured to power various components within the external unit.
As illustrated inFIG.3, power source140may be configured to provide power to the processor144. In addition, the power source140may be configured to provide power to a signal source142. The signal source142may be in communication with the processor144and may include any device configured to generate a signal (e.g., a sinusoidal signal, square wave, triangle wave, microwave, radio-frequency (RF) signal, or any other type of electromagnetic signal). Signal source142may include, but is not limited to, a waveform generator that may be configured to generate alternating current (AC) signals and/or direct current (DC) signals. In one embodiment, for example, signal source142may be configured to generate an AC signal for transmission to one or more other components. Signal source142may be configured to generate a signal of any suitable frequency. In some embodiments, signal source142may be configured to generate a signal having a frequency of from about 6.5 MHz to about 13.6 MHz. In additional embodiments, signal source142may be configured to generate a signal having a frequency of from about 7.4 to about 8.8 MHz. In further embodiments, signal source142may generate a signal having a frequency as low as 90 kHz or as high as 28 MHz. Signal source142may be configured for direct or indirect electrical communication with an amplifier146. The amplifier may include any suitable device configured to amplify one or more signals generated from signal source142. Amplifier146may include one or more of various types of amplification devices, including, for example, transistor based devices, operational amplifiers, RF amplifiers, power amplifiers, or any other type of device that can increase the gain associated with one or more aspects of a signal. The amplifier may further be configured to output the amplified signals to one or more components within external unit120. External unit120may additionally include a memory unit143. Processor144may communicate with memory unit143, for example, to store and retrieve data. Stored and retrieved data may include, for example, information about therapy parameters and information about implant unit110and external unit120. The use of memory unit143is explained in greater detail below. Memory unit143may be any suitable form of non-transient computer readable storage medium. External unit120may also include communications interface145, which may be provided to permit external unit120to communicate with other devices, such as programming devices and data analysis devices. Further details regarding communications interface145are included below. The external unit may additionally include a primary antenna150. The primary antenna may be configured as part of a circuit within external unit120and may be coupled either directly or indirectly to various components in external unit120. For example, as shown inFIG.3, primary antenna150may be configured for communication with the amplifier146. The primary antenna may include any conductive structure that may be configured to create an electromagnetic field. The primary antenna may further be of any suitable size, shape, and/or configuration. The size, shape, and/or configuration may be determined by the size of the patient, the placement location of the implant unit, the size and/or shape of the implant unit, the amount of energy required to modulate a nerve, a location of a nerve to be modulated, the type of receiving electronics present on the implant unit, etc.
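For illustration only, the signal path described above (signal source142, amplifier146, and primary antenna150) can be thought of as a drive chain whose carrier frequency must match the electrical characteristics of the antenna. The short sketch below, written in Python purely as an illustration, computes the tuning capacitance that would resonate a coil of a given inductance at several frequencies within the ranges mentioned above. The coil inductance value and the assumption of a simple LC resonance are hypothetical and are not parameters of the disclosed circuit.

    import math

    def tuning_capacitance(inductance_h, frequency_hz):
        # Solve f = 1 / (2 * pi * sqrt(L * C)) for C.
        return 1.0 / (inductance_h * (2.0 * math.pi * frequency_hz) ** 2)

    coil_inductance = 2.0e-6  # henries; hypothetical value for a multi-turn coil antenna

    for f in (6.5e6, 8.0e6, 13.6e6):
        c = tuning_capacitance(coil_inductance, f)
        print(f"{f / 1e6:5.2f} MHz -> tuning capacitance of roughly {c * 1e12:6.1f} pF")

Whether any such tuning network is used, and with what component values, would depend on the antenna geometry and drive electronics actually chosen.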
The primary antenna may include any suitable antenna known to those skilled in the art that may be configured to send and/or receive signals. Suitable antennas may include, but are not limited to, a long-wire antenna, a patch antenna, a helical antenna, etc. In one embodiment, for example, as illustrated inFIG.3, primary antenna150may include a coil antenna. Such a coil antenna may be made from any suitable conductive material and may be configured to include any suitable arrangement of conductive coils (e.g., diameter, number of coils, layout of coils, etc.). A coil antenna suitable for use as primary antenna150may have a diameter of between about 1 cm and 10 cm, and may be circular or oval shaped. In some embodiments, a coil antenna may have a diameter between 5 cm and 7 cm, and may be oval shaped. A coil antenna suitable for use as primary antenna150may have any number of windings, e.g. 4, 8, 12, or more. A coil antenna suitable for use as primary antenna150may have a wire diameter between about 0.1 mm and 2 mm. These antenna parameters are exemplary only, and may be adjusted above or below the ranges given to achieve suitable results. As noted, implant unit110may be configured to be implanted in a patient's body (e.g., beneath the patient's skin).FIG.2illustrates that the implant unit110may be configured to be implanted for modulation of a nerve associated with a muscle of the subject's tongue130. Modulating a nerve associated with a muscle of the subject's tongue130may include stimulation to cause a muscle contraction. In further embodiments, the implant unit may be configured to be placed in conjunction with any nerve that one may desire to modulate. For example, modulation of the occipital nerve, the greater occipital nerve, and/or the trigeminal nerve may be useful for treating pain sensation in the head, such as that from migraines. Modulation of parasympathetic nerve fibers on and around the renal arteries (i.e. the renal nerves), the vagus nerve, and/or the glossopharyngeal nerve may be useful for treating hypertension. Additionally, any nerve of the peripheral nervous system (both spinal and cranial), including motor neurons, sensory neurons, sympathetic neurons and parasympathetic neurons, may be modulated to achieve a desired effect. Implant unit110may be formed of any materials suitable for implantation into the body of a patient. In some embodiments, implant unit110may include a flexible carrier161(FIG.4) including a flexible, biocompatible material. Such materials may include, for example, silicone, polyimides, phenyltrimethoxysilane (PTMS), polymethyl methacrylate (PMMA), Parylene C, polyimide, liquid polyimide, laminated polyimide, black epoxy, polyether ether ketone (PEEK), Liquid Crystal Polymer (LCP), Kapton, etc. Implant unit110may further include circuitry including conductive materials, such as gold, platinum, titanium, or any other biocompatible conductive material or combination of materials. Implant unit110and flexible carrier161may also be fabricated with a thickness suitable for implantation under a patient's skin. Implant110may have thickness of less than about 4 mm or less than about 2 mm. Other components that may be included in or otherwise associated with the implant unit are illustrated inFIG.3. For example, implant unit110may include a secondary antenna152mounted onto or integrated with flexible carrier161. 
Similar to the primary antenna, the secondary antenna may include any suitable antenna known to those skilled in the art that may be configured to send and/or receive signals. The secondary antenna may be of any suitable size, shape, and/or configuration. The size, shape and/or configuration may be determined by the size of the patient, the placement location of the implant unit, the amount of energy required to modulate the nerve, etc. Suitable antennas may include, but are not limited to, a long-wire antenna, a patch antenna, a helical antenna, etc. In some embodiments, for example, secondary antenna152may include a coil antenna having a circular shape (see alsoFIG.10) or oval shape. Such a coil antenna may be made from any suitable conductive material and may be configured to include any suitable arrangement of conductive coils (e.g., diameter, number of coils, layout of coils, etc.). A coil antenna suitable for use as secondary antenna152may have a diameter of between about 5 mm and 30 mm, and may be circular or oval shaped. A coil antenna suitable for use as secondary antenna152may have any number of windings, e.g. 4, 15, 20, 30, or 50. A coil antenna suitable for use as secondary antenna152may have a wire diameter between about 0.01 mm and 1 mm. These antenna parameters are exemplary only, and may be adjusted above or below the ranges given to achieve suitable results. FIGS.4aand4billustrate an exemplary embodiment of external unit120, including features that may be found in any combination in other embodiments.FIG.4aillustrates a side view of external unit120, depicting carrier1201and electronics housing1202. Carrier1201may include a skin patch configured for adherence to the skin of a subject, for example through adhesives or mechanical means. Carrier1201may be flexible or rigid, or may have flexible portions and rigid portions. Carrier1201may include a primary antenna150, for example, a double-layer crossover antenna1101such as that illustrated inFIGS.5aand5b. Carrier1201may also include power source140, such as a paper battery, thin film battery, or other type of substantially flat and/or flexible battery. Carrier1201may also include any other type of battery or power source. Carrier1201may also include a connector1203configured for selectively or removably connecting carrier1201to electronics housing1202. Connector1203may extend or protrude from carrier1201. Connector1203may be configured to be received by a recess1204of electronics housing1202. Connector1203may be configured as a non-pouch connector, configured to provide a selective connection to electronics housing1202without the substantial use of a concave feature. Connector1203may include, for example, a peg and may have flexible arms. Connector1203may further include a magnetic connection, a Velcro connection, and/or a snap dome connection. Connector1203may also include a locating feature, configured to locate electronics housing1202at a specific height, axial location, and/or axial orientation with respect to carrier1201. A locating feature of connector1203may further include pegs, rings, boxes, ellipses, bumps, etc. Connector1203may be centered on carrier1201, may be offset from the center by a predetermined amount, or may be provided at any other suitable location of carrier1201. Multiple connectors1203may be provided on carrier1201. Connector1203may be configured such that removal from electronics housing1202causes breakage of connector1203.
Such a feature may be desirable to prevent re-use of carrier1201, which may lose some efficacy through continued use. Direct contact between primary antenna150and the skin of a subject may result in alterations of the electrical properties of primary antenna150. This may be due to two effects. First, the skin of a subject is a resistive volume conductor, and creating electrical contact between primary antenna150and the skin may result in the skin becoming part of an electric circuit including the primary antenna. Thus, when primary antenna150is energized, current may flow through the skin, altering the electrical properties of primary antenna150. Second, when the subject sweats, the generated moisture may also act as a resistive conductor, creating electrical pathways that did not exist previously. These effects may occur even when there is no direct contact between the primary antenna150and the skin, for example, when an adhesive layer is interposed between the primary antenna150and the skin. Because many adhesives are not electrically insulating, and may absorb moisture from a subject's skin, these effects can occur without direct contact between the antenna and the skin. In some embodiments, processor144may be configured to detect the altered properties of primary antenna150and take these into account when generating modulation and sub-modulation control signals for transmission to an implant unit110. In some embodiments, carrier1201may include a buffered antenna, as illustrated inFIGS.6a-band22(not drawn to scale), to counteract (e.g., reduce or eliminate) the above-described effects.FIG.6aillustrates an embodiment of carrier1201as viewed from the bottom.FIG.6billustrates an embodiment of carrier1201in cross section. Carrier1201may include one or more structures for separating an antenna from the skin of a subject. In some embodiments, carrier1201may include a buffer layer2150that provides an air gap2160between the skin of a subject and the antenna. Carrier1201may also include a top layer2130and a top center region2140. As illustrated inFIGS.6a-b, buffer layer2150may be disposed on the flexible carrier at a position so as to be between the antenna and the skin of the subject when carrier1201is in use. Buffer layer2150may include any suitable material or structure to provide or establish an air gap2160between the antenna150and the skin of the subject. As used herein, air gap2160may include any space, area, or region between the skin of the subject and antenna150not filled by a solid material. In some embodiments, buffer layer2150may include a single layer. In other embodiments, buffer layer2150may include multiple sub-layers (e.g., two, three, or more sub-layers). In still other embodiments, buffer layer2150may include an extension of one or more structures associated with carrier1201in order to move antenna150away from a subject's skin. The air gap2160provided may be contiguous or may reside within or among various structures associated with buffer layer2150. For example, in some embodiments, air gap2160may include a space or region free or relatively free of, structures, such as air gap2160shown inFIG.6b, which includes an air filled volume created between the skin of the subject and antenna150by the structure of buffer layer2150. In other embodiments, air gap2160may be formed within or between structures associated with buffer layer2150. 
For example, air gap2160may be formed by one or more porous materials, including open or closed cell foams, fibrous mats, woven materials, fabrics, perforated sheet materials, meshes, or any other material or structure having air spaces within boundaries of the material or structure. Further, buffer layer2150may include dielectric materials, hydrophobic closed cell foams, open celled foams, cotton and other natural fibers, porous cellulose based materials, synthetic fibers, and any other material or structure suitable for establishing air gap2160. Air gap2160need not contain only air. Rather, other materials, fluids, or gases may be provided within air gap2160. For example, in some cases, air gap2160may include carbon dioxide, nitrogen, argon, or any other suitable gases or materials. FIGS.6aand6bprovide a diagrammatic depiction of a carrier1201including an exemplary buffer layer2150, consistent with the present disclosure. In the structure shown inFIGS.6aand6b, air gap2160is provided by a buffer layer2150having multiple sub-layers. Specifically, buffer layer2150may include a separation sub-layer2110and an adhesive sub-layer2120. Separation sub-layer2110, which may or may not be included in buffer layer2150, may include any structure for isolating or otherwise separating antenna150from a surface of the subject's skin. In the embodiment shown inFIGS.6aand6b, air gap2160may be established through patterning of adhesive sub-layer2120. For example, as shown, adhesive sub-layer2120may be disposed around a perimeter of separation sub-layer2110, and air gap2160may be established in a region in the middle of adhesive sub-layer2120. Of course, other configurations of adhesive sub-layer2120may also be possible. For example, air gap2160may be formed between any pattern of features associated with adhesive sub-layer2120, including, for example, adhesive stripes, dots, meshes, etc. For example, adhesive sub-layer2120may include a series of discrete adhesive dots or lines, a mesh-pattern of adhesive material, or any other pattern suitable for establishing air gap2160. While in some embodiments air gap2160may be established by adhesive sub-layer2120or by any other sub-layer of buffer layer2150, in other embodiments, air gap2160may be established by separation sub-layer2110. In such embodiments, separation sub-layer2110may be made to include various patterns (e.g., perforations, meshes, islands, bumps, pillars, etc.) to provide air gap2160. Separation sub-layer2110may also be formed of various types of materials. For example, separation sub-layer2110may include open or closed cell foam, fabric, paper, perforated sheet materials, or any other material suitable for providing air gaps or spaces therewithin. Separation sub-layer2110may be formed of insulating material, such as dielectric material. In some embodiments, buffer layer2150may be formed by extensions of another layer (e.g., a top layer2130) associated with carrier1201. For example, top layer2130may include legs or extension portions that extend below antenna150such that when in use, antenna150is positioned at a location above the subject's skin. Air gap2160may have any suitable dimensions. In some embodiments, air gap2160may be between 250 microns and 1 mm in height. In other embodiments, air gap2160may be between 1 mm and 5 mm in height.
The buffered antenna, as illustrated inFIGS.6aand6b, may serve to electrically insulate and/or isolate primary antenna150from the skin and/or the sweat of a subject, thus eliminating or reducing the alterations to electrical properties of the antenna that may result from contact with the skin and/or sweat of the subject. A buffered antenna may be constructed with either or both of separation sub-layer2110and air gap2160disposed within buffer layer2150. In some embodiments, carrier1201may be provided with removable tabs, as shown inFIG.7, for altering a size of the carrier. Users of carrier1201differ significantly in size and shape. Some users may have larger neck and/or chin areas, and some may have smaller ones. Some users may require more adhesive area to maintain comfort during a therapeutic period. To accommodate various preferences, carrier1201may be provided with removable tabs2220at either end, wherein the tabs are provided with a perforated detachment portion where they connect to the carrier1201. A user who desires the increased adhesive area may leave the tabs intact, while a user desiring a smaller adhesive area may tear the tabs2220along the perforated detachment portion to remove them. In alternative embodiments, tabs2220may be sized and shaped to accommodate the thumbs of a user. In still other embodiments, non-removable tabs sized and shaped to accommodate the thumbs of a user may be provided. In some embodiments, removable tabs2220may be provided without adhesive, to be used during attachment of carrier1201and subsequently removed. Non-adhesive removable tabs2220may permit a user to hold carrier1201without accidentally sticking it to their fingers. Returning now toFIGS.4aand4b, electronics housing1202is illustrated in side view inFIG.4aand in a bottom view inFIG.4b. Electronics housing1202may include electronics portion1205, which may be arranged inside electronics housing1202in any manner that is suitable. Electronics portion1205may include various components, further discussed below, of external unit120. For example, electronics portion1205may include any combination of at least one processor144associated with external unit120, a power source140, such as a battery, a primary antenna150, and an electrical circuit170. Electronics portion1205may also include any other component described herein as associated with external unit120. Additional components may also be recognized by those of skill in the art. Electronics housing1202may include a recess1204configured to receive connector1203. Electronics housing1202may include at least one electrical connector1210,1211,1212. Electrical connectors1210,1211,1212may be arranged with pairs of electrical contacts, as shown inFIG.4b, or with any other number of electrical contacts. The pair of electrical contacts of each electrical connector1210,1211,1212may be continuously electrically connected with each other inside of housing1202, such that the pair of electrical contacts represents a single connection point to a circuit. In such a configuration, it is only necessary that one of the electrical contacts within a pair be connected. Electrical connectors1210,1211, and1212may thus include redundant electrical contacts. The electrical contacts of each electrical connector1210,1211,1212may also represent opposite ends of a circuit, for example, the positive and negative ends of a battery charging circuit.
In an exemplary embodiment, as shown inFIG.4b, electrical connectors1210,1211, and1212are configured so as to maintain electrical contact with an exposed electrical contact portion1108independent of an axial orientation of electronics housing1202. Connection between any or all of electrical connectors1210,1211,1212and exposed electrical contact portions1108may thus be established and maintained irrespective of relative axial positions of carrier1201and housing1202. Thus, when connector1203is received by recess1204, housing1202may rotate with respect to carrier1201without interrupting electrical contact between at least one of electrical connectors1210,1211,1212and exposed electrical contact portions1108. Axial orientation independence may be achieved, for example, through the use of circular exposed electrical contact portions1108and each of a pair of contacts of electrical connectors1210,1211,1212disposed equidistant from a center of recess1204at a radius approximately equal to that of a corresponding exposed electrical contact portion1108. In this fashion, even if exposed electrical contact portion1108includes a discontinuous circle, at least one electrical contact of electrical connectors1210,1211, and1212may make contact. InFIG.4b, electrical connectors1210,1211,1212are illustrated as pairs of rectangular electrical contacts. Electrical connectors1210,1211,1212, however, may include any number of contacts, be configured as continuous or discontinuous circles, or have any other suitable shape or configuration. One exemplary embodiment may operate as follows. As shown inFIG.4b, electronics housing1202may include more electrical connectors1210,1211,1212than a carrier1201includes exposed electrical contact portions1108. In the illustrated embodiments, electronics housing1202includes three electrical connectors1210,1211, and1212, while a double-layer crossover antenna1101includes two exposed electrical contact portions1108. In such an embodiment, two electrical connectors1211and1212may be configured with continuously electrically connected electrical contacts, such that each connector makes contact with a different exposed electrical contact portion1108, where the exposed electrical contact portions1108represent opposite ends of double layer crossover antenna1101. Thus, antenna1101may be electrically connected to the electrical components contained in electronics portion1205. When connected to carrier1201in this configuration, electrical connectors1210may not make contact with any electrodes. In this embodiment, electrical connectors1210may be reserved to function as opposite ends of a battery charging circuit, in order to charge a battery contained in electronics portion1205when electronics housing1202is not being used for therapy. A battery charger unit may be provided with a non-breakable connector similar to that of non-pouch connector1203, and configured to engage with recess1204. Upon engaging with recess1204, electrode contacts of the battery charger unit may contact electrical connectors1210to charge a battery contained within electronics portion1205. In an additional embodiment consistent with the present disclosure, an activator chip may be included in electronics housing1202. Processor144may be configured to activate when at least one of electrical connectors1210,1211,1212contacts exposed electrical contact portions1108included in carrier1201. In this manner, an electronics housing1202may be charged and left dormant for many days prior to activation.
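The axial-orientation independence described above is essentially geometric: a housing contact placed at a radius matching a circular exposed contact portion lands somewhere on that ring regardless of how the housing is rotated, and a redundant pair of contacts guards against a gap in a discontinuous ring. The following sketch, provided for illustration only and written in Python, checks that property numerically; the radius, gap extent, and tolerance values are assumed placeholders rather than dimensions taken from the disclosure.

    RING_RADIUS = 8.0             # mm, radius of a circular exposed contact portion (assumed)
    RING_GAP_DEG = (80.0, 100.0)  # degrees, angular extent of a gap in the ring (assumed)
    CONTACT_RADIUS = 8.0          # mm, radial position of each housing contact (assumed)
    TOLERANCE = 0.5               # mm, allowed radial mismatch (assumed)

    def touches_ring(angle_deg):
        # A contact touches the ring if its radius matches and it does not sit over the gap.
        in_gap = RING_GAP_DEG[0] <= angle_deg % 360.0 <= RING_GAP_DEG[1]
        return abs(CONTACT_RADIUS - RING_RADIUS) <= TOLERANCE and not in_gap

    def pair_connected(rotation_deg):
        # Redundant contacts of a pair sit 180 degrees apart; one touching contact suffices.
        return touches_ring(rotation_deg) or touches_ring(rotation_deg + 180.0)

    assert all(pair_connected(r) for r in range(360))
    print("electrical contact maintained at every rotation angle")

This same redundancy supports the activation-on-contact behavior described above, in which any single contact of a pair is sufficient.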
Simply connecting electronics housing1202to carrier1201(and inducing contact between an electrical connector1210,1211,1212and an electrode portion1108) may cause the processor to activate. Upon activation, processor144may be configured to enter a specific mode of operation, such as a calibration mode (for calibrating the processor after placement of the carrier on the skin), a placement mode (for assisting a user to properly place the carrier on the skin), and/or a therapy mode (to begin a therapy session). The various modes of processor144may include waiting periods at the beginning, at the end, or at any time during the mode. For example, a placement mode may include a waiting period at the end of the mode to provide a period during which a subject may fall asleep. A therapy mode may include a similar waiting period at the beginning of the mode. Additionally or alternatively, processor144may be configured to provide waiting periods separate from the described modes, in order to provide a desired temporal spacing between system activities. In some embodiments, housing1202may include features to communicate with a user. For example, one or more LED lights and/or one or more audio devices may be provided. LEDs and audio devices may be provided to communicate various pieces of information to a user, such as low battery warnings, indications of activity, malfunction alerts, and indications of connectivity (e.g., connections to electrical components on carrier1201). Another embodiment consistent with the present disclosure may include a flexible electronics housing1802.FIGS.8a-8fillustrate an embodiment including a flexible electronics housing1802. Utilizing flexible electronics housing1802may provide benefits with respect to the size and shape of the electronics housing component. An electronics housing must be large enough to accommodate the various components contained inside, such as electronic circuitry and a battery. It may be beneficial to house the necessary components in a flexible electronics housing1802with increased lateral dimensions and decreased vertical dimensions, in order to create a more comfortable experience for a user. A lower profile flexible electronics housing1802may also be less likely to catch its edges on bedclothes during a sleeping period. Additionally, when increasing lateral dimensions, it may be beneficial for the housing to be flexible, so as to better conform to the body contour of the wearer. Flexible electronics housing1802may be achieved through the use of flexible components, such as a flexible circuit board1803accommodating processor144. Flexible electronics housing1802may be between 10 and 50 mm in height, and may be at least three times wider in a lateral dimension than in a height dimension. In one embodiment, flexible electronics housing1802may be elliptical in shape, 14 mm high and having elliptical diameters of 40 mm and 50 mm. Flexible electronics housing1802may further include all of the same functionality and components as described above with respect to electronics housing1202, for example, battery1804, electrical connectors1805(not shown), and recess1806. Flexible electronics housing1802may also be configured to contain a primary antenna. Recess1806may be a connection portion configured to engage with a non-pouch connector1203of carrier1201.
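Returning to the operating modes described above, the activation-on-contact behavior followed by calibration, placement, and therapy modes (with optional waiting periods) can be sketched as a simple mode sequence. The following Python sketch is for illustration only; the mode names, their ordering, and the waiting durations are assumptions rather than requirements of the disclosure.

    import time

    # Ordered (mode, waiting period in seconds after the mode) pairs; durations are placeholders.
    MODE_SEQUENCE = [
        ("calibration", 0),
        ("placement", 5),   # e.g., a waiting period allowing the subject to fall asleep
        ("therapy", 0),
    ]

    def on_carrier_contact_detected(run_mode):
        # Run the mode sequence once electrical contact with the carrier is detected.
        for mode, wait_after in MODE_SEQUENCE:
            run_mode(mode)
            if wait_after:
                time.sleep(wait_after)

    on_carrier_contact_detected(lambda mode: print("entering", mode, "mode"))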
Some embodiments may include a plurality of recesses1806, for example, two or four recesses located near edges of the housing, as shown inFIG.8b, or a centrally located recess and a plurality of recess located near edges of the housing, as shown inFIG.8c. The flexibility of flexible electronics housing1802may permit the housing to better conform to the contours of a patient's body when secured via connector1203and carrier1201. Flexible electronics housing1802may include rigid portion1807in the center in which electrical connectors1805are located. Rigid portion1807may be substantially inflexible. Rigid portion1807may ensure that electrical connectors1805maintain contact with exposed electrical contact portions1108of carrier1201. Rigid portion1807may also accommodate a rigid battery1804, or any other component in the housing required to be rigid. In some embodiments, battery1804may provide the structure that ensures the rigidity of rigid portion1807. Any combination of the components within flexible housing1802may be flexible and/or rigid as required. It is not necessary for flexible electronics housing1802to maintain contact with carrier1201in portions away from electrical connectors1805and exposed electrical contact portions1108. For example, if carrier1201is contoured to a body a subject, and bends away from flexible electronics housing1802, electrical communication may be maintained through rigid portion1807, as illustrated, for example, inFIG.8e. In some embodiments, each end of flexible housing1802may be configured to flex as much as sixty degrees away from a flat plane. In embodiments that include rigid portion1807, bending may begin at a portion immediately outside of rigid portion1807.FIG.8fillustrates a flexible housing1802including a rigid portion1807with flexed ends bent at an angle α. Flexible housing1802may be constructed of any suitable flexible material, such as, for example, silicone, PMMA, PEEK, polypropylene, and polystyrene. Flexible housing1802may be constructed from a top portion and a bottom portion, with the components being placed inside prior to sealing the top portion to the bottom portion. Flexible housing1802may also be constructed through overmolding techniques, wherein a flexible material is molded over and around the required interior components. Flexible housing1802may be manufactured with additives, for example to include particulate substances to provide color or ferrite substances, which may reflect and/or absorb a radiofrequency signal produced by a primary antenna contained within flexible housing1802. A ferrite additive1843in flexible housing1802may increase the efficiency of the primary antenna and/or may reduce excess external transmissions by reflecting and/or absorbing the radiofrequency signal. In some embodiments consistent with the present disclosure, electrical communication between carrier1201and an electronics housing may be made through electrical contacts1810located on a protruding non-pouch connector1811, as illustrated inFIG.8d. Electrical contacts1810may be disposed circumferentially on non-pouch connector1811and located at different heights. In such an embodiment, a connection portion of the electronics housing may be configured to receive electrical contacts configured in this fashion. In many of the examples described above, external unit120includes an electronics housing and an adhesive carrier to which the housing may be releasably connected. 
The examples provided are intended to be exemplary only, and are not intended to limit the placement or location of any of the components described. Additional embodiments including the location of various components on either the housing or the carrier may be realized without departing from the scope of the invention. For example, in some embodiments, some or all of the required circuit component may be printed on the carrier. In some embodiments, the primary antenna may be contained within the housing. In some embodiments, a flexible battery, such as a paper battery, may be included on the carrier to replace or supplement a battery contained in the housing. In some embodiments, external control unit120may be configured for remote monitoring and control. In such an embodiment, electronics housing1202may include, in addition to any or all of the elements discussed above, a communications interface145, and memory unit143. Communications interface145may include a transceiver, configured for both transmitting and receiving, a transmitter-receiver, a transmitter alone, and a receiver alone. Processor144may be configured to utilize communications interface145to communicate with a location remote from the control unit to transmit and/or receive information which may be retrieved from and/or stored in memory unit143. Processor144may be configured to cause application of a control signal to primary antenna150. Processor144may further be configured to monitor a feedback signal indicative of a subject's breathing. Such a feedback signal may include a coupled feedback signal developed on the primary antenna150through wireless interaction with the secondary antenna152. Further details regarding the coupled feedback signal are provided below. Processor144may then store information associated with or about both the control signal and the coupled feedback signal in the memory, and may utilize the communications interface145to transmit the stored information to a remote location. Processor144may also store information about the external unit, for example, information about battery depletion and energy expenditure. Processor144may also be configured to transmit collected information about the control signal, the feedback signal, and/or the external unit without first putting the information into storage. In such an embodiment, processor144may cause transmission of collected information via the communications interface145as that information is received. Thus, in some embodiments, external unit120may not require a memory. In some embodiments, processor144may be configured to monitor a feedback signal provided by alternative means, such as electromyography electrodes, thermistors, accelerometers, microphones, piezoelectric sensors, etc., as previously described. Each of these means may provide a feedback signal that may be indicative of a subject's breathing. A thermistor, for example, may provide a signal that relates to a temperature of a subject's expired air, inspired air, or a subject's skin, which may be indicative of breathing. Electromyography electrodes may provide a feedback signal indicative of breathing based on the detection of muscle contractions. An accelerometer may provide a signal indicative of breathing by measuring a speed or rate at which parts of the subject's body, such as a chest or chin, moves. Microphones may be used to provide feedback signals, for example, by detecting acoustic variations coincident with a breathing pattern. 
Finally, piezoelectric sensors, for example, may be used to measure muscle movement. The information associated with or about the control signal and the feedback signal may include information about a patient's therapy. Information about the control signal may include a complete history and/or any portion thereof of control signal transmissions caused by the processor. Information about the feedback signal may include a complete history and/or any portion thereof of feedback signals measured, such as a history of coupled feedback signals developed on primary antenna150. Information associated with the feedback signal may include information about a usage period of the control unit, energy expenditure of the control unit, tongue movement, sleep disordered breathing occurrence, e.g., the occurrence of sleep apnea, hypopnea, and/or snoring, battery depletion of the control unit, and information about tongue movement in response to the modulation signal. Together, the collected information may represent a complete history of a patient's therapy session. The control signal information and feedback signal information may be stored in a synchronized fashion, to ensure that subsequent data processing can determine which portions of each signal occurred at the same time. A few examples of information that may be contained in control signal and feedback signal information are described below. As noted above, however, the memory may store complete information about control signal transmissions and feedback signals. Thus, the storage and/or transmission of any portion of these signals or any data describing them is also contemplated. In some embodiments, information about the control signal may include summarizing information, for example, a number of times or a frequency with which the control signal was utilized to induce nerve modulation. Information about the control signal may include strength, duration and other descriptive parameters of the control signal, at both modulation and sub-modulation levels. The information transmitted and received during communication with the remote location may include information about a coupled feedback signal. Information about the feedback signal may include information indicative of a patient's tongue movement or motion and information indicative of a frequency or duration of sleep disordered breathing events. In some embodiments, the stored information may be information that combines control signal information and feedback signal information, for example, information that describes a patient response to nerve modulation signals. The stored information may be transmitted to a location remote from control unit120via a communications interface145. Communications interface145may include a transceiver configured to send and receive information. The transceiver may utilize various transmission methods known in the art, for example, Wi-Fi, Bluetooth, radio, RFID, smart chip or other near field communication device, and any other method capable of wirelessly transmitting information. Communications interface145or transceiver may also be configured to transmit the stored information through a wired electrical connection. The transmitted information may be received by a remote location. A remote location suitable for receipt of the transmitted information may function as a relay station, or may be a final destination. A final destination, for example, may include a centralized server location.
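For illustration only, the synchronized record keeping described above can be sketched as a log that timestamps control-signal events and feedback-signal samples against a common clock, so that later processing can line the two up. The sketch below is in Python; the field names and serialized format are illustrative assumptions, not a prescribed data structure.

    import json
    import time

    class TherapyLog:
        """Keeps control-signal and feedback-signal records on a shared time base."""

        def __init__(self):
            self.records = []

        def _stamp(self, kind, data):
            self.records.append({"t": time.time(), "kind": kind, **data})

        def log_control(self, amplitude, frequency_hz, duration_ms):
            self._stamp("control", {"amplitude": amplitude,
                                    "frequency_hz": frequency_hz,
                                    "duration_ms": duration_ms})

        def log_feedback(self, coupled_signal):
            self._stamp("feedback", {"coupled_signal": coupled_signal})

        def export(self):
            # Serialized history suitable for transmission to a remote location.
            return json.dumps(self.records)

    log = TherapyLog()
    log.log_feedback(0.42)
    log.log_control(amplitude=1.0, frequency_hz=8.0e6, duration_ms=200)
    print(log.export())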
External unit120may transmit the stored information to a relay station device which may then transmit the information to another relay station device or final destination. For example, a relay station device may include a patient's mobile device, smartphone, home computer, and/or a dedicated relay unit. A dedicated relay unit may include an antenna situated beneath a patient's pillow, for example to permit the transmission of a signal across a signal in circumstances where communications interface145may not be powerful enough or large enough to transmit a signal more than a few inches or feet. In some embodiments a dedicated relay unit may also include a medical device console, described in greater detail below with respect toFIG.9, configured to receive information transmitted by communications interface145. The relay station device may receive the transmitted information and may store it prior to transmitting it, via, for example, any known communication technique, to a final destination. For example, the relay station may receive information from the external unit on a nightly basis, but only establish a connection with a final destination on a weekly basis. The relay station may also perform analysis on the received information prior to establishing a connection with a final destination. In some embodiments, a relay station device may relay received information immediately as it is received, or as soon as connection with the final destination can be established. In some embodiments, external control unit120may be programmable and reprogrammable. For example, as described above, a memory included with external control unit120may store information associated with or about the control signal and the coupled feedback signal and may include information about therapy a patient has undergone. Further, a memory included with an external control unit120may be a programmable and/or reprogrammable memory configured to store information associated with at least one characteristic of sleep disordered breathing exhibited by a subject. Processor144may utilize the information associated with at least one characteristic of sleep disordered breathing to generate a hypoglossal nerve modulation control signal based on the information. That is, processor144may determine modulation parameters based on information about a patient's sleep disordered breathing characteristics. In some embodiments, such information may be determined by physicians, for example through the use of sleep lab equipment such as EKGs, EEGs, EMGs, breathing monitors, blood oxygen monitors, temperature monitors, brain activity monitors, cameras, accelerometers, electromyography equipment, and any other equipment useful for monitoring the sleep of a patient, and programmed into the memory. In some embodiments, such information may be determined by processor144by monitoring of the control signal and the coupled feedback signal. As described above, external control unit120may include components that permit the recording, storage, reception, and transmission of information about a patient's sleep breathing patterns, about any therapy administered to the patient during sleep, and about the response of a patient's sleep breathing patterns to administered therapy. Such information may be stored for later transmission, may be transmitted as it is received or shortly thereafter, may be received and stored for later use, and/or may be utilized by processor144as it is received or shortly thereafter. 
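The relay behavior described above, in which a relay station device accumulates uploads (for example, nightly) and forwards them to a final destination either on a slower schedule or as soon as a connection is available, can be sketched as follows. This Python sketch is illustrative only; the batching threshold and the immediate-forwarding option are assumptions.

    class RelayStation:
        """Accumulates uploads from an external unit and forwards them onward."""

        def __init__(self, forward_every_n_uploads=7, forward_immediately=False):
            self.pending = []
            self.forward_every = forward_every_n_uploads  # e.g., weekly for nightly uploads
            self.forward_immediately = forward_immediately

        def receive(self, upload, destination_reachable):
            self.pending.append(upload)
            ready = self.forward_immediately or len(self.pending) >= self.forward_every
            if ready and destination_reachable:
                self.forward()

        def forward(self):
            print(f"forwarding {len(self.pending)} stored upload(s) to the final destination")
            self.pending.clear()

    relay = RelayStation()
    for night in range(8):
        relay.receive({"night": night}, destination_reachable=True)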
This information may be generated by processor144through monitoring of a control signal transmitted to an implant unit110and a coupled feedback signal received therefrom and/or through other means described herein for processor144to collect feedback, such as electromyography electrodes, piezoelectric sensors, audio sensors, thermistors, and accelerometers. This information may also be generated through various equipment at the disposal of physicians in, for example, a sleep lab. This stored information may be utilized, for example, by processor144or by software running on a standard computer, to determine parameters of a hypoglossal nerve modulation control signal specific to a certain patient, based on the collected information. In an embodiment where parameters are determined outside of external control unit120, such parameters may be received by communications interface145of external control unit120as described above. Some examples describing the use of these capabilities are included below. In an embodiment for determining initial modulation parameters for a patient, the above described system may operate as follows. After undergoing a surgical procedure to receive an implant unit110, a patient may visit a sleep lab to determine initial modulation control signal parameters, such as pulse frequency, amplitude, train length, etc. Modulation control signal parameters may include pulse train parameters, described in greater detail below with respect toFIG.17. A physician may use an endoscope to inspect an awake patient's airway during hypoglossal nerve modulation to determine that implant unit110is able to effectively cause airway dilation. Then, the patient may go to sleep in the sleep lab while being monitored by the physician. The patient's sleep may be monitored through a variety of tools available in a sleep lab, such as EKGs, EEGs, EMGs, breathing monitors, blood oxygen monitors, temperature monitors, brain activity monitors, cameras, electromyography electrodes, and any other equipment useful for monitoring the sleep of a patient. The monitoring equipment may be used to determine a patient's quality of sleep and to determine the onset of sleep disordered breathing. The physician may also monitor the patient's sleep through the use of external unit120. Through a wireless or wired communication set up through communications interface145with processor144, the physician may also monitor information gathered by external unit120, e.g., modulation and sub-modulation control signals, feedback signals, battery levels, etc. Through communications interface145, the physician may also control the modulation and sub-modulation signals generated by processor144. Thus, a physician may, through information gathered by sleep lab equipment and external unit120, monitor a patient's sleep breathing patterns, including instances of sleep disordered breathing, and, in response to the monitored information, update the programming of processor144to optimize the therapy delivered to the patient in order to reduce instances of sleep disordered breathing. That is, processor144may be programmed to use a control signal that is tailored to cause optimum modulation, based on any or all of the information collected. In embodiments involving the application of a continuous modulation pulse train, such optimization may include selecting parameters, such as the frequency, amplitude, and duration of modulation pulses.
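For illustration only, the modulation control signal parameters mentioned above (e.g., pulse frequency, amplitude, and train length) can be grouped into a single parameter set that a physician or fitting software adjusts during titration. The Python sketch below is a schematic of that idea; the field names and example values are hypothetical and do not correspond to parameters disclosed for any particular embodiment.

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class PulseTrainParameters:
        pulse_frequency_hz: float   # repetition rate of modulation pulses
        amplitude: float            # normalized drive amplitude, 0.0 to 1.0
        train_length_s: float       # duration of one pulse train
        duty_cycle: float           # fraction of each pulse period that is "on"

    # Hypothetical starting point chosen during an initial titration session.
    initial = PulseTrainParameters(pulse_frequency_hz=30.0, amplitude=0.4,
                                   train_length_s=5.0, duty_cycle=0.5)

    # If sleep disordered breathing events persist, step the amplitude up for the next trial.
    adjusted = replace(initial, amplitude=min(1.0, initial.amplitude + 0.1))
    print(adjusted)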
For example, a physician observing a high frequency of sleep disordered breathing occurrences may adjust the parameters of a modulation pulse train until the sleep disordered breathing occurrences are reduced in number or stop altogether. The physician, thus, may be able to program processor144to effectively modulate the hypoglossal nerve to stop or minimize sleep disordered breathing without stimulating any more than necessary. In some embodiments, the modulation pulse train may not be programmed with constant parameter values, but may be programmed to change during the course of an evening, or therapy period. Constant modulation signals, whether they are constant in amplitude, duration, and/or frequency of modulation pulses, may result in diminishing sensitivity or response to modulation signals over time. For example, muscular contractions in response to a constant modulation signal may be reduced over time. Over the course of a therapy period, the muscular contractions resulting from a steady pulse train may be diminished, which may, in turn, cause an increase in sleep disordered breathing events. In order to counteract this effect, a pulse train may be dynamically modified during a therapy period via a plurality of predetermined alterations to the pulse train of a modulation control signal. For example, processor144may be programmed to alter at least one characteristic of the modulation pulse train, e.g., to increase, decrease, or otherwise alter the amplitude, duration and/or frequency of modulation pulses over the course of a therapy period. Any and all characteristics of a pulse train of a modulation control signal may be altered over the course of a therapy period to increase modulation efficacy. As described above, physician monitored therapy periods may be utilized to determine an optimal pattern of alterations to the modulation control signal. In embodiments involving selective modulation based on the detection of sleep disordered breathing precursors, such optimization may include selecting not only modulation parameters, which may be selected so as to vary with time over the course of a therapy period, but also feedback parameters and thresholds consistent with a sleep disordered breathing determination. For example, a physician may compare indications of tongue movement collected by external unit120with extrinsic indicators of sleep disordered breathing from sleep lab equipment. The physician may then correlate observed sleep disordered breathing patterns with detected tongue movement patterns, and program processor144to generate a modulation control signal when those tongue movement patterns are detected. In some embodiments, the actions of the physician as described above may be performed by a computer running software dedicated to the task. A computer system may be programmed to monitor the sleep breathing patterns of a patient and to program, reprogram, and/or update the programming of processor144accordingly. The present disclosure contemplates several additional embodiments for the updating of modulation parameters. In one embodiment, a patient, utilizing the sleep disordered breathing therapy system at home, may have their equipment updated based on nightly data collection. As described above, communications interface145of external unit120may transmit information either to a relay station or directly to a final destination on a regular basis, e.g., monthly, weekly, daily, or even hourly or constantly.
In some embodiments, the communications interface145of external unit120may be configured to transmit information based on certain thresholds, for example, if a number of sleep disordered breathing occurrences exceeds a predetermined number. At the final destination, which may be a remote location, e.g., a physician's office, or a console device in the patient's home, the collected information may be analyzed in any of the ways described above and used to determine new modulation parameters, to be transmitted, via the communications interface145, back to the patient's external unit120. Thus, the patient's sleep may be monitored on a regular basis, either through automated software or with the aid of a physician, and the patient's therapy may be updated accordingly. In some embodiments, the information may be transferred to a relay station device or to a final destination when the patient places external unit120in a charging device. For example, a medical console device, illustrated inFIG.9, may be provided with an electrical interface955configured to receive therapy information from a patient's external unit120. Medical console device950may further include a data storage unit956for storing the therapy information and at least one processing device957for analyzing the therapy information and determining updated control parameters for external unit120. Medical console device950may transmit updated control parameters to communications interface145of external unit120via electrical interface955. Such communication may be wired, or may be wireless transmission through any known means, such as Wi-Fi, Bluetooth, RFID, etc. The information may then be processed by the console, or transmitted to a final destination for processing. Transmission to a final destination may be accomplished, for example, via the internet, wireless connection, cellular connection, or any other suitable transmission means. The information may be used to determine updated modulation parameters for processor144, either by the medical console device950or by a different final destination. In some embodiments, external unit120may be disposable. In such embodiments, processor144may be programmed with a patient's particular therapy regime through connection, wireless or wired, to the medical console device950prior to therapy. In some embodiments, a medical console device may be configured to transmit modulation parameters to several disposable external units120at the same time. In some embodiments, external unit120may be recharged via electrical interface955, in either a wired or wireless fashion. In some embodiments, medical console device950may be configured for bedside use, and may include, for example, all of the functions of a standard alarm clock/radio. In some embodiments, information collected and transmitted by external control unit120may be used to monitor patient compliance. For example, by monitoring information such as battery depletion, modulation frequency, and any other parameter discussed herein, a physician may be able to determine whether or not a patient is complying with a therapy regime. Physicians may use this information to follow up with patients and alter therapy regimes if necessary. In some embodiments, information collected and transmitted by external control unit120may be used to monitor system efficacy. For example, it may be difficult for a patient to determine how successful therapy is, as they sleep during therapy periods.
The equipment and components described herein may be used to provide information to a patient and/or their physician about the effectiveness of treatment. Such information may also be used to determine effectiveness of the implant unit110specifically. For example, if levels of nightly battery depletion increase without a corresponding increase in the frequency of modulation, it may be indicative of a problem with implant unit110or its implantation. Implant unit110may additionally include a plurality of field-generating implant electrodes158a,158b. The electrodes may include any suitable shape and/or orientation on the implant unit so long as the electrodes may be configured to generate an electric field in the body of a patient. Implant electrodes158aand158bmay also include any suitable conductive material (e.g., copper, silver, gold, platinum, iridium, platinum-iridium, platinum-gold, conductive polymers, etc.) or combinations of conductive (and/or noble metal) materials. In some embodiments, for example, the electrodes may include short line electrodes, circular electrodes, and/or circular pairs of electrodes. As shown inFIG.10, electrodes158aand158bmay be located on an end of a first extension162aof an elongate arm162. The electrodes, however, may be located on any portion of implant unit110. Additionally, implant unit110may include electrodes located at a plurality of locations, for example on an end of both a first extension162aand a second extension162bof elongate arm162, as illustrated, for example, inFIG.11a. Positioning electrodes on two extensions of elongate arm162may permit bilateral hypoglossal nerve stimulation, as discussed further below. Implant electrodes may have a thickness between about 200 nanometers and 1 millimeter. Anode and cathode electrode pairs may be spaced apart by a distance of about 0.2 mm to 25 mm. In additional embodiments, anode and cathode electrode pairs may be spaced apart by a distance of about 1 mm to 10 mm, or between 4 mm and 7 mm. Adjacent anodes or adjacent cathodes may be spaced apart by distances as small as 0.001 mm or less, or as great as 25 mm or more. In some embodiments, adjacent anodes or adjacent cathodes may be spaced apart by a distance between about 0.2 mm and 1 mm. FIG.10provides a schematic representation of an exemplary configuration of implant unit110. As illustrated inFIG.10, in one embodiment, the field-generating electrodes158aand158bmay include two sets of four circular electrodes, provided on flexible carrier161, with one set of electrodes providing an anode and the other set of electrodes providing a cathode. Implant unit110may include one or more structural elements to facilitate implantation of implant unit110into the body of a patient. Such elements may include, for example, elongated arms, suture holes, polymeric surgical mesh, biological glue, spikes of flexible carrier protruding to anchor to the tissue, spikes of additional biocompatible material for the same purpose, etc. that facilitate alignment of implant unit110in a desired orientation within a patient's body and provide attachment points for securing implant unit110within a body. For example, in some embodiments, implant unit110may include an elongate arm162having a first extension162aand, optionally, a second extension162b. Extensions162aand162bmay aid in orienting implant unit110with respect to a particular muscle (e.g., the genioglossus muscle), a nerve within a patient's body, or a surface within a body above a nerve.
For example, first and second extensions162a,162bmay be configured to enable the implant unit to conform at least partially around soft or hard tissue (e.g., nerve, bone, or muscle, etc.) beneath a patient's skin. Further, implant unit110may also include one or more suture holes160located anywhere on flexible carrier161. For example, in some embodiments, suture holes160may be placed on second extension162bof elongate arm162and/or on first extension162aof elongate arm162. Implant unit110may be constructed in various shapes. Additionally, or alternatively, implant unit110may include surgical mesh1050or other perforatable material, described in greater detail below with respect toFIG.12. In some embodiments, implant unit110may appear substantially as illustrated inFIG.10. In other embodiments, implant unit110may lack illustrated structures such as second extension162b, or may have additional or different structures in different orientations. Additionally, implant unit110may be formed with a generally triangular, circular, or rectangular shape, as an alternative to the winged shape shown inFIG.10. In some embodiments, the shape of implant unit110(e.g., as shown inFIG.10) may facilitate orientation of implant unit110with respect to a particular nerve to be modulated. Thus, other regular or irregular shapes may be adopted in order to facilitate implantation in differing parts of the body. As illustrated inFIG.10, secondary antenna152and electrodes158a,158bmay be mounted on or integrated with flexible carrier161. Various circuit components and connecting wires may be used to connect secondary antenna152with implant electrodes158aand158b. To protect the antenna, electrodes, and implantable circuit components from the environment within a patient's body, implant unit110may include a protective coating that encapsulates implant unit110. In some embodiments, the protective coating may be made from a flexible material to enable bending along with flexible carrier161. The encapsulation material of the protective coating may also resist humidity penetration and protect against corrosion. In some embodiments, the protective coating may include a plurality of layers, including different materials or combinations of materials in different layers. In some embodiments of the present disclosure, the encapsulation structure of the implant unit may include two layers. For example, a first layer may be disposed over at least a portion of the implantable circuit arranged on the substrate, and a second layer may be disposed over the first layer. In some embodiments, the first layer may be disposed directly over the implantable circuit, but in other embodiments, the first layer may be disposed over an intervening material between the first layer and the implantable circuit. In some embodiments, the first layer may provide a moisture barrier and the second layer may provide a mechanical protection (e.g., at least some protection from physical damage that may be caused by scratching, impacts, bending, etc.) for the implant unit. The terms “encapsulation” and “encapsulate” as used herein may refer to complete or partial covering of a component. In some embodiments, a component may refer to a substrate, implantable circuit, antenna, electrodes, any parts thereof, etc. The term “layer” as used herein may refer to a thickness of material covering a surface or forming an overlying part or segment. The layer thickness can be different from layer to layer and may depend on the covering material and the method of forming the layer. 
For example, a layer disposed by chemical vapor deposition may be thinner than a layer disposed through other methods. Other configurations may also be employed. For example, another moisture barrier may be formed over the outer mechanical protection layer. In such embodiments, a first moisture barrier layer (e.g., parylene) may be disposed over (e.g., directly over or with intervening layers) the implantable circuit, a mechanical protection layer (e.g., silicone) may be formed over the first moisture barrier, and a second moisture barrier (e.g., parylene) may be disposed over the mechanical protection layer. FIG.11ais a perspective view of an alternate embodiment of an implant unit110, according to an exemplary embodiment of the present disclosure. As illustrated inFIG.11a, implant unit110may include a plurality of electrodes, located, for example, at the ends of first extension162aand second extension162b.FIG.11aillustrates an embodiment wherein implant electrodes158aand158binclude short line electrodes. FIG.11billustrates another alternate embodiment of implant unit810, according to an exemplary embodiment of the present disclosure. Implant unit810is configured such that circuitry880is located in a vertical arrangement with secondary antenna852. Implant unit810may include first extension162aand second extension162b, wherein one or both of the extensions accommodate electrodes158aand158b. FIG.12illustrates another exemplary embodiment of encapsulated implant unit110. Exemplary embodiments may incorporate some or all of the features illustrated inFIG.10as well as additional features. A protective coating of implant unit110may include a primary capsule1021. Primary capsule1021may encapsulate the implant unit110and may provide mechanical protection for the implant unit110. For example, the components of implant unit110may be delicate, and the need to handle the implant unit110prior to implantation may require additional protection for the components of implant unit110, and primary capsule1021may provide such protection. Primary capsule1021may encapsulate all or some of the components of implant unit110. For example, primary capsule1021may encapsulate antenna152, flexible carrier161, and implantable circuit180. The primary capsule may leave part or all of electrodes158a,158bexposed, enabling them to deliver energy for modulating a nerve unimpeded by material of the primary capsule. In alternative embodiments, different combinations of components may be encapsulated or exposed. Primary capsule1021may be fashioned of a material and thickness such that implant unit110remains flexible after encapsulation. Primary capsule1021may include any suitable bio-compatible material, such as silicone, polyimides, phenyltrimethoxysilane (PTMS), polymethyl methacrylate (PMMA), Parylene C, liquid polyimide, laminated polyimide, polyimide, Kapton, black epoxy, polyether ether ketone (PEEK), Liquid Crystal Polymer (LCP), or any other suitable biocompatible coating. In some embodiments, all or some of the circuitry components included in implant110may be housed in a rigid housing, as illustrated inFIGS.13a-b. Rigid housing1305may provide the components of implant110with additional mechanical and environmental protections. A rigid housing may protect the components of implant110from physical trauma during implantation or from physical trauma caused by the tissue movement at an implantation site. Rigid housing may also provide additional environmental protections from the corrosive environment within the body. 
Furthermore, the use of a rigid housing may simplify a process for manufacturing implant unit110. FIGS.13a-billustrate an embodiment including an implant unit110with a rigid housing. As shown inFIGS.13a-b, implant unit110may include all of the components of implant unit110, e.g. modulation electrodes158a,158b, secondary antenna152, flexible carrier161, extension arms162a,162b, as well as circuitry180and any other component described herein. Some, or all, of these components, e.g. circuitry180, may be included inside rigid housing1305. Rigid housing1305may be constructed, for example, of ceramic, glass, and/or titanium, and may include a ceramic clamshell. Rigid housing1305may, for example, be welded closed with a biocompatible metal such as gold or titanium, or closed with any other suitable methods. Such a housing may also include a ceramic bottom portion1306and a titanium or ceramic upper portion1307. Rigid housing1305may include one or more conductive feedthroughs1308to make contact with circuitry on flexible carrier161. Inside the housing, conductive feedthroughs1308may be soldered, welded, or glued to circuitry180, or any other internal component, through traditional soldering techniques. Conductive feedthroughs1308may comprise gold, platinum, or any other suitable conductive material. In one embodiment, rigid housing1305may include four feedthroughs1308comprising positive and negative connections for the modulation electrodes158a,158b, and the secondary antenna152. Of course, any suitable number of feedthroughs1308may be provided. Rigid housing1305may be mounted to flexible carrier161through controlled collapse chip connection or C4 manufacturing. Using this technique, external portions1309of each conductive feedthrough1308, which extend beyond the surface of rigid housing1305, may be aligned with solder bumps on flexible carrier161. Solder bumps may, in turn, be connected to the electrical traces of flexible carrier161. Once aligned, the solder is caused to reflow, creating an electrical connection between the electrical traces of flexible carrier161and the internal components of rigid housing1305via feedthroughs1308. Once the electrical connection has been made, a non-conductive, or insulative, adhesive1310may be used to fill the gaps between the rigid housing and the flexible carrier in and around the soldered connections. The insulative adhesive1310may provide both mechanical protection to ensure that rigid housing1305does not separate from flexible carrier161, as well as electrical protection to ensure that the feedthroughs1308do not short to each other. Once mounted to flexible carrier161, rigid housing1305and flexible carrier161may be encapsulated together via a multi-layer encapsulation structure described above. Returning now toFIG.12, also illustrated is encapsulated surgical mesh1050. Surgical mesh1050may provide a larger target area for surgeons to use when suturing implant unit110into place during implantation. The entire surgical mesh1050may be encapsulated by primary capsule1021, permitting a surgeon to pass a needle through any portion of the mesh without compromising the integrity of implant unit110. Surgical mesh1050may additionally be used to cover suture holes160, permitting larger suture holes160that may provide surgeons with a greater target area. Surgical mesh1050may also encourage surrounding tissue to bond with implant unit110. 
In some embodiments, a surgeon may pass a surgical suture needle through suture holes160, located on one extension162aof an elongate arm162of implant unit110, through tissue of the subject, and through surgical mesh1050provided on a second extension162bof elongate arm162of implant unit110. In this embodiment, the larger target area provided by surgical mesh1050may facilitate the suturing process because it may be more difficult to precisely locate a suture needle after passing it through tissue. Implantation and suturing procedures may be further facilitated through the use of a delivery tool, described in greater detail below. Returning toFIGS.2and3, external unit120may be configured to communicate with implant unit110. For example, in some embodiments, a primary signal may be generated on primary antenna150, using, e.g., processor144, signal source142, and amplifier146. More specifically, in one embodiment, power source140may be configured to provide power to one or both of the processor144and the signal source142. The processor144may be configured to cause signal source142to generate a signal (e.g., an RF energy signal). Signal source142may be configured to output the generated signal to amplifier146, which may amplify the signal generated by signal source142. The amount of amplification and, therefore, the amplitude of the signal may be controlled, for example, by processor144. The amount of gain or amplification that processor144causes amplifier146to apply to the signal may depend on a variety of factors, including, but not limited to, the shape, size, and/or configuration of primary antenna150, the size of the patient, the location of implant unit110in the patient, the shape, size, and/or configuration of secondary antenna152, a degree of coupling between primary antenna150and secondary antenna152(discussed further below), a desired magnitude of electric field to be generated by implant electrodes158a,158b, etc. Amplifier146may output the amplified signal to primary antenna150. External unit120may communicate a primary signal on primary antenna to the secondary antenna152of implant unit110. This communication may result from coupling between primary antenna150and secondary antenna152. Such coupling of the primary antenna and the secondary antenna may include any interaction between the primary antenna and the secondary antenna that causes a signal on the secondary antenna in response to a signal applied to the primary antenna. In some embodiments, coupling between the primary and secondary antennas may include capacitive coupling, inductive coupling, radiofrequency coupling, etc. and any combinations thereof. Coupling between primary antenna150and secondary antenna152may depend on the proximity of the primary antenna relative to the secondary antenna. That is, in some embodiments, an efficiency or degree of coupling between primary antenna150and secondary antenna152may depend on the proximity of the primary antenna to the secondary antenna. The proximity of the primary and secondary antennas may be expressed in terms of a coaxial offset (e.g., a distance between the primary and secondary antennas when central axes of the primary and secondary antennas are co-aligned), a lateral offset (e.g., a distance between a central axis of the primary antenna and a central axis of the secondary antenna), and/or an angular offset (e.g., an angular difference between the central axes of the primary and secondary antennas). 
In some embodiments, a theoretical maximum efficiency of coupling may exist between primary antenna150and secondary antenna152when the coaxial offset, the lateral offset, and the angular offset are all zero. Increasing any of the coaxial offset, the lateral offset, and the angular offset may have the effect of reducing the efficiency or degree of coupling between primary antenna150and secondary antenna152. As a result of coupling between primary antenna150and secondary antenna152, a secondary signal may arise on secondary antenna152when the primary signal is present on the primary antenna150. Such coupling may include inductive/magnetic coupling, RF coupling/transmission, capacitive coupling, or any other mechanism where a secondary signal may be generated on secondary antenna152in response to a primary signal generated on primary antenna150. Coupling may refer to any interaction between the primary and secondary antennas. In addition to the coupling between primary antenna150and secondary antenna152, circuit components associated with implant unit110may also affect the secondary signal on secondary antenna152. Thus, the secondary signal on secondary antenna152may refer to any and all signals and signal components present on secondary antenna152regardless of the source. While the presence of a primary signal on primary antenna150may cause or induce a secondary signal on secondary antenna152, the coupling between the two antennas may also lead to a coupled signal or signal components on the primary antenna150as a result of the secondary signal present on secondary antenna152. A signal on primary antenna150induced by a secondary signal on secondary antenna152may be referred to as a primary coupled signal component. The primary signal may refer to any and all signals or signal components present on primary antenna150, regardless of source, and the primary coupled signal component may refer to any signal or signal component arising on the primary antenna as a result of coupling with signals present on secondary antenna152. Thus, in some embodiments, the primary coupled signal component may contribute to the primary signal on primary antenna150. Implant unit110may be configured to respond to external unit120. For example, in some embodiments, a primary signal generated on primary antenna150may cause a secondary signal on secondary antenna152, which, in turn, may cause one or more responses by implant unit110. In some embodiments, the response of implant unit110may include the generation of an electric field between implant electrodes158aand158b. FIG.14illustrates circuitry170that may be included in external unit120and circuitry180that may be included in implant unit110. Additional, different, or fewer circuit components may be included in either or both of circuitry170and circuitry180. As shown inFIG.14, secondary antenna152may be arranged in electrical communication with implant electrodes158a,158b. In some embodiments, circuitry connecting secondary antenna152with implant electrodes158aand158bmay cause a voltage potential across implant electrodes158aand158bin the presence of a secondary signal on secondary antenna152. This voltage potential may be referred to as a field inducing signal, as this voltage potential may generate an electric field between implant electrodes158aand158b. 
More broadly, the field inducing signal may include any signal (e.g., voltage potential) applied to electrodes associated with the implant unit that may result in an electric field being generated between the electrodes. The field inducing signal may be generated as a result of conditioning of the secondary signal by circuitry180. As shown inFIG.6, circuitry170of external unit120may be configured to generate an AC primary signal on primary antenna150that may cause an AC secondary signal on secondary antenna152. In certain embodiments, however, it may be advantageous (e.g., in order to generate a unidirectional electric field for modulation of a nerve) to provide a DC field inducing signal at implant electrodes158aand158b. To convert the AC secondary signal on secondary antenna152to a DC field inducing signal, circuitry180in implant unit110may include an AC-DC converter. The AC to DC converter may include any suitable converter known to those skilled in the art. For example, in some embodiments the AC-DC converter may include rectification circuit components including, for example, diode156and appropriate capacitors and resistors. In alternative embodiments, implant unit110may include an AC-AC converter, or no converter, in order to provide an AC field inducing signal at implant electrodes158aand158b. As noted above, the field inducing signal may be configured to generate an electric field between implant electrodes158aand158b. In some instances, the magnitude and/or duration of the generated electric field resulting from the field inducing signal may be sufficient to modulate one or more nerves in the vicinity of electrodes158aand158b. In such cases, the field inducing signal may be referred to as a modulation signal. In other instances, the magnitude and/or duration of the field inducing signal may generate an electric field that does not result in nerve modulation. In such cases, the field inducing signal may be referred to as a sub-modulation signal. Various types of field inducing signals may constitute modulation signals. For example, in some embodiments, a modulation signal may include a moderate amplitude and moderate duration, while in other embodiments, a modulation signal may include a higher amplitude and a shorter duration. Various amplitudes and/or durations of field-inducing signals across electrodes158a,158bmay result in modulation signals, and whether a field-inducing signal rises to the level of a modulation signal can depend on many factors (e.g., distance from a particular nerve to be stimulated; whether the nerve is branched; orientation of the induced electric field with respect to the nerve; type of tissue present between the electrodes and the nerve; etc.). In some embodiments, the electrodes158aand158bmay generate an electric field configured to penetrate intervening tissue111between the electrodes and one or more nerves. The intervening tissue111may include muscle tissue, bone, connective tissue, adipose tissue, organ tissue, or any combination thereof. For subjects suffering with obstructive sleep apnea, for instance, the intervening tissue may include the genioglossus muscle. The generation of electric fields configured to penetrate intervening tissue is now discussed with respect toFIGS.15a,15b,15c, and16. In response to a field inducing signal, implant electrodes158aand158bmay be configured to generate an electric field with field lines extending generally in the longitudinal direction of one or more nerves to be modulated. 
In some embodiments, implant electrodes158aand158bmay be spaced apart from one another along the longitudinal direction of a nerve to facilitate generation of such an electric field. The electric field may also be configured to extend in a direction substantially parallel to a longitudinal direction of at least some portion of the nerve to be modulated. For example, a substantially parallel field may include field lines that extend more in a longitudinal direction than in a transverse direction relative to the nerve. Orienting the electric field in this way may facilitate electrical current flow through a nerve or tissue, thereby increasing the likelihood of eliciting an action potential to induce modulation. FIG.15aillustrates a pair of electrodes158a,158bspaced apart from one another along the longitudinal direction of nerve210to facilitate generation of an electric field having field lines220substantially parallel to the longitudinal direction of nerve210. InFIG.15a, modulation electrodes158a,158bare illustrated as line electrodes, although the generation of substantially parallel electric fields may be accomplished through the use of other types of electrodes, for example, a series of point electrodes. Utilizing an electric field having field lines220extending in a longitudinal direction of nerve210may serve to reduce the amount of energy required to achieve neural modulation. Naturally functioning neurons transmit action potentials along their length. Structurally, neurons include multiple ion channels along their length that serve to maintain a voltage potential gradient across a plasma membrane between the interior and exterior of the neuron. Ion channels operate by maintaining an appropriate balance between sodium ions on one side of the plasma membrane and potassium ions on the other side of the plasma membrane. A sufficiently high voltage potential difference created near an ion channel may exceed a membrane threshold potential of the ion channel. The ion channel may then be induced to activate, pumping the sodium and potassium ions across the plasma membrane to switch places in the vicinity of the activated ion channel. This, in turn, further alters the potential difference in the vicinity of the ion channel, which may serve to activate a neighboring ion channel. The cascading activation of adjacent ion channels may serve to propagate an action potential along the length of the neuron. Further, the activation of an ion channel in an individual neuron may induce the activation of ion channels in neighboring neurons that, bundled together, form nerve tissue. The activation of a single ion channel in a single neuron, however, may not be sufficient to induce the cascading activation of neighboring ion channels necessary to permit the propagation of an action potential. Thus, the more ion channels in a locality that may be recruited by an initial potential difference, caused through natural means such as the action of nerve endings or through artificial means, such as the application of electric fields, the more likely the propagation of an action potential may be. The process of artificially inducing the propagation of action potentials along the length of a nerve may be referred to as stimulation, or up modulation. Neurons may also be prevented from functioning naturally through constant or substantially constant application of a voltage potential difference. 
After activation, each ion channel experiences a refractory period, during which it “resets” the sodium and potassium concentrations across the plasma membrane back to an initial state. Resetting the sodium and potassium concentrations causes the membrane threshold potential to return to an initial state. Until the ion channel restores an appropriate concentration of sodium and potassium across the plasma membrane, the membrane threshold potential will remain elevated, thus requiring a higher voltage potential to cause activation of the ion channel. If the membrane threshold potential is maintained at a high enough level, action potentials propagated by neighboring ion channels may not create a large enough voltage potential difference to surpass the membrane threshold potential and activate the ion channel. Thus, by maintaining a sufficient voltage potential difference in the vicinity of a particular ion channel, that ion channel may serve to block further signal transmission. The membrane threshold potential may also be raised without eliciting an initial activation of the ion channel. If an ion channel (or a plurality of ion channels) is subjected to an elevated voltage potential difference that is not high enough to surpass the membrane threshold potential, it may serve to raise the membrane threshold potential over time, thus having a similar effect to an ion channel that has not been permitted to properly restore ion concentrations. Thus, an ion channel may be recruited as a block without actually causing an initial action potential to propagate. This method may be valuable, for example, in pain management, where the propagation of pain signals is undesired. As described above with respect to stimulation, the larger the number of ion channels in a locality that may be recruited to serve as blocks, the more likely the chance that an action potential propagating along the length of the nerve will be blocked by the recruited ion channels, rather than traveling through neighboring, unblocked channels. The number of ion channels recruited by a voltage potential difference may be increased in at least two ways. First, more ion channels may be recruited by utilizing a larger voltage potential difference in a local area. Second, more ion channels may be recruited by expanding the area affected by the voltage potential difference. Returning toFIG.15a, it can be seen that, due to the electric field lines220running in a direction substantially parallel to the longitudinal direction of the nerve210, a large portion of nerve210may encounter the field. Thus, more ion channels from the neurons that make up nerve210may be recruited without using a larger voltage potential difference. In this way, modulation of nerve210may be achieved with a lower current and less power usage.FIG.15billustrates an embodiment wherein electrodes158aand158bare still spaced apart from one another in a longitudinal direction of at least a portion of nerve210. A significant portion of nerve210remains inside of the electric field.FIG.15cillustrates a situation wherein electrodes158aand158bare spaced apart from one another in a transverse direction of nerve210. In this illustration, it can be seen that a significantly smaller portion of nerve210will be affected by electric field lines220. FIG.16illustrates potential effects of electrode configuration on the shape of the generated electric field. The top row of electrode configurations, e.g. 
A, B, and C, illustrates the effects on the electric field shape when a distance between electrodes of a constant size is adjusted. The bottom row of electrode configurations, e.g. D, E, and F, illustrates the effects on the electric field shape when the size of electrodes at a constant distance is adjusted. In embodiments consistent with the present disclosure, modulation electrodes158a,158bmay be arranged on the surface of a muscle or other tissue, in order to modulate a nerve embedded within the muscle or other tissue. Thus, tissue may be interposed between modulation electrodes158a,158band a nerve to be modulated. Modulation electrodes158a,158bmay be spaced away from a nerve to be modulated. The structure and configuration of modulation electrodes158a,158bmay play an important role in determining whether modulation of a nerve, which is spaced a certain distance away from the electrodes, may be achieved. Electrode configurations A, B, and C show that when modulation electrodes158a,158bof a constant size are moved further apart, the depth of the electric field facilitated by the electrodes increases. The strength of the electric field for a given configuration may vary significantly depending on a location within the field. If a constant level of current is passed between modulation electrodes158aand158b, however, the larger field area of configuration C may exhibit a lower overall current density than the smaller field area of configuration A. A lower current density, in turn, implies a lower voltage potential difference between two points spaced a given distance apart in the field facilitated by configuration C relative to that of the field facilitated by configuration A. Thus, while moving modulation electrodes158aand158bfarther from each other increases the depth of the field, it also decreases the strength of the field. In order to modulate a nerve spaced away from modulation electrodes158a,158b, a distance between the electrodes may be selected in order to facilitate an electric field of strength sufficient to surpass a membrane threshold potential of the nerve (and thereby modulate it) at the depth of the nerve. If modulation electrodes158a,158bare too close together, the electric field may not extend deep enough into the tissue in order to modulate a nerve located therein. If modulation electrodes158a,158bare too far apart, the electric field may be too weak to modulate the nerve at the appropriate depth. Appropriate distances between modulation electrodes158a,158bmay depend on an implant location and a nerve to be stimulated. For example, modulation point901is located at the same depth equidistant from the centers of modulation electrodes158a,158bin each of configurations A, B, and C. The figures illustrate that, in this example, configuration B is most likely to achieve the highest possible current density, and therefore voltage potential, at modulation point901. The field of configuration A may not extend deeply enough, and the field of configuration C may be too weak at that depth. In some embodiments, modulation electrodes158a,158bmay be spaced apart by a distance of about 0.2 mm to 25 mm. In additional embodiments, modulation electrodes158a,158bmay be spaced apart by a distance of about 1 mm to 10 mm, or between 4 mm and 7 mm. In other embodiments modulation electrodes158a,158bmay be spaced apart by between approximately 6 mm and 7 mm. 
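The depth-versus-strength trade-off just described can be illustrated with a deliberately simplified textbook model: a pair of idealized point current sources of opposite polarity in a homogeneous conducting medium. This is an intuition-building sketch only; it assumes a hypothetical nerve depth, ignores tissue inhomogeneity and electrode geometry, and is not the disclosed implant configuration or a design formula.

```python
# Toy model of the spacing/depth trade-off: two point current sources (+I, -I)
# in a homogeneous medium. Constant factors (current, tissue resistivity) are
# dropped, so only relative comparisons between spacings are meaningful.
def longitudinal_field(spacing_mm: float, depth_mm: float) -> float:
    """Relative longitudinal E-field at the given depth, midway between electrodes."""
    half = spacing_mm / 2.0
    return spacing_mm / (half ** 2 + depth_mm ** 2) ** 1.5

depth = 4.5  # hypothetical nerve depth in mm (assumed for illustration)
for spacing in (2, 4, 6, 8, 12, 20):
    print(f"spacing {spacing:>2} mm -> relative field {longitudinal_field(spacing, depth):.4f}")

# In this toy model the field at a fixed depth peaks when spacing is roughly
# sqrt(2) times the depth: electrodes that are too close do not reach the depth
# (like configuration A), while electrodes that are too far apart spread the
# field too thinly (like configuration C).
```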
Electrode configurations D, E, and F show that when modulation electrodes158a,158bof a constant distance are changed in size, the shape of the electric field facilitated by the electrodes changes. If a constant level of current is passed between modulation electrodes158aand158b, the smaller electrodes of configuration D may facilitate a deeper field than that of configurations E and F, although the effect is less significant relative to changes in distance between the electrodes. As noted above, the facilitated electric fields are not of uniform strength throughout, and thus the voltage potential at seemingly similar locations within each of the electric fields of configurations D, E, and F may vary considerably. Appropriate sizes of modulation electrodes158a,158bmay therefore depend on an implant location and a nerve to be stimulated. In some embodiments modulation electrodes158a,158bmay have a surface area between approximately 0.01 mm² and 80 mm². In additional embodiments, modulation electrodes158a,158bmay have a surface area between approximately 0.1 mm² and 4 mm². In other embodiments modulation electrodes158a,158bmay have a surface area of between approximately 0.25 mm² and 0.35 mm². In some embodiments, modulation electrodes158a,158bmay be arranged such that the electrodes are exposed on a single side of carrier161. In such an embodiment, an electric field is generated only on the side of carrier161with exposed electrical contacts. Such a configuration may serve to reduce the amount of energy required to achieve neural modulation because the entire electric field is generated on the same side of the carrier as the nerve, and little or no current is wasted traveling through tissue away from the nerve to be modulated. Such a configuration may also serve to make the modulation more selective. That is, by generating an electric field on the side of the carrier where there is a nerve to be modulated, nerves located in other areas of tissue (e.g. on the other side of the carrier from the nerve to be modulated) may avoid being accidentally modulated. As discussed above, the utilization of electric fields having electrical field lines extending in a direction substantially parallel to the longitudinal direction of a nerve to be modulated may serve to lower the power requirements of modulation. This reduction in power requirements may permit the modulation of a nerve using less than 1.6 mA of current, less than 1.4 mA of current, less than 1.2 mA of current, less than 1 mA of current, less than 0.8 mA of current, less than 0.6 mA of current, less than 0.4 mA of current and even less than 0.2 mA of current passed between modulation electrodes158a,158b. Reducing the current flow required may have additional effects on the configuration of implant unit110and external unit120. For example, the reduced current requirement may enable implant unit110to modulate a nerve without a requirement for a power storage unit, such as a battery or capacitor, to be implanted in conjunction with implant unit110. For example, implant unit110may be capable of modulating a nerve using only the energy received via secondary antenna152. Implant unit110may be configured to serve as a pass through that directs substantially all received energy to modulation electrodes158aand158bfor nerve modulation. Substantially all received energy may refer to that portion of energy that is not dissipated or otherwise lost to the internal components of implant unit110. 
Finally, the reduction in required current may also serve to reduce the amount of energy required by external unit120. External unit120may be configured to operate successfully for an entire treatment session lasting from one to ten hours by utilizing a battery having a capacity of less than 240 mAh, less than 120 mAh, and even less than 60 mAh. As discussed above, utilization of parallel fields may enable implant unit110to modulate nerves in a non-contacting fashion. Contactless neuromodulation may increase the efficacy of implant unit110over time, once implanted, compared to modulation techniques requiring contact with a nerve or muscle to be modulated. Over time, implantable devices may migrate within the body. Thus, an implantable device requiring nerve contact to initiate neural modulation may lose efficacy as the device moves within the body and loses contact with the nerve to be modulated. In contrast, implant unit110, utilizing contactless modulation, may still effectively modulate a nerve even if it moves toward, away from, or to another location relative to an initial implant location. Additionally, tissue growth and/or fibrosis may develop around an implantable device. This growth may serve to lessen or even eliminate the contact between a device designed for contact modulation and a nerve to be modulated. In contrast, implant unit110, utilizing contactless modulation, may continue to effectively modulate a nerve if additional tissue forms between it and a nerve to be modulated. Another feature enabled through the use of parallel fields is the ability to modulate nerves of extremely small diameter. As the diameter of a nerve decreases, the electrical resistance of the nerve increases, causing the voltage required to induce an action potential to rise. As described above, the utilization of parallel electric fields permits the application of larger voltage potentials across nerves. This, in turn, may permit the modulation of smaller diameter nerves, which require larger voltage potentials to induce action potentials. Nerves typically have reduced diameters at their terminal fibers, e.g. the distal ends, as they extend away from the nerve trunk. Modulating these narrower terminal fibers may permit more selective modulation. Larger nerve trunks typically carry many nerve fibers that may innervate several different muscles, and so inducing modulation of a nerve trunk may cause the modulation of unintended nerve fibers, and thus the innervation and contraction of unintended muscles. Selective modulation of terminal fibers may prevent such unintended muscle activity. In some embodiments, implant unit110may be configured to modulate nerves having diameters of less than 2 mm, less than 1 mm, less than 500 microns, less than 200 microns, less than 100 microns, less than 50 microns, and even less than 25 microns. Whether a field inducing signal constitutes a modulation signal (resulting in an electric field that may cause nerve modulation) or a sub-modulation signal (resulting in an electric field not intended to cause nerve modulation) may ultimately be controlled by processor144of external unit120. For example, in certain situations, processor144may determine that nerve modulation is appropriate. 
Under these conditions, processor144may cause signal source142and amplifier146to generate a modulation control signal on primary antenna150(i.e., a signal having a magnitude and/or duration selected such that a resulting secondary signal on secondary antenna152will provide a modulation signal at implant electrodes158aand158b). Processor144may be configured to limit an amount of energy transferred from external unit120to implant unit110. For example, in some embodiments, implant unit110may be associated with a threshold energy limit that may take into account multiple factors associated with the patient and/or the implant. For example, in some cases, certain nerves of a patient should receive no more than a predetermined maximum amount of energy to minimize the risk of damaging the nerves and/or surrounding tissue. Additionally, circuitry180of implant unit110may include components having a maximum operating voltage or power level that may contribute to a practical threshold energy limit of implant unit110. Processor144may be configured to account for such limitations when setting the magnitude and/or duration of a primary signal to be applied to primary antenna150. In addition to determining an upper limit of power that may be delivered to implant unit110, processor144may also determine a lower power threshold based, at least in part, on an efficacy of the delivered power. The lower power threshold may be computed based on a minimum amount of power that enables nerve modulation (e.g., signals having power levels above the lower power threshold may constitute modulation signals while signals having power levels below the lower power threshold may constitute sub-modulation signals). A lower power threshold may also be measured or provided in alternative ways. For example, appropriate circuitry or sensors in the implant unit110may measure a lower power threshold. A lower power threshold may be computed or sensed by an additional external device, and subsequently programmed into processor144, or programmed into implant unit110. Alternatively, implant unit110may be constructed with circuitry180specifically chosen to generate signals at the electrodes of at least the lower power threshold. In still another embodiment, an antenna of external unit120may be adjusted to accommodate or produce a signal corresponding to a specific lower power threshold. The lower power threshold may vary from patient to patient, and may take into account multiple factors, such as, for example, modulation characteristics of a particular patient's nerve fibers, a distance between implant unit110and external unit120after implantation, and the size and configuration of implant unit components (e.g., antenna and implant electrodes), etc. Processor144may also be configured to cause application of sub-modulation control signals to primary antenna150. Such sub-modulation control signals may include an amplitude and/or duration that result in a sub-modulation signal at electrodes158a,158b. While such sub-modulation control signals may not result in nerve modulation, such sub-modulation control signals may enable feedback-based control of the nerve modulation system. That is, in some embodiments, processor144may be configured to cause application of a sub-modulation control signal to primary antenna150. This signal may induce a secondary signal on secondary antenna152, which, in turn, induces a primary coupled signal component on primary antenna150. 
To analyze the primary coupled signal component induced on primary antenna150, external unit120may include a feedback circuit148(e.g., a signal analyzer or detector, etc.), which may be placed in direct or indirect communication with primary antenna150and processor144. Sub-modulation control signals may be applied to primary antenna150at any desired periodicity. In some embodiments, the sub-modulation control signals may be applied to primary antenna150at a rate of one every five seconds (or longer). In other embodiments, the sub-modulation control signals may be applied more frequently (e.g., once every two seconds, once per second, once per millisecond, once per nanosecond, or multiple times per second). Further, it should be noted that feedback may also be received upon application of modulation control signals to primary antenna150(i.e., those that result in nerve modulation), as such modulation control signals may also result in generation of a primary coupled signal component on primary antenna150. The primary coupled signal component may be fed to processor144by feedback circuit148and may be used as a basis for determining a degree of coupling between primary antenna150and secondary antenna152. The degree of coupling may enable determination of the efficacy of the energy transfer between two antennas. Processor144may also use the determined degree of coupling in regulating delivery of power to implant unit110. Processor144may be configured with any suitable logic for determining how to regulate power transfer to implant unit110based on the determined degree of coupling. For example, where the primary coupled signal component indicates that a degree of coupling has changed from a baseline coupling level, processor144may determine that secondary antenna152has moved with respect to primary antenna150(either in coaxial offset, lateral offset, or angular offset, or any combination). Such movement, for example, may be associated with a movement of implant unit110and of the tissue with which it is associated, based on its implant location. Thus, in such situations, processor144may determine that modulation of a nerve in the patient's body is appropriate. More particularly, in response to an indication of a change in coupling, processor144, in some embodiments, may cause application of a modulation control signal to primary antenna150in order to generate a modulation signal at implant electrodes158a,158b, e.g., to cause modulation of a nerve of the patient. In an embodiment for the treatment of a sleep breathing disorder, movement of implant unit110may be associated with movement of the tongue, which may indicate snoring, the onset of a sleep apnea event or a sleep apnea precursor. Each of these conditions may require the stimulation of the genioglossus muscle of the patient to relieve or avert the event. Such stimulation may result in contraction of the muscle and movement of the patient's tongue away from the patient's airway. In embodiments for the treatment of head pain, including migraines, processor144may be configured to generate a modulation control signal based on a signal from a user, for example, or a detected level of neural activity in a sensory neuron (e.g. the greater occipital nerve or trigeminal nerve) associated with head pain. A modulation control signal generated by the processor and applied to the primary antenna150may generate a modulation signal at implant electrodes158a,158b, e.g., to cause inhibition or blocking of a sensory nerve of the patient. 
Such inhibition or blocking may decrease or eliminate the sensation of pain for the patient. In embodiments for the treatment of hypertension, processor144may be configured to generate a modulation control signal based on, for example, pre-programmed instructions and/or signals from an implant indicative of blood pressure. A modulation control signal generated by the processor and applied to the primary antenna150may generate a modulation signal at implant electrodes158a,158b, e.g., to cause either inhibition or stimulation of a nerve of a patient, depending on the requirements. For example, a neuromodulator placed in a carotid artery or jugular vein (i.e. in the vicinity of a carotid baroreceptor) may receive a modulation control signal tailored to induce a stimulation signal at the electrodes, thereby causing the glossopharyngeal nerve associated with the carotid baroreceptors to fire at an increased rate in order to signal the brain to lower blood pressure. Similar modulation of the glossopharyngeal nerve may be achieved with a neuromodulator implanted in a subcutaneous location in a patient's neck or behind a patient's ear. A neuromodulator placed in a renal artery may receive a modulation control signal tailored to cause an inhibiting or blocking signal at the electrodes, thereby inhibiting a signal to raise blood pressure carried from the renal nerves to the kidneys. Modulation control signals may include stimulation control signals, and sub-modulation control signals may include sub-stimulation control signals. Stimulation control signals may have any amplitude, pulse duration, or frequency combination that results in a stimulation signal at electrodes158a,158b. In some embodiments (e.g., at a frequency of between about 6.5-13.6 MHz), stimulation control signals may include a pulse duration of greater than about 50 microseconds and/or an amplitude of approximately 0.5 amps, or between 0.1 amps and 1 amp, or between 0.05 amps and 3 amps. Sub-stimulation control signals may have a pulse duration of less than about 500 nanoseconds, or less than about 200 nanoseconds, and/or an amplitude less than about 1 amp, 0.5 amps, 0.1 amps, 0.05 amps, or 0.01 amps. Of course, these values are meant to provide a general reference only, as various combinations of values higher than or lower than the exemplary guidelines provided may or may not result in nerve stimulation. In some embodiments, stimulation control signals may include a pulse train, wherein each pulse includes a plurality of sub-pulses.FIG.17depicts the composition of an exemplary modulation pulse train. Such a pulse train1010may include a plurality of modulation pulses1020, wherein each modulation pulse1020may include a plurality of modulation sub-pulses1030.FIG.17is exemplary only, at a scale appropriate for illustration, and is not intended to encompass all of the various possible embodiments of a modulation pulse train, discussed in greater detail below. An alternating current signal (e.g., at a frequency of between about 6.5-13.6 MHz) may be used to generate a pulse train1010, as follows. A sub-pulse1030may have a pulse duration of between 50-250 microseconds, or a pulse duration of between 1 microsecond and 2 milliseconds, during which an alternating current signal is turned on. For example, a 200 microsecond sub-pulse1030of a 10 MHz alternating current signal will include approximately 2000 periods. 
Each modulation pulse1020may, in turn, have a pulse duration1040of between 100 and 500 milliseconds, during which sub-pulses1030occur at a frequency of between 25 and 100 Hz. Thus, a modulation pulse1020may include between about 2.5 and 50 modulation sub-pulses1030. In some embodiments, a modulation pulse1020may include between about 5 and 15 modulation sub-pulses1030. For example, a 200 millisecond modulation pulse1020of 50 Hz modulation sub-pulses1030will include approximately 10 modulation sub-pulses1030. Finally, in a modulation pulse train1010, each modulation pulse1020may be separated from the next by a temporal spacing1050of between 0.2 and 2 seconds. For example, in a pulse train1010of 200 millisecond pulse duration1040modulation pulses1020, each separated by a 1.3 second temporal spacing1050from the next, a new modulation pulse1020will occur every 1.5 seconds. The frequency of modulation pulses1020may also be timed in accordance with physiological events of the subject. For example, modulation pulses1020may occur at a frequency chosen from among multiples of a breathing frequency, such as four, eight, or sixteen. In another example, modulation pulses1020may be temporally spaced so as not to permit a complete relaxation of a muscle after causing a muscular contraction. The pulse duration1040of modulation pulses1020and the temporal spacing1050between modulation pulses1020in a pulse train1010may be maintained for a majority of the modulation pulses1020, or may be varied over the course of a treatment session according to a subject's need. Such variations may also be implemented for the modulation sub-pulse duration and temporal spacing. Pulse train1010depicts a primary signal pulse train, as generated by external unit120. In some embodiments, the primary signal may result in a secondary signal on the secondary antenna152of implant unit110. This signal may be converted to a direct current signal for delivery to modulation electrodes158a,158b. In this situation, the generation of modulation sub-pulse1030may result in the generation and delivery of a square wave of a similar duration as modulation sub-pulse1030to modulation electrodes158a,158b. In an embodiment for the treatment of sleep disordered breathing, modulation pulses1020and modulation sub-pulses1030may include stimulation pulses and stimulation sub-pulses adapted to cause neural stimulation. A pulse train1010of this embodiment may be utilized, for example, to provide ongoing stimulation during a treatment session. Ongoing stimulation during a treatment session may include transmission of the pulse train for at least 70%, at least 80%, at least 90%, and at least 99% of the treatment session. In the context of sleep disordered breathing, a treatment session may be a period of time during which a subject is asleep and in need of treatment to prevent sleep disordered breathing. Such a treatment session may last anywhere from about three to ten hours. A treatment session may include as few as approximately 4,000 and as many as approximately 120,000 modulation pulses1020. In some embodiments, a pulse train1010may include at least 5,000, at least 10,000, and at least 100,000 modulation pulses1020. In the context of other conditions to which neural modulators of the present disclosure are applied, a treatment session may be of varying length according to the duration of the treated condition. 
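The pulse-train arithmetic above can be collected into a short worked example. The parameter values below are drawn from the ranges stated in the preceding paragraphs (a 10 MHz carrier, 200 microsecond sub-pulses at 50 Hz, 200 millisecond modulation pulses, 1.3 second spacing, and an eight-hour session chosen from the three-to-ten-hour range); they are illustrative, not a prescribed protocol.

```python
# Worked example of the pulse-train timing described above.
carrier_hz        = 10e6      # alternating current signal frequency
sub_pulse_s       = 200e-6    # duration of one sub-pulse 1030
sub_pulse_rate_hz = 50        # sub-pulse repetition rate within a pulse 1020
pulse_s           = 0.200     # duration 1040 of one modulation pulse 1020
spacing_s         = 1.3       # temporal spacing 1050 between pulses 1020
session_s         = 8 * 3600  # treatment session length (within 3-10 hours)

cycles_per_sub_pulse = carrier_hz * sub_pulse_s      # ~2000 carrier periods
sub_pulses_per_pulse = pulse_s * sub_pulse_rate_hz   # ~10 sub-pulses
pulse_period_s       = pulse_s + spacing_s           # a new pulse every 1.5 s
pulses_per_session   = session_s / pulse_period_s    # ~19,200 pulses

print(f"{cycles_per_sub_pulse:.0f} carrier cycles per sub-pulse, "
      f"{sub_pulses_per_pulse:.0f} sub-pulses per pulse, "
      f"one pulse every {pulse_period_s:.1f} s, "
      f"{pulses_per_session:.0f} pulses per session")
# Output: 2000 carrier cycles per sub-pulse, 10 sub-pulses per pulse,
# one pulse every 1.5 s, 19200 pulses per session -- consistent with the
# approximate figures in the text and within the 4,000-120,000 pulse range.
```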
Processor144may be configured to determine a degree of coupling between primary antenna150and secondary antenna152by monitoring one or more aspects of the primary coupled signal component received through feedback circuit148. In some embodiments, processor144may determine a degree of coupling between primary antenna150and secondary antenna152by monitoring a voltage level associated with the primary coupled signal component, a current level, or any other attribute that may depend on the degree of coupling between primary antenna150and secondary antenna152. For example, in response to periodic sub-modulation signals applied to primary antenna150, processor144may determine a baseline voltage level or current level associated with the primary coupled signal component. This baseline voltage level, for example, may be associated with a range of movement of the patient's tongue when a sleep apnea event or its precursor is not occurring, e.g. during normal breathing. As the patient's tongue moves toward a position associated with a sleep apnea event or its precursor, the coaxial, lateral, or angular offset between primary antenna150and secondary antenna152may change. As a result, the degree of coupling between primary antenna150and secondary antenna152may change, and the voltage level or current level of the primary coupled signal component on primary antenna150may also change. Processor144may be configured to recognize a sleep apnea event or its precursor when a voltage level, current level, or other electrical characteristic associated with the primary coupled signal component changes by a predetermined amount or reaches a predetermined absolute value. FIG.18provides a graph that illustrates this principle in more detail. For a two-coil system where one coil receives a radio frequency (RF) drive signal, graph200plots a rate of change in induced current in the receiving coil as a function of coaxial distance between the coils. For various coil diameters and initial displacements, graph200illustrates the sensitivity of the induced current to further displacement between the coils, moving them either closer together or further apart. It also indicates that, overall, the induced current in the secondary coil will decrease as the secondary coil is moved away from the primary drive coil, i.e. the rate of change of induced current, in mA/mm, is consistently negative. The sensitivity of the induced current to further displacement between the coils varies with distance. For example, at a separation distance of 10 mm, the rate of change in current as a function of additional displacement in a 14 mm coil is approximately −6 mA/mm. If the displacement of the coils is approximately 22 mm, the rate of change in the induced current in response to additional displacement is approximately −11 mA/mm, which corresponds to a local maximum in the magnitude of the rate of change of the induced current. Increasing the separation distance beyond 22 mm continues to result in a decline in the induced current in the secondary coil, but the magnitude of the rate of change decreases. For example, at a separation distance of about 30 mm, the 14 mm coil experiences a rate of change in the induced current in response to additional displacement of about −8 mA/mm. 
With this type of information, processor144may be able to determine a particular degree of coupling between primary antenna150and secondary antenna152, at any given time, by observing the magnitude and/or rate of change in the magnitude of the current associated with the primary coupled signal component on primary antenna150. Processor144may be configured to determine a degree of coupling between primary antenna150and secondary antenna152by monitoring other aspects of the primary coupled signal component. For example, in some embodiments, a residual signal, or an echo signal, may be monitored. As shown inFIG.14, circuitry180in implant unit110may include inductors, capacitors, and resistors, and thus may constitute an LRC circuit. As described in greater detail above, when external unit120transmits a modulation (or sub-modulation) control signal, a corresponding signal is developed on secondary antenna152. The signal developed on secondary antenna152causes current to flow in circuitry180of implant unit110, exciting the LRC circuit. When excited, the LRC circuit may oscillate at its resonant frequency, related to the values of L (inductance), R (resistance), and C (capacitance) in the circuit. When processor144discontinues generating the control signal, both the oscillating signal on primary antenna150and the oscillating signal on secondary antenna152may decay over a period of time as the current is dissipated. As the oscillating signal on the secondary antenna152decays, so too does the coupled feedback signal received by primary antenna150. Thus, the decaying signal in circuitry180of implant unit110may be monitored by processor144of external unit120. This monitoring may be further facilitated by configuring the circuitry170of external unit120to allow the control signal generated in primary antenna150to dissipate faster than the signal in the implant unit110. Monitoring the residual signal and comparing it to expected values of a residual signal may provide processor144with an indication of a degree of coupling between primary antenna150and secondary antenna152. Monitoring the decaying oscillating signal in the implant unit110may also provide processor144with information about the performance of implant unit110. Processor144may be configured to compare the parameters of the control signal with the parameters of the detected decaying implant signal. For example, an amplitude of the decaying signal is proportional to the amount of energy remaining in implant unit110; by comparing an amount of energy transmitted in the control signal with an amount of energy remaining in the implant, processor144may determine a level of power consumption in the implant. Further, by comparing a level of power consumption in the implant to a detected amount of tongue movement, processor144may determine an efficacy level of transmitted modulation signals. Monitoring the residual, or echo signals, in implant unit110may permit the implementation of several different features. Thus, processor144may be able to determine information including power consumption in implant unit110, current delivery to the tissue by implant unit110, energy delivery to implant unit110, functionality of implant unit110, and other parameters determinable through residual signal analysis. Processor144may be configured to monitor the residual implant signal in a diagnostic mode. 
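For readers less familiar with ringing in an LRC circuit, the standard series-RLC relations give a feel for what the decaying echo looks like. The component values in the sketch below are hypothetical placeholders chosen only so the resonance lands near the 6.5-13.6 MHz band mentioned elsewhere in this description; they are not values disclosed for implant unit110.

```python
# Standard series-RLC relations applied to hypothetical component values.
import math

L = 2.2e-6    # inductance in henries (hypothetical)
C = 100e-12   # capacitance in farads (hypothetical)
R = 10.0      # resistance in ohms (hypothetical)

f0  = 1.0 / (2.0 * math.pi * math.sqrt(L * C))  # resonant frequency of the ringing
tau = 2.0 * L / R                               # time constant of the exp(-t/tau) current envelope
Q   = math.sqrt(L / C) / R                      # quality factor: how slowly the echo dies out

print(f"resonance ~{f0 / 1e6:.1f} MHz, decay time constant ~{tau * 1e9:.0f} ns, Q ~{Q:.1f}")
# A residual signal that decays much faster or slower than expected, or that is
# absent altogether, is the kind of deviation a diagnostic mode could look for.
```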
For example, if processor144detects no residual signal in implant unit110after transmission of a control signal, it may determine that implant unit110is unable to receive any type of transmission, and is not functioning. In such a case, processor144may cause a response that includes an indication to a user that implant unit110is not functioning properly. Such an indication may be in the form of, e.g., an audible or visual alarm. In another potential malfunction, if processor144detects a residual signal in the implant that is higher than expected, it may determine that, while implant unit is receiving a transmitted control signal, the transmitted energy is not being transferred to the tissue by electrodes158a,158b, at an appropriate rate. Processor144may also be configured to implement a treatment protocol including the application of a desired target current level to be applied by the modulation electrodes (e.g., 1 mA). Even if the modulation control signal delivers a signal of constant amplitude, the delivered current may not remain stable. The coupled feedback signal detected by primary antenna150may be used as the basis for feedback control of the implant unit to ensure that the implant delivers a stable 1 mA current during each application of a modulation control signal. Processor144, by analyzing the residual signal in the implant, may determine an amount of current delivered during the application of a modulation control signal. Processor144may then increase or decrease the amplitude of the modulation control signal based on the determined information about the delivered current. Thus, the modulation control signal applied to primary antenna150may be adjusted until the observed amplitude of the echo signal indicates that the target current level has been achieved. In some embodiments, processor144may be configured to alter a treatment protocol based on detected efficacy during a therapy period. As described above, processor144may be configured, through residual signal analysis, to determine the amount of current, power, or energy delivered to the tissue through electrodes158a,158b. Processor144may be configured to correlate the detected amount of tongue movement as a result of a modulation control signal with the amount of power ultimately delivered to the tissue. Thus, rather than comparing the effects of signal transmission with the amount of power or energy transmitted (which processor144may also be configured to do), processor144may compare the effects of signal transmission with the amount of power delivered. By comparing modulating effects with power delivered, processor144may be able to more accurately optimize a modulation signal. The residual signal feedback methods discussed above may be applied to any of several other embodiments of the disclosure as appropriate. For example, information gathered through residual signal feedback analysis may be included in the information stored in memory unit143and transmitted to a relay or final destination via communications interface145of external unit120. In another example, the above described residual signal feedback analysis may be incorporated into methods detecting tongue movement and tongue vibration. In some embodiments, an initially detected coupling degree may establish a baseline range when the patient attaches external unit120to the skin. 
Presumably, while the patient is awake, the tongue is not blocking the patient's airway and moves with the patient's breathing in a natural range, where coupling between primary antenna150and secondary antenna152may be within a baseline range. A baseline coupling range may encompass a maximum coupling between primary antenna150and secondary antenna152. A baseline coupling range may also encompass a range that does not include a maximum coupling level between primary antenna150and secondary antenna152. Thus, the initially determined coupling may be fairly representative of a non-sleep apnea condition and may be used by processor144as a baseline in determining a degree of coupling between primary antenna150and secondary antenna152. As the patient wears external unit120, processor144may periodically scan over a range of primary signal amplitudes to determine current values of coupling. If a periodic scan results in determination of a degree of coupling different from the baseline coupling, processor144may determine that there has been a change from the baseline initial conditions. By periodically determining a degree of coupling value, processor144may be configured to determine, in situ, appropriate parameter values for the modulation control signal that will ultimately result in nerve modulation. For example, by determining the degree of coupling between primary antenna150and secondary antenna152, processor144may be configured to select characteristics of the modulation control signal (e.g., amplitude, pulse duration, frequency, etc.) that may provide a modulation signal at electrodes158a,158bin proportion to or otherwise related to the determined degree of coupling. In some embodiments, processor144may access a lookup table or other data stored in a memory correlating modulation control signal parameter values with degree of coupling. In this way, processor144may adjust the applied modulation control signal in response to an observed degree of coupling. Additionally or alternatively, processor144may be configured to determine the degree of coupling between primary antenna150and secondary antenna152during modulation. The tongue, or other structure on or near which the implant is located, and thus implant unit110, may move as a result of modulation. Thus, the degree of coupling may change during modulation. Processor144may be configured to determine the degree of coupling as it changes during modulation, in order to dynamically adjust characteristics of the modulation control signal according to the changing degree of coupling. This adjustment may permit processor144to cause implant unit110to provide an appropriate modulation signal at electrodes158a,158bthroughout a modulation event. For example, processor144may alter the primary signal in accordance with the changing degree of coupling in order to maintain a constant modulation signal, or to cause the modulation signal to be reduced in a controlled manner according to patient needs. More particularly, the response of processor144may be correlated to the determined degree of coupling. In situations where processor144determines that the degree of coupling between primary antenna150and secondary antenna152has fallen only slightly below a predetermined coupling threshold (e.g., during snoring or during a small vibration of the tongue or other sleep apnea event precursor), processor144may determine that only a small response is necessary. 
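The lookup-table approach mentioned above can be sketched as follows. The table entries, coupling bands, and amplitude and pulse-duration values are hypothetical placeholders chosen for clarity, not prescribed therapy settings.

```python
# Illustrative sketch of a lookup table correlating modulation control signal
# parameters with the determined degree of coupling. All values are hypothetical.

COUPLING_TABLE = [
    # (minimum coupling fraction, amplitude in volts, pulse duration in microseconds)
    (0.8, 3.0, 100),   # strong coupling: less transmitted power needed
    (0.5, 5.0, 150),
    (0.2, 8.0, 200),   # weak coupling: more transmitted power needed
]

def select_parameters(coupling_fraction):
    """Return (amplitude_v, pulse_duration_us) for the first row whose minimum
    coupling the measured value meets; fall back to the weakest-coupling row."""
    for minimum, amplitude_v, pulse_us in COUPLING_TABLE:
        if coupling_fraction >= minimum:
            return amplitude_v, pulse_us
    return COUPLING_TABLE[-1][1], COUPLING_TABLE[-1][2]

print(select_parameters(0.65))  # (5.0, 150) with these placeholder rows
```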
Thus, processor144may select modulation control signal parameters that will result in a relatively small response (e.g., a short stimulation of a nerve, small muscle contraction, etc.). Where, however, processor144determines that the degree of coupling has fallen substantially below the predetermined coupling threshold (e.g., where the tongue has moved enough to cause a sleep apnea event), processor144may determine that a larger response is required. As a result, processor144may select modulation control signal parameters that will result in a larger response. In some embodiments, only enough power may be transmitted to implant unit110to cause the desired level of response. In other words, processor144may be configured to cause a metered response based on the determined degree of coupling between primary antenna150and secondary antenna152. As the determined degree of coupling decreases, processor144may cause transfer of power in increasing amounts. Such an approach may preserve battery life in the external unit120, may protect circuitry170and circuitry180, may increase effectiveness in addressing the type of detected condition (e.g., sleep apnea, snoring, tongue movement, etc.), and may be more comfortable for the patient. In some embodiments, processor144may employ an iterative process in order to select modulation control signal parameters that result in a desired response level. For example, upon determining that a modulation control signal should be generated, processor144may cause generation of an initial modulation control signal based on a set of predetermined parameter values. If feedback from feedback circuit148indicates that a nerve has been modulated (e.g., if an increase in a degree of coupling is observed), then processor144may return to a monitoring mode by issuing sub-modulation control signals. If, on the other hand, the feedback suggests that the intended nerve modulation did not occur as a result of the intended modulation control signal or that modulation of the nerve occurred but only partially provided the desired result (e.g., movement of the tongue only partially away from the airway), processor144may change one or more parameter values associated with the modulation control signal (e.g., the amplitude, pulse duration, etc.). Where no nerve modulation occurred, processor144may increase one or more parameters of the modulation control signal periodically until the feedback indicates that nerve modulation has occurred. Where nerve modulation occurred, but did not produce the desired result, processor144may re-evaluate the degree of coupling between primary antenna150and secondary antenna152and select new parameters for the modulation control signal targeted toward achieving a desired result. For example, where stimulation of a nerve causes the tongue to move only partially away from the patient's airway, additional stimulation may be desired. Because the tongue has moved away from the airway, however, implant unit110may be closer to external unit120and, therefore, the degree of coupling may have increased. As a result, to move the tongue a remaining distance to a desired location may require transfer to implant unit110of a smaller amount of power than what was supplied prior to the last stimulation-induced movement of the tongue. Thus, based on a newly determined degree of coupling, processor144can select new parameters for the stimulation control signal aimed at moving the tongue the remaining distance to the desired location. 
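An illustrative sketch of the iterative, metered approach described above follows. The step size, amplitude ceiling, and feedback callback are hypothetical assumptions; in practice the feedback would come from the observed change in coupling.

```python
# Illustrative sketch: starting from the last sub-modulation amplitude, increase
# the control signal by predetermined steps until feedback indicates that nerve
# modulation occurred. Step size and ceiling are hypothetical.

def escalate_until_modulated(start_amplitude_v, modulation_observed,
                             step_v=0.5, max_amplitude_v=10.0):
    """Return the amplitude at which modulation was first observed, or None if
    the ceiling was reached without an observed response."""
    amplitude = start_amplitude_v
    while amplitude <= max_amplitude_v:
        if modulation_observed(amplitude):   # e.g., an increase in coupling
            return amplitude
        amplitude += step_v
    return None

# Example with a stand-in feedback function that "responds" above 6 V.
print(escalate_until_modulated(4.0, lambda v: v >= 6.0))  # 6.0
```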
In one mode of operation, processor144may be configured to sweep over a range of parameter values until nerve modulation is achieved. For example, in circumstances where an applied sub-modulation control signal results in feedback indicating that nerve modulation is appropriate, processor144may use the last applied sub-modulation control signal as a starting point for generation of the modulation control signal. The amplitude and/or pulse duration (or other parameters) associated with the signal applied to primary antenna150may be iteratively increased by predetermined amounts and at a predetermined rate until the feedback indicates that nerve modulation has occurred. Processor144may be configured to determine or derive various physiologic data based on the determined degree of coupling between primary antenna150and secondary antenna152. For example, in some embodiments the degree of coupling may indicate a distance between external unit120and implant unit110, which processor144may use to determine a position of external unit120or a relative position of a patient's tongue. Monitoring the degree of coupling can also provide such physiologic data as whether a patient's tongue is moving or vibrating (e.g., whether the patient is snoring), by how much the tongue is moving or vibrating, the direction of motion of the tongue, the rate of motion of the tongue, etc. In response to any of these determined physiologic data, processor144may regulate delivery of power to implant unit110based on the determined physiologic data. For example, processor144may select parameters for a particular modulation control signal or series of modulation control signals for addressing a specific condition relating to the determined physiologic data. If the physiologic data indicates that the tongue is vibrating, for example, processor144may determine that a sleep apnea event is likely to occur and may issue a response by delivering power to implant unit110in an amount selected to address the particular situation. If the tongue is in a position blocking the patient's airway (or partially blocking a patient's airway), but the physiologic data indicates that the tongue is moving away from the airway, processor144may opt to not deliver power and wait to determine if the tongue clears on its own. Alternatively, processor144may deliver a small amount of power to implant unit110(e.g., especially where a determined rate of movement indicates that the tongue is moving slowly away from the patient's airway) to encourage the tongue to continue moving away from the patient's airway or to speed its progression away from the airway. In an embodiment for the treatment of snoring, processor144may be configured to determine when a subject is snoring based on a feedback signal that varies based on a breathing pattern of the subject. The feedback signal, may include, for example, the signal induced in the primary antenna as a result of a sub-modulating signal transmitted to the secondary antenna. In an embodiment for determining whether a subject is snoring, in addition to a tongue location, tongue movement may be detected through a degree of coupling. Tongue movement, which may include tongue velocity, tongue displacement, and tongue vibration, may be indicative of snoring. Processor144may be configured to detect a tongue movement pattern and compare the detected movement pattern to known patterns indicative of snoring. 
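As an illustration of deriving physiologic data from the coupling signal, the following sketch infers a direction and rate of tongue motion from successive coupling samples; a rising coupling value is taken to indicate motion of the implant toward the external unit. The sample period, stillness threshold, and labels are hypothetical simplifications.

```python
# Illustrative sketch: deriving tongue motion direction and rate from a time
# series of coupling values. Units and thresholds are hypothetical.

def tongue_motion(coupling_samples, sample_period_s=0.1, still_threshold=0.01):
    """Return (direction, rate_per_s) based on the last two coupling samples."""
    delta = coupling_samples[-1] - coupling_samples[-2]
    rate = delta / sample_period_s
    if abs(rate) < still_threshold:
        return "still", 0.0
    direction = "toward external unit" if rate > 0 else "away from external unit"
    return direction, rate

print(tongue_motion([0.50, 0.50, 0.55]))  # ('toward external unit', ~0.5 per second)
```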
For example, when a patient snores, the tongue may vibrate in a range between 60 and 100 Hz; such vibration may be detected by monitoring the coupling signal for a signal at a similar frequency. Such changes in the coupling signal may be relatively small compared to changes associated with larger movements of the tongue. Thus, snoring detection methods may be optimized to identify low amplitude signals. A low amplitude signal between 60 and 100 Hz may thus constitute a tongue movement pattern indicative of snoring. Additional patterns may also be detected. Another exemplary feedback signal may include a signal obtained by external unit120about a snoring condition. For example, audio sensors, microphones, and/or piezoelectric devices may be incorporated into external unit120to gather data about a potential snoring condition. Such sensors may detect sound vibrations traveling through the air and may detect vibrations of the subject's body near the location of the external unit's contact with the skin. In still another embodiment, the feedback signal may be provided by a thermistor, or other temperature measuring device, positioned so as to measure a temperature in the airway. In yet another embodiment, a feedback signal that varies based upon a breathing pattern of the subject may be provided by electromyography electrodes. Electromyography electrodes may detect electrical activity in muscles. Interpretation of this electrical activity may provide information about muscular contraction and muscle tone. During normal breathing, subjects typically exhibit a pattern of muscular contractions that may be associated with the normal breathing, as muscles from the face, chin, neck, ribs, and diaphragm experience contractions in sequence. Electromyography electrodes may be used to measure both the strength and the pattern of muscular contractions during breathing. In still another embodiment, an accelerometer located on, or otherwise associated with, external unit120may be utilized as the feedback signal to detect snoring. Located on the neck, ribs, or diaphragm, an accelerometer, by measuring external body movements, may detect a subject's breathing patterns. The accelerometer-detected breathing patterns may be analyzed to detect deviations from a normal breathing pattern, such as breathing patterns indicating heightened or otherwise altered effort. In additional embodiments, multiple feedback signals may be utilized to detect snoring in various combinations. For example, processor144may be configured such that, when a tongue movement pattern indicative of snoring is detected, sensors incorporated into external unit120are then monitored for confirmation that a snoring condition is occurring. In another example, processor144may be configured to utilize sensors in external unit120and/or an airway temperature measuring device to detect the presence of snoring, and then to detect and record the tongue movement pattern associated with the snoring. In this way, processor144may be configured to learn a tongue movement pattern associated with snoring that is individual to a particular user. Snoring may be correlated with heightened or otherwise altered breathing effort. Any or all of the previously described feedback methods may be used to determine or detect a heightened or otherwise altered breathing effort. Detection of such heightened or otherwise altered breathing effort may be used by processor144to determine that snoring is occurring. 
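For illustration, a vibration pattern of the kind described above could be identified by examining the 60-100 Hz band energy of a sampled coupling signal, as in the sketch below. The sampling rate, band edges, and energy threshold are hypothetical, and the example assumes the NumPy library is available.

```python
# Illustrative sketch: flag likely snoring when low-amplitude energy appears in
# the 60-100 Hz band of the sampled coupling signal. Values are hypothetical.
import numpy as np

def snoring_detected(coupling_signal, sample_rate_hz=1000.0,
                     band=(60.0, 100.0), energy_threshold=1e-3):
    """Return True when band-limited energy suggests tongue vibration."""
    spectrum = np.abs(np.fft.rfft(coupling_signal)) ** 2
    freqs = np.fft.rfftfreq(len(coupling_signal), d=1.0 / sample_rate_hz)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    band_energy = spectrum[in_band].sum() / len(coupling_signal)
    return band_energy > energy_threshold

# Example: one second of a small 80 Hz vibration riding on a constant coupling level.
t = np.arange(0, 1.0, 1.0 / 1000.0)
signal = 0.5 + 0.02 * np.sin(2 * np.pi * 80.0 * t)
print(snoring_detected(signal))  # True
```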
If snoring is detected, processor144may be configured to cause a hypoglossal nerve modulation control signal to be applied to the primary antenna in order to wirelessly transmit the hypoglossal nerve modulation control signal to the secondary antenna of implant unit110. Thus, in response to a detection of snoring, the processor may cause the hypoglossal nerve to be modulated. Hypoglossal nerve modulation may cause a muscular contraction of the genioglossus muscle, which may in turn alleviate the snoring condition. The scenarios described are exemplary only. Processor144may be configured with software and/or logic enabling it to address a variety of different physiologic scenarios with particularity. In each case, processor144may be configured to use the physiologic data to determine an amount of power to be delivered to implant unit110in order to modulate nerves associated with the tongue with the appropriate amount of energy. The disclosed embodiments may be used in conjunction with a method for regulating delivery of power to an implant unit. The method may include determining a degree of coupling between primary antenna150associated with external unit120and secondary antenna152associated with implant unit110, implanted in the body of a patient. Determining the degree of coupling may be accomplished by processor144located external to implant unit110and associated with external unit120. Processor144may be configured to regulate delivery of power from the external unit to the implant unit based on the determined degree of coupling. As previously discussed, the degree of coupling determination may enable the processor to further determine a location of the implant unit. The motion of the implant unit may correspond to motion of the body part where the implant unit may be attached. This may be considered physiologic data received by the processor. The processor may, accordingly, be configured to regulate delivery of power from the power source to the implant unit based on the physiologic data. In alternative embodiments, the degree of coupling determination may enable the processor to determine information pertaining to a condition of the implant unit. Such a condition may include location as well as information pertaining to an internal state of the implant unit. The processor may, according to the condition of the implant unit, be configured to regulate delivery of power from the power source to the implant unit based on the condition data. In some embodiments, implant unit110may include a processor located on the implant. A processor located on implant unit110may perform all or some of the processes described with respect to the at least one processor associated with an external unit. For example, a processor associated with implant unit110may be configured to receive a control signal prompting the implant controller to turn on and cause a modulation signal to be applied to the implant electrodes for modulating a nerve. Such a processor may also be configured to monitor various sensors associated with the implant unit and to transmit this information back to an external unit. Power for the processor unit may be supplied by an onboard power source or received via transmissions from an external unit. In other embodiments, implant unit110may be self-sufficient, including its own power source and a processor configured to operate the implant unit110with no external interaction. 
For example, with a suitable power source, the processor of implant unit110could be configured to monitor conditions in the body of a subject (via one or more sensors or other means), determine when those conditions warrant modulation of a nerve, and generate a signal to the electrodes to modulate a nerve. The power source could be regenerative based on movement or biological function; or the power source could be periodically rechargeable from an external location, such as, for example, through induction. FIG.19illustrates an exemplary implantation location for implant unit110.FIG.19depicts an implantation location in the vicinity of a genioglossus muscle1060that may be accessed through derma on an underside of a subject's chin.FIG.19depicts the hypoglossal nerve (i.e., cranial nerve XII). The hypoglossal nerve1051, through lateral branch1053and medial branch1052, innervates the muscles of the tongue and other glossal muscles, including the genioglossus1060, the hyoglossus1062, the mylohyoid (not shown), and the geniohyoid1061muscles. The mylohyoid muscle, not pictured inFIG.19, forms the floor of the oral cavity, and wraps around the sides of the genioglossus muscle1060. The horizontal compartment of the genioglossus1060is mainly innervated by the medial terminal fibers1054of the medial branch1052, which diverges from the lateral branch1053at terminal bifurcation1055. The distal portion of medial branch1052then variegates into the medial terminal fibers1054. Contraction of the horizontal compartment of the genioglossus muscle1060may serve to open or maintain a subject's airway. Contraction of other glossal muscles may assist in other functions, such as swallowing, articulation, and opening or closing the airway. Because the hypoglossal nerve1051innervates several glossal muscles, it may be advantageous, for OSA treatment, to confine modulation of the hypoglossal nerve1051to the medial branch1052or even the medial terminal fibers1054of the hypoglossal nerve1051. In this way, the genioglossus muscle, most responsible for tongue movement and airway maintenance, may be selectively targeted for contraction-inducing neuromodulation. Alternatively, the horizontal compartment of the genioglossus muscle may be selectively targeted. The medial terminal fibers1054may, however, be difficult to affect with neuromodulation, as they are located within the fibers of the genioglossus muscle1060. Embodiments of the present invention facilitate modulation of the medial terminal fibers1054, as discussed further below. In some embodiments, implant unit110, including at least one pair of modulation electrodes, e.g., electrodes158a,158b, and at least one circuit may be configured for implantation through derma (i.e., skin) on an underside of a subject's chin. When implanted through derma on an underside of a subject's chin, an implant unit110may be located proximate to medial terminal fibers1054of the medial branch1052of a subject's hypoglossal nerve1051. An exemplary implant location1070is depicted inFIG.19. In some embodiments, implant unit110may be configured such that the electrodes158a,158bcause modulation of at least a portion of the subject's hypoglossal nerve through application of an electric field to a section of the hypoglossal nerve1051distal of a terminal bifurcation1055to lateral and medial branches1053,1052of the hypoglossal nerve1051. 
In additional or alternative embodiments, implant unit110may be located such that an electric field extending from the modulation electrodes158a,158bcan modulate one or more of the medial terminal fibers1054of the medial branch1052of the hypoglossal nerve1051. Thus, the medial branch1052or the medial terminal fibers1054may be modulated so as to cause a contraction of the genioglossus muscle1060, which may be sufficient to either open or maintain a patient's airway. When implant unit110is located proximate to the medial terminal fibers1054, the electric field may be configured so as to cause substantially no modulation of the lateral branch of the subject's hypoglossal nerve1051. This may have the advantage of providing selective modulation targeting of the genioglossus muscle1060. As noted above, it may be difficult to modulate the medial terminal fibers1054of the hypoglossal nerve1051because of their location within the genioglossus muscle1060. Implant unit110may be configured for location on a surface of the genioglossus muscle1060. Electrodes158a,158b, of implant unit110may be configured to generate a parallel electric field1090, sufficient to cause modulation of the medial terminal fibers1054even when electrodes158a,158bare not in contact with the fibers of the nerve. That is, the anodes and the cathodes of the implant may be configured such that, when energized via a circuit associated with the implant110and electrodes158a,158b, the electric field1090extending between electrodes158a,158bmay be in the form of a series of substantially parallel arcs extending through and into the muscle tissue on which the implant is located. A pair of parallel line electrodes or two series of circular electrodes may be suitable configurations for producing the appropriate parallel electric field lines. Thus, when suitably implanted, the electrodes of implant unit110may modulate a nerve in a contactless fashion, through the generation of parallel electric field lines. Furthermore, the efficacy of modulation may be increased by an electrode configuration suitable for generating parallel electric field lines that run partially or substantially parallel to nerve fibers to be modulated. In some embodiments, the current induced by parallel electric field lines may have a greater modulation effect on a nerve fiber if the electric field lines1090and the nerve fibers to be modulated are partially or substantially parallel. The inset illustration ofFIG.19depicts electrodes158aand158bgenerating electric field lines1090(shown as dashed lines) substantially parallel to medial terminal fibers1054. In order to facilitate the modulation of the medial terminal fibers1054, implant unit110may be designed or configured to ensure the appropriate location of electrodes when implanted. An exemplary implantation is depicted inFIG.20. For example, a flexible carrier161of the implant may be configured such that at least a portion of a flexible carrier161of the implant is located at a position between the genioglossus muscle1060and the geniohyoid muscle1061. Flexible carrier161may be further configured to permit at least one pair of electrodes arranged on flexible carrier161to lie between the genioglossus muscle1060and the mylohyoid muscle. Either or both of the extensions162aand162bof elongate arm161may be configured to adapt to a contour of the genioglossus muscle. 
Either or both of the extensions162aand162bof elongate arm161may be configured to extend away from the underside of the subject's chin along a contour of the genioglossus muscle1060. Either or both of extension arms162a,162bmay be configured to wrap around the genioglossus muscle when an antenna152is located between the genioglossus1060and geniohyoid muscle1061. In such a configuration, antenna152may be located in a plane substantially parallel with a plane defined by the underside of a subject's chin, as shown inFIG.20. Flexible carrier161may be configured such that the at least one pair of spaced-apart electrodes can be located in a space between the subject's genioglossus muscle and an adjacent muscle. Flexible carrier161may be configured such that at least one pair of modulation electrodes158a,158bis configured for implantation adjacent to a horizontal compartment1065of the genioglossus muscle1060. The horizontal compartment1065of the genioglossus1060is depicted inFIG.20and is the portion of the muscle in which the muscle fibers run in a substantially horizontal, rather than vertical, oblique, or transverse direction. At this location, the hypoglossal nerve fibers run between and in parallel to the genioglossus muscle fibers. In such a location, implant unit110may be configured such that the modulation electrodes generate an electric field substantially parallel to the direction of the muscle fibers, and thus, the medial terminal fibers1054of the hypoglossal nerve in the horizontal compartment. As described above implant unit110may include electrodes158a,158bon both extensions162a,162b, of extension arm162. In such a configuration, implant unit110may be configured for bilateral hypoglossal nerve stimulation. The above discussion has focused on a single hypoglossal nerve1051. The body contains a pair of hypoglossal nerves1051, on the left and right sides, each innervating muscles on its side. When a single hypoglossal nerve1051is modulated, it may cause stronger muscular contractions on the side of the body with which the modulated hypoglossal nerve is associated. This may result in asymmetrical movement of the tongue. When configured for bilateral stimulation, implant unit110may be able to stimulate both a left and a right hypoglossal nerve1051, causing more symmetric movement of the tongue and more symmetric airway dilation. As illustrated inFIGS.11aand11b, flexible carrier161may be sized and shaped for implantation in a vicinity of a hypoglossal nerve to be modulated such that the first pair of modulation electrodes is located to modulate a first hypoglossal nerve on a first side of the subject and the second pair of modulation electrodes is located to modulate a second hypoglossal nerve on a second side of the subject. Bilateral stimulation protocols may include various sequences of modulation. For example, both pairs of modulation electrodes may be activated together to provide a stronger muscular response in the subject. In another example, the modulation electrodes may be activated in an alternating sequence, first one, and then the other. Such a sequence may reduce muscle or neuronal fatigue during a therapy period, and may reduce the diminishment of sensitivity that can occur in a neuron subject to a constant modulation signal. In still another example, the modulation electrodes may be activated in an alternating sequence that includes polarity reversals of the electric field. 
In such an embodiment, one pair of electrodes may be activated with a neuromuscular modulating electric field having a polarity configured to cause a muscular contraction, while the other pair of electrodes may be activated with a field having a reversed polarity. By alternating the polarity, it may be possible to reduce short term neuronal fatigue and to minimize or eliminate long term neuronal damage. In some configurations, extensions162aand162bmay act as elongated arms extending from a central portion of flexible carrier161of implant unit110. The elongated arms may be configured to form an open ended curvature around a muscle, with a nerve to be stimulated, e.g., a hypoglossal nerve, located within the curvature formed by the elongated arms. Such a configuration may also include a stiffening portion located on or within flexible carrier161. Such a stiffening portion may comprise a material that is stiffer than a material of flexible carrier161. The stiffening portion may be preformed in a shape to better accommodate conforming flexible carrier161to a muscle of the subject, such as a genioglossus muscle. The stiffening portion may also be capable of plastic deformation, so as to permit a surgeon to modify the curvature of the flexible carrier161prior to implantation. The diameter of the curvature of the elongated arms may be significantly larger than the diameter of the nerve to be stimulated, for example, 2, 5, 10, 20, or more times larger. In some embodiments, a plurality of nerves to be stimulated, for example a left hypoglossal nerve and a right hypoglossal nerve, may be located within the arc of curvature formed by the elongated arms. Other embodiments of the present disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the present disclosure. While this disclosure provides examples of the neuromodulation devices employed for the treatment of certain conditions, usage of the disclosed neuromodulation devices is not limited to the disclosed examples. The disclosure of uses of embodiments of the invention for neuromodulation is to be considered exemplary only. In its broadest sense, the invention may be used in connection with the treatment of any physiological condition through neuromodulation. Alternative embodiments will become apparent to those skilled in the art to which the present invention pertains without departing from its spirit and scope. Accordingly, the scope of the present invention is defined by the appended claims rather than the foregoing description.
157,847
11857792
DETAILED DESCRIPTION Systems, devices, and methods discussed herein can be configured for electrical stimulation of cranial nerves. Examples discussed herein can include methods for implanting a neuromodulation system or methods for using an implanted system to deliver neuromodulation therapy to one or more target cranial nerves, or to sense physiologic information about a patient, such as to monitor a disease state or control a neuromodulation therapy or other therapy. In an example, system or device features discussed herein can facilitate implantation of devices, leads, sensors, electrostimulation hardware, or other therapeutic means on or near cranial nerve tissue. In an example, the present subject matter includes systems and methods for implanting a neuromodulation device near or below an inferior border of a mandible (i.e., the body or ramus of the mandible or jaw bone) in an anterior triangle of the neck (e.g., located in the medial aspect), or in a posterior triangle of the neck (e.g., located in the lateral aspect), or in multiple regions of the neck. The present inventors have recognized that a problem to be solved can include providing a minimally invasive neuromodulation therapy or treatment system that can provide signals to neural targets in or near a cervical region of a patient. The problem can include treating, among other things, obstructive sleep apnea (OSA), heart failure, hypertension, epilepsy, depression, post-traumatic stress disorder (PTSD), attention deficit hyperactivity disorder (ADHD), craniofacial pain syndrome, facial palsy, migraine headaches, xerostomia, atrial fibrillation, stroke, autism, inflammatory bowel disease, chronic inflammation, chronic pain, tinnitus, rheumatoid arthritis, or fibromyalgia. The problem can include providing an implantable system that is resistant to migration or dislocation when the system is installed in a motion-prone body region such as in a neck or cervical region of a patient. The problem can further include stimulating multiple different cranial nerve targets concurrently or in a coordinated manner to provide an effective therapy. The present inventors have recognized, among other things, that a solution to the above-described problems can include a neuromodulation system that can be implanted in an anterior cervical region of a patient, such as at or under a mandible of the patient. In an example, the system can include a housing that can be coupled to tissue in or near an anterior triangle, such as to digastric muscle or tendon tissue, to mylohyoid muscle tissue, to a hyoid bone, or to a mandible, among other locations. The present inventors have recognized that the solution can include a device configured for wireless communication with an external power source or programmer, for example, with a communication device implanted at or near the housing in the anterior cervical region of the patient. The present inventors have recognized that the solution can include an implantable device with multiple electrode leads, such as can extend from a housing in multiple different directions, to interface with multiple different cranial nerves. The present inventors have recognized that the solution can further include or use physiologic information, such as can be sensed from a patient using implanted or external sensors or patient inputs, to update one or more characteristics of a therapy provided to the patient by the neuromodulation system. 
The present inventors have recognized that the neuromodulation systems and methods discussed herein can be used to treat OSA, among other disorders or diseases. In an example, an OSA treatment can use a neuromodulation device that is implanted in one or more of a submental triangle and a submandibular triangle, and an electrode lead with electrodes that are configured to be disposed at or near one or more targets on a hypoglossal nerve, vagus nerve, glossopharyngeal nerve, or trigeminal nerve (e.g., at a mandibular branch of the trigeminal nerve). In an example, the solution can include using multiple electrodes or electrode leads to deliver a coordinated, bilateral stimulation therapy to cranial nerve targets, such as to anterior and posterior branches of the hypoglossal nerve. The therapy can be configured to selectively stimulate or block a neural pathway that influences activity of one or more of tongue muscles, mylohyoid muscles, stylohyoid muscles, digastric muscles, or stylopharyngeus muscles of a patient, to thereby treat OSA. The description that follows describes systems, methods, techniques, instruction sequences, and computing machine program products that illustrate example embodiments of the present subject matter. In the following description, for purposes of explanation, numerous specific examples and aspects are set forth in order to provide an understanding of various embodiments of the present subject matter. It will be evident, however, to those skilled in the art, that embodiments of the present subject matter may be practiced in various combinations. Unless explicitly stated otherwise, structures (e.g., structural components, such as modules or functional blocks) are optional and may be combined or subdivided, and operations (e.g., in a procedure, algorithm, treatment, therapy, or other function) can vary in sequence or can be combined or divided. In an example, the implantable neuromodulation systems and devices discussed herein can comprise a control system, signal or pulse generator, or other therapy signal generator, such as can be disposed in one or more housings that can be communicatively coupled to share power and/or data. The housings can comprise one or more hermetic enclosures to protect the circuitry or other components therein. In an example, a housing can include one or more headers, such as can comprise a rigid or flexible interface for connecting the housing, or circuitry or components inside of the housing, with leads or other devices or components outside of the housing. In an example, a header can be used to couple signal generator circuitry inside the housing with electrodes or sensors outside of the housing. In an example, the header can be used to couple circuitry inside the housing with a telemetry antenna, wireless power communication devices (e.g., coils configured for near-field communications or NFC), or other devices, such as can be disposed on or comprise flexible substrates or flexible circuits. This system configuration allows the housing(s), lead(s), and flexible circuits to be implanted in different anatomic locations, such as in a neck or cervical region of a patient. In an example, the various system components can be implanted in one or more of the anatomic triangular regions or spaces in the cervical region, and leads or other devices external to a circuitry housing can be tunneled to other locations, including at various cranial nerve targets. 
Accordingly, various therapeutic elements can be implanted on or near target cranial nerves, and sensing elements can be implanted on or near the same or other cranial nerves or at other anatomic structures in the same or different locations. Some components can be located in a different anatomic location, such as in a different cervical region than is occupied by a housing. For example, a telemetry antenna or NFC coil can be provided at or near a surface of the skin, while a housing with circuitry that coordinates neuromodulation therapy or power signal management can be implanted elsewhere, such as more deeply within one of the anterior triangle spaces of the neck. In an example, multiple different housings can comprise a neuromodulation system, and the different housings can contain different control circuitry, power sources, sensors, or other components. The different housings and components therein can be tethered or connected, such as wirelessly or using leads or other flexible circuitry, such as in a serially-connected, daisy-chain configuration or in a star-like configuration. Such system configurations can facilitate implant of one portion of the system in one cervical region, while targeting therapy to a nerve target or sensing physiologic status information or patient activity level or posture from a different region. In an example, a system that is distributed across multiple different areas can help provide flexibility and strain relief from repetitive motion. The various housings for a cervically-implanted neuromodulation system can have various sizes, shapes, and features. For example, a housing can include surface contours that can correspond, generally, to contours of a triangular (e.g., in one or more dimensions) cervical region in a patient body. For example, some cervical spaces can include one or more three-dimensional regions or pockets, such as can be represented or defined in part by one or more generally triangular or pyramidal spaces, such as can narrow anteriorly and medially. Accordingly, an implantable device housing can have an oblique or truncated prism shape, such as at or along at least one of its faces, to facilitate positioning in such a pocket or space. In an example, the housing can have a generally cylindrical, prismatic, pyramidal, frustum, or spherical configuration, such as can include prismatic variations with or without parallel sides. For example, a housing configured to be implanted in anterior regions of the neck, such as the submandibular triangle or submental triangle, can have a housing shaped as a rectangular prism with wide sides parallel to a base of the mandible to minimize thickness, lessen patient discomfort, and avoid the submandibular gland. In this example, a lead or leads can extend from a header of the housing to one or more cranial nerves, such as the hypoglossal nerve in the submandibular triangle. Other cranial nerves and implantation sites can similarly be used, such as using similarly or differently shaped housings. The following discussion introduces various anatomic structures, including various triangle regions in a cervical or neck region. Following introduction of the anatomy, the discussion introduces various devices and features thereof that can be configured to provide neuromodulation to cranial nerve targets, among other targets, such as to treat various disorders, diseases, or symptoms. FIG.1illustrates generally a first anatomic example100of a front view of an anterior cervical region of a human. 
The region generally extends between a clavicle108and mandible116and can be divided into various additional regions or subregions. In an example, the anterior cervical region includes a pair of anterior triangles on opposite sides of a sagittal midline102, such as including an anterior triangle104as illustrated. The term “midline” as used herein refers to a line or plane of bilateral symmetry in the cervical or neck region of a person. In an example, a midline corresponds to the sagittal plane, that is, is the anteroposterior (AP) plane of the body. The anterior triangle104can include a region that is bounded by the midline102, a base of the mandible116, and a sternocleidomastoid muscle, or SCM106. A hyoid bone110can extend between the pair of anterior triangles across the midline102. The anterior triangle104can include, among other things, a digastric muscle112(e.g., including anterior and posterior portions of the digastric muscle112), a mylohyoid muscle114, and various other muscle, bone, nerve, and other body tissue. FIG.2illustrates generally a second anatomic example200that includes a portion of the anterior triangle104from the example ofFIG.1.FIG.2shows, for example, that the anterior triangle104can be divided into various regions, including a submandibular triangle206, and a submental triangle202. In an example, the anterior triangle104can further include a carotid triangle, as discussed below in the example ofFIG.3. A posterior triangle of the neck (not shown) can be divided into various regions, including an occipital triangle and a supraclavicular triangle. The submental triangle202is generally understood to include a region that is bounded by the midline102, the hyoid bone110, and the anterior digastric muscle204. The submandibular triangle206is generally understood to include a region that is bounded by the anterior digastric muscle204, the posterior digastric muscle208, and the base of the mandible116. FIG.3illustrates generally a third anatomic example300that includes a partial side view of the anterior triangle104. The example ofFIG.3further illustrates the location of the submandibular triangle206, such as in relation to the anterior digastric muscle204and the mandible116. The example ofFIG.3illustrates the carotid triangle302, such as can comprise a portion of the anterior triangle104in the cervical region. The carotid triangle302is generally understood to include a region that is bounded by the SCM106, the omohyoid muscle306, and the posterior digastric muscle208. In an example, an implantable neuromodulation device can be implanted in the anterior triangle104or in the posterior triangle, such as using the systems and methods discussed herein. In further examples, an implantable neuromodulation device can be implanted in one or more of the submental triangle202and the submandibular triangle206. The implantable neuromodulation device can be configured to provide a stimulation therapy to one or multiple nerve targets such as can be in or near the anterior triangle104or the posterior triangle, or to nerve targets that can be accessed via tunneled leads that extend from a housing disposed in the anterior triangle104or the posterior triangle. 
In other words, various regions in the anterior and posterior cervical triangles can provide access to a main body of, or to branches of, various cranial nerves, including the hypoglossal nerve (CN XII), the accessory nerve (CN XI), the vagus nerve (CN X), the glossopharyngeal nerve (CN IX), the facial nerve (CN VII), and the trigeminal nerve (CN V), among others. The present inventors have realized that the anterior and posterior cervical triangles are anatomic locations suitable for implantation of a neuromodulation system or component thereof. The present inventors have further realized that the locations include various anatomic structures suitable for coupling and therefore stabilizing a neuromodulation system or component thereof. For example, the present inventors have recognized that such coupling structures can include the hyoid bone110, the connective tissue sling of the hyoid bone110, the mandible116, the digastric tendon, the anterior or posterior portion of the digastric muscle112, the stylohyoid muscle304, the mylohyoid muscle114, the omohyoid muscle, or the SCM106. FIG.4illustrates generally a fourth anatomic example400that includes a partial side view that includes the anterior triangle104. The fourth anatomic example400illustrates an upper portion of the anterior triangle104and a portion of the upper neck, such as at or below a temporal bone424. A representation of a tongue406and of a portion of a jugular vein404is included for further context and reference. The fourth anatomic example400shows various nerves and vessels. The illustrated nerves include some, but not all, of the cranial nerves that can be targeted using the neuromodulation systems, devices, and methods discussed herein. For example, nerve targets in the fourth anatomic example400include a facial nerve402, a jugular vein404, a glossopharyngeal nerve412, a pharyngeal branch of vagus nerve414, a vagus nerve416, a hypoglossal nerve418, and a mandibular branch of the trigeminal nerve428, among others. The example ofFIG.4includes an example of an implantable therapy device426. The implantable therapy device426can be implanted in a patient in an upper portion of an anterior triangle104of a cervical region of the patient. For example, the implantable therapy device426can be implanted in one or more of the submental triangle202and the submandibular triangle206. In the example ofFIG.4, the implantable therapy device426can be coupled to various anatomical structures, such as a stylohyoid muscle410, a hyoid bone408, or other tendons or structures in the upper neck. The example ofFIG.4includes multiple leads coupled to the implantable therapy device426. For example, the implantable therapy device426can be coupled to a lower electrode lead420, an anterior electrode lead422, and an upper electrode lead430. The lower electrode lead420can be implanted at or near a neural target on the vagus nerve416, for example, in or adjacent to the carotid triangle302. In an example, the lower electrode lead420can be coupled to the SCM106or other structure at or near the vagus nerve416. The upper electrode lead430can be implanted at or near the facial nerve402, the mandibular branch of the trigeminal nerve428, or the glossopharyngeal nerve412, among others. In an example, the anterior electrode lead422can be implanted at or near a neural target on the hypoglossal nerve418. Various details of the implantable therapy device426and its associated leads are discussed herein, including in the example ofFIG.5. 
In an example, the various implantable devices and components thereof that are discussed herein can be coupled to various anatomic structures or tissues inside a patient body, such as to stabilize or maintain a device or component at a particular location and resist device movement or migration as the patient carries out their daily activities. In an example, coupling a device or component to tissue can include anchoring, affixing, attaching, or otherwise securing the device or component to tissue using a coupling feature. A coupling feature can include, but is not limited to, a flap or flange, such as for suturing to tissue (e.g., muscle, tendon, cartilage, bone, or other tissue). In an example, a coupling feature can include various hardware such as a screw or helical member that can be driven into or attached to tissue or bone. In an example, a coupling feature can include a cuff, sleeve, adhesive, or other component. In an example, one or multiple different coupling features can be used for different portions of the same neuromodulation system. For example, a suture can be used to couple a device housing to a tissue site, and a lead, such as coupled to the housing, can include a distal cuff to secure the lead at or near a neural target. FIG.5illustrates generally an example of a system500that can be configured to provide or control a neuromodulation therapy. The system500can include an implantable system502and an external system520. The implantable system502and the external system520can be communicatively coupled using a wireless coupling528. In an example, the wireless coupling528can enable power signal communication (e.g., unidirectionally from the external system520to the implantable system502), or can enable data signal communication (e.g., bidirectionally between the implantable system502and the external system520). In an example, the implantable system502or the external system520can be wirelessly coupled for power or data communications with one or more other devices, including other implantable or implanted devices, such as in the same patient body. In the example ofFIG.5, the implantable system502can include an antenna504, a sensor(s)506such as comprising one or more physiologic sensors, a stimulation lead(s)508, a processor circuit510, an ultrasonic transducer512, a power storage circuit514, a stimulation signal generator circuit516, and a memory circuit518, among other components or modules. In an example, the antenna504can include a telemetry antenna such as configured for data communication between the implantable system502and the external system520. In an example, the antenna504can include an antenna, such as an NFC coil, that is configured for wireless power communication between the implantable system502and the external system520or other external power source. The processor circuit510can include a general purpose or purpose-built processor. The memory circuit518can include a long-term or short-term memory circuit, such as can include instructions executable by the processor circuit510to carry out therapy or physiologic monitoring activities for the system500. In an example, the processor circuit510of the implantable system502is configured to manage telemetry or data signal communications with the external system520, such as using the antenna504or other communication circuitry. 
In an example, the stimulation signal generator circuit516includes an oscillator, pulse generator, or other circuitry configured to generate electrical signals that can provide electrostimulation signals to a patient body, or to power various sensors (e.g., including the sensor(s)506), or transducers (e.g., including the ultrasonic transducer512). In an example, the stimulation signal generator circuit516can be configured to generate multiple electrical signals to provide multipolar electrostimulation therapy to multiple neural targets, such as concurrently or in a time-multiplexed manner. The stimulation signal generator circuit516can be configured to use or provide different neurostimulation signals, such as can have different pulse amplitude, pulse duration, waveform, stimulation frequency, or burst pattern characteristics. The stimulation signal generator circuit516can be used to generate therapy signals for multiple different targets concurrently. For example, signals from the stimulation signal generator circuit516can be used to stimulate one cranial nerve target to efferent effect, and to stimulate a different nerve or branch to elicit an afferent response. In another example, one cranial nerve can be blocked while another nerve is stimulated. Other combinations can similarly be used. In an example, the stimulation lead(s)508can include one or more leads that are coupled to or integrated with a housing or header of the implantable system502. The stimulation lead(s)508can be detachable from the housing to facilitate replacement or repair. In an example, the stimulation lead(s)508can include electrostimulation hardware such as electrodes having various configurations, including cuff electrodes, flat electrodes, percutaneous electrodes or other configurations suitable for electrical stimulation of nerves or nerve bodies or branches. In an example, the stimulation lead(s)508can additionally or alternatively comprise other neuromodulation therapy hardware such as the ultrasonic transducer512, drug delivery means, or a mechanical actuator, such as can be configured to modulate neural activity. The leads and/or electrodes discussed herein can have various features that can facilitate placement at, and stimulation of, one or more neural targets. A lead can have one or more electrodes that can be used for nerve stimulation, nerve blocking, or nerve sensing. The electrodes can have various surface area and spacing (e.g., spacing from other electrodes, sensors, targets, etc.) to optimize for a particular function. In an example, an electrode can comprise various materials, including low-oxidation metals or metal alloys (e.g., platinum, platinum iridium, etc.) for use in implantable systems. In an example, an electrode can be treated or coated with another material such as to promote healing or enhance charge transfer to tissue. In an example, an electrode lead can comprise one or multiple electrodes, such as can have the same or different electrode characteristics. A lead can include, for example, a spiral electrode or cuff electrode. In such an example, one or more conductive surfaces can be exposed on an inside surface of a curved or spiral cuff assembly such as can comprise a portion of a lead body. In an example, a spiral cuff assembly (and hence, electrodes) can be designed to circumferentially wrap snugly around a body of a nerve and can be self-sizing. 
In an example, a cuff electrode can be configured to surround a particular target to thereby direct stimulation energy to the target from multiple different directions concurrently, such as while insulating the electrode from adjacent tissue. In an example, a surface electrode or electrode array can be used. In this example, one or more electrodes can be exposed on one side of a flat or round section of a lead body. An array of electrodes of various shapes, sizes, or other characteristics, can be provided to spatially control neuromodulation therapy delivery. In an example, electrode surfaces can be oriented toward a target nerve or other structure, such as to focus an electric field provided by the electrode or electrodes. Surface electrode leads can be surgically placed by exposing the target anatomy, or can be steered using, e.g., a catheter-based delivery system from a distal surgical access point. In an example, a percutaneous electrode can be used, such as including one or more electrodes exposed on a lead that is inserted into a blood vessel (or other conducting tissue in the vicinity of a neural target) using percutaneous techniques. A percutaneous lead can be navigated by a clinician, within or through vasculature, toward target nerves or neural structures that are in close proximity to the vasculature. In an example, electrodes on a percutaneous lead can be directly on the lead body or can comprise a percutaneous structure, such as a stent-like frame or scaffold, whereby the electrodes can be oriented towards the target and away from the blood in the vessel. In an example, a bifurcated lead can be used to provide electrodes at multiple different and spaced apart anatomical targets while using a single connection to a header. In an example, a modular lead can be used such as to extend or tailor a lead to accommodate a patient's anatomy or target structures. In an example, the stimulation lead(s)508can comprise one or more electrodes that can be provided or grouped together at a distal end of a lead, such as spaced apart from a housing, or the electrodes can be distributed along a length of the lead. In an example, a lead can include multiple different electrode groups of one or more electrodes provided at different locations along a length of the lead. Additionally, a housing of the various devices discussed herein can include one or more electrodes configured for use in electrostimulation delivery. Each of the electrodes in or coupled to the implantable system502can be separately addressable by neuromodulation therapy control or coordination circuitry to deliver a coordinated therapy to one or multiple targets. Various stimulation configurations can be used with any of the electrode or lead types discussed herein. In an example, different configurations can be used to provide or modify a stimulating electric field to thereby affect an extent and manner of neural excitation. The configurations can include, for example, unipolar, bipolar, and various combinations of multipolar configurations. In a bipolar or multipolar configuration, a guard electrode can be used to help steer excitation or inhibit neural activity. In an example, an electrode configuration can be dynamically changed, such as throughout the course of a particular therapy, such as through programming changes or during operation to achieve a particular therapy. 
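By way of illustration only, the following Python sketch shows one way the stimulation signal characteristics discussed above (pulse amplitude, pulse duration, stimulation frequency, burst pattern) and an electrode configuration (unipolar, bipolar, or multipolar) could be represented, together with a simple round-robin schedule for time-multiplexed delivery to two targets. All class names, field names, and numeric values are hypothetical and chosen only for illustration.

from dataclasses import dataclass
from enum import Enum
from itertools import cycle, islice

class Polarity(Enum):
    UNIPOLAR = "unipolar"
    BIPOLAR = "bipolar"
    MULTIPOLAR = "multipolar"

@dataclass(frozen=True)
class StimulationProgram:
    """One neurostimulation signal definition (values are illustrative only)."""
    target: str                 # e.g., "hypoglossal nerve" or "trigeminal V3 branch"
    polarity: Polarity
    cathode_electrodes: tuple   # electrode indices driven as cathodes
    anode_electrodes: tuple     # electrode indices driven as anodes (empty => housing return)
    pulse_amplitude_ma: float
    pulse_width_us: int
    frequency_hz: float
    burst_on_ms: int
    burst_off_ms: int

def time_multiplex(programs, n_slots):
    """Return programs in round-robin order, one program per delivery slot,
    as one simple way to serve multiple targets in a time-multiplexed manner."""
    return list(islice(cycle(programs), n_slots))

# Two illustrative programs served in alternating delivery slots.
p1 = StimulationProgram("hypoglossal nerve", Polarity.BIPOLAR, (0,), (1,),
                        1.5, 120, 30.0, 500, 500)
p2 = StimulationProgram("trigeminal V3 branch", Polarity.UNIPOLAR, (2,), (),
                        1.0, 90, 20.0, 500, 500)
schedule = time_multiplex([p1, p2], n_slots=6)

A concurrent delivery scheme could instead assign each program to its own output channel; the round-robin arrangement above is simply one compact way to express the time-multiplexed alternative described in this disclosure.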
In an example, the sensor(s)506can include, among other things, electrodes for sensing of electrical activity such as using electrocardiograms (ECGs), impedance, electromyograms (EMGs) of select muscles, and/or electroneurograms (ENGs) of target cranial nerves and branches. The sensor(s)506can include pressure sensors, photoplethysmography (PPG) sensors, chemical sensors (e.g., pH, lactate, glucose, etc.) or other sensors that can be used for physiologic sensing of cardiac, respiratory, or other physiologic activity. In an example, the sensor(s)506can include an accelerometer, gyroscope or geomagnetic sensor, such as can be configured to measure patient or device movement, vibration, position, or orientation information. Other examples of the sensor(s)506are discussed elsewhere herein, including in the discussion of the machine1700and the various I/O components1742, such as including the biometric components1732, motion components1734, and environmental components1736. In an example, information from the sensor(s)506can be received by the processor circuit510and used to update or titrate a neuromodulation therapy. In an example, the implantable system502can include one or more sensor(s)506, such as can be used in providing closed-loop neuromodulation therapy that is based at least in part on physiologic status information about a patient (e.g., respiration, heart rate, blood pressure, neural or muscular activation, or other information). In an example, the sensor(s)506can be used to receive diagnostic information, or to receive information about patient movement or body position. In an example, hypoglossal nerve stimulation, such as to treat OSA, can be controlled at least in part based on information from an accelerometer or gyroscope to determine patient respiration, patient activity, and body orientation or position, such as together with information from a pressure sensor about respiration. In other words, using information from the sensor(s)506, such as including accelerometer and pressure sensors, the implantable system502can control neuromodulation therapy provided to the hypoglossal nerve, such as can include stimulation during a particular time within a respiratory cycle, and can use body position information to automatically enable therapy when, for example, the patient is sleeping. In the example ofFIG.5, the external system520can include various components that can be provided together as a unitary external device or can include multiple devices configured to work together to manage a patient therapy, manage a device such as the implantable system502, or perform other functions associated with the implantable system502. The external system520can include an antenna522, a processor circuit524, and an interface526, among other components or modules. The antenna522can comprise one or multiple antennas such as can be configured for nearfield or farfield communications with, for example, the antenna504of the implantable system502, a different implantable device or system, or other external device. In an example, the antenna522and the antenna504can be used to exchange power or data between the implantable system502and the external system520. For example, information about a prescribed therapy can be uploaded from the external system520to the implantable system502, or information about a physiologic status, such as measured by the sensor(s)506, can be downloaded from the implantable system502to the external system520. 
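By way of illustration only, the following Python sketch outlines the kind of closed-loop gating logic described above for sensor-informed hypoglossal nerve stimulation, in which accelerometer-derived posture and activity, together with pressure-derived respiration phase, gate therapy delivery. The thresholds, field names, and the choice to stimulate during inspiration are assumptions made for illustration, not a prescribed algorithm.

from dataclasses import dataclass

@dataclass
class SensorState:
    """Snapshot of illustrative sensor inputs (names and units are assumptions)."""
    is_supine_or_reclined: bool   # derived from accelerometer orientation
    activity_level: float         # accelerometer-derived, arbitrary units
    respiration_phase: str        # "inspiration" or "expiration", from a pressure sensor

def therapy_enabled(state: SensorState, activity_threshold: float = 0.2) -> bool:
    """Gate therapy on posture and low activity, as a proxy for sleep."""
    return state.is_supine_or_reclined and state.activity_level < activity_threshold

def stimulate_this_cycle(state: SensorState) -> bool:
    """Deliver stimulation only during a chosen part of the respiratory cycle
    (inspiration here, purely as an illustrative choice)."""
    return therapy_enabled(state) and state.respiration_phase == "inspiration"

# Illustrative reading consistent with a sleeping, supine patient during inspiration.
reading = SensorState(is_supine_or_reclined=True, activity_level=0.05,
                      respiration_phase="inspiration")
assert stimulate_this_cycle(reading)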
The processor circuit524can include a general purpose or purpose-built processor configured to carry out various activities on the external system520or in coordination with the implantable system502. In an example, the processor circuit524of the external system520is configured to manage telemetry or data signal communications with the implantable system502, such as using the antenna522or other communication circuitry. The interface526can include a patient or clinician interface, such as to report device information or to receive instructions or therapy parameters for implementation by the implantable system502. In an example, the interface526can include an interface or gateway to facilitate communication between the implantable system502or the external system520and a patient management system or other medical record system. Other features, modules, and components of the implantable system502and the external system520can be included in the system500to help administer various neuromodulation therapies. In an example, the systems, devices, and components discussed herein, including at least the implantable system502and the external system520of the system500, can be used to provide neuromodulation therapy to nerve targets inside a patient body, such as to treat one or more disorders or diseases. In an example, the system500or components thereof can be configured to provide neuromodulation therapy to multiple nerve targets in a coordinated manner, such as concurrently, or in a time-multiplexed sequence. In an example, the neuromodulation therapy can include one or more, or combinations of, neural stimulation and blocking signals, such as can be directed to afferent or efferent nerve structures or targets to trigger different responses. The therapy can optionally include using vector-based stimulation configurations to target particular nerves or nerve regions, or can more selectively target relatively isolated nerve fibers. In an example, a coordinated neuromodulation therapy can include blocking at a first nerve target, while stimulating a second nerve target, or concurrently (or in time-sequence) stimulating multiple different nerve targets. In an example, the particular patient disorder or disease can dictate the particular neural target to modulate with a neuromodulation therapy. For example, to treat obstructive sleep apnea using the system500, various cranial nerves can be targeted individually or together, such as including the trigeminal nerve (e.g., the V3 mandibular branch of the trigeminal nerve428), the hypoglossal nerve418(e.g., including one or more branches thereof), the glossopharyngeal nerve412, the vagus nerve416, or the facial nerve402(e.g., including various extracranial branches thereof). In an example, the system500can be used to treat OSA by providing a neuromodulation therapy to or including the mandibular branch of the trigeminal nerve428and the hypoglossal nerve418. In this example, neuromodulation of the mandibular branch of the trigeminal nerve428can influence motor control of the mylohyoid muscle114or the anterior digastric muscle204, and neuromodulation of the hypoglossal nerve418can influence motor control of muscles in the tongue406. In an example, the system500can be used to treat OSA by providing a neuromodulation therapy to or including the facial nerve402and to the hypoglossal nerve418.
In this example, neuromodulation of the facial nerve402can influence motor control of the stylohyoid muscle304or the posterior digastric muscle208, and neuromodulation of the hypoglossal nerve418can influence motor control of muscles in the tongue406. In an example, the system500can be used to treat OSA by providing a neuromodulation therapy to or including the glossopharyngeal nerve412and the hypoglossal nerve418. In this example, neuromodulation of the glossopharyngeal nerve412can influence motor control of the stylopharyngeus muscle, and neuromodulation of the hypoglossal nerve418can influence motor control of muscles in the tongue406. In an example, the system500can be used to treat OSA by providing a neuromodulation therapy to or including various branches of the hypoglossal nerve418, including anterior branches, posterior branches, or multiple branches concurrently, including or using a bilateral configuration to target branches on opposite sides of the midline102of a patient. The neuromodulation of the hypoglossal nerve418can influence motor control of various muscles in the tongue406. In an example, neuromodulation therapy that includes stimulating or blocking the hypoglossal nerve418can be combined with therapy that targets one or more of the mandibular branch of the trigeminal nerve428(e.g., to influence motor control of the mylohyoid muscle114or the anterior digastric muscle204), the facial nerve402(e.g., to influence motor control of the stylohyoid muscle304or the posterior digastric muscle208), or the glossopharyngeal nerve412(e.g., to influence motor control of the stylopharyngeus muscle), among others. Any one or more branches of the hypoglossal nerve418can receive a neuromodulation therapy from the implantable system502. For example, any one or more of the posterior branches of the hypoglossal nerve418can receive neuromodulation, including for example “branches” off the hypoglossal nerve sheath such as the descending branch, also referred to as the superior root of the ansa cervicalis, the thyrohyoid branch, or the geniohyoid branch. Any one or more of the anterior branches of the hypoglossal nerve418can receive neuromodulation, including for example where a main trunk of the hypoglossal nerve418branches to the muscles of the tongue, also referred to as the muscular branch (B6), or including the muscular branch itself. The muscular branch can include sub-branches or nerve fibers that innervate specific muscles of the tongue. In an example, the system500can be used to treat OSA or other disorders or diseases such as heart failure, hypertension, atrial fibrillation, epilepsy, depression, stroke, autism, inflammatory bowel disease, chronic inflammation, chronic pain (e.g., in cervical regions, in the lower back, or elsewhere), tinnitus, or rheumatoid arthritis, among others, such as by providing a neuromodulation therapy to or including the vagus nerve416. Neuromodulation of the vagus nerve416can influence parasympathetic tone to thereby treat or alleviate symptoms associated with the various diseases or disorders mentioned, among others. In an example, a therapy that includes stimulation of the vagus nerve416can include therapy provided to one or more branches of the hypoglossal nerve418, the mandibular branch of the trigeminal nerve428, the facial nerve402, or the glossopharyngeal nerve412.
In an example, neuromodulation therapy that includes stimulating or blocking a portion of the vagus nerve416can be combined with therapy that targets one or more of the glossopharyngeal nerve412(e.g., to further influence parasympathetic tone), the carotid sinus (e.g., to stimulate a baroreceptor response), or the superior cervical ganglion or branches thereof (e.g., to influence sympathetic tone). In an example, a neuromodulation therapy for treatment of heart failure, hypertension, and/or atrial fibrillation can include therapy provided to or including one or more of the glossopharyngeal nerve412(e.g., to influence parasympathetic tone, such as via communication to the vagus nerve416), the superior cervical ganglion (e.g., to influence sympathetic tone), or the carotid sinus (e.g., to stimulate a baroreceptor response). In an example, the system500can be configured to treat heart failure, hypertension, migraine headaches, xerostomia, or other diseases or disorders by providing a neuromodulation therapy to or including the glossopharyngeal nerve412. Stimulation or blocking of the glossopharyngeal nerve412can, for example, influence parasympathetic tone or can affect motor activity of the stylopharyngeus muscle. In an example, the system500can be configured to treat drug-refractory epilepsy, depression, post-traumatic stress disorder (PTSD), migraine headaches, attention-deficit hyperactivity disorder (ADHD), or craniofacial pain syndrome, among other diseases and disorders, such as by providing a neuromodulation therapy to or including the mandibular branch of the trigeminal nerve428. In an example, the system500can be configured to treat craniofacial pain syndrome, or facial palsy, among other things, such as by providing a neuromodulation therapy to or including the facial nerve402, such as including various extracranial branches or roots thereof. In an example, the system500can be configured to treat fibromyalgia such as by providing a neuromodulation therapy to or including the spinal accessory nerve, such as to target the trapezius muscle, which is understood to be a potential trigger point for fibromyalgia. In an example, the system500can be configured to treat migraine headaches or tinnitus, such as by providing a neuromodulation therapy to or including a great occipital nerve, such as can be accessed using electrodes implanted in the cervical region of a patient. Neuromodulation therapies can thus be provided using the system500, or using components thereof, to treat a variety of different diseases or disorders. The therapies can include targeted, single-location stimulation or blocking (e.g., using electrical pulses, ultrasonic signals, or other energy) therapy at one of the locations mentioned herein (among others) or can include coordinated stimulation or blocking across or using multiple different locations. The following discussion illustrates several examples of different implantation locations and neural targets; however, others, including those specifically mentioned above, can similarly be used. In an example, the implantable system502can comprise various devices that can be implanted in various different areas of the body, including in a cervical region. The examples ofFIG.3, andFIG.6throughFIG.13, illustrate generally different examples of the implantable system502such as implanted in various different cervical locations. FIG.6illustrates generally a first example600that includes a first implantable device608implanted in the submandibular triangle206of a patient.
In the first example600, the first implantable device608can be coupled to an anatomic structure in the submandibular triangle206, such as using a suture, anchor, or other affixation means. In an example, the first implantable device608can be coupled to one or more of the mandible116, the anterior digastric muscle204, the posterior digastric muscle208, the mylohyoid muscle114, the digastric tendon602, or other bone, tendon, muscle, or other structure that is in or adjacent to the submandibular triangle206. In the example ofFIG.6, the first implantable device608can be provided near, but spaced apart from, a submandibular gland604of the patient. In the example ofFIG.6, the first implantable device608includes a first header610. The first header610can be used to couple one or multiple electrode leads, sensor leads, or other devices to the first implantable device608. For example, the first header610can be used to couple the first implantable device608to a first electrode lead606, and the first electrode lead606can be tunneled to a cranial nerve target. Electrodes configured to deliver electrostimulation signals to the nerve target can be situated at or adjacent to the target. In an example, the first electrode lead606can be tunneled to a hypoglossal nerve in or near an anterior cervical region of a patient. In the example ofFIG.6, the first implantable device608is shown with one header. The first implantable device608can optionally include multiple headers to interface the first implantable device608with one or multiple other leads, such as electrode leads, sensor leads, communication coils, or other devices. Referring again toFIG.4, for example, the implantable therapy device426can include multiple headers, such as coupled to the respective different leads that extend from opposite sides of a body of the implantable therapy device426. FIG.7illustrates generally a second example700that includes a second implantable device702implanted in the submandibular triangle206of a patient. In the second example700, the second implantable device702can be coupled to an anatomic structure in the submandibular triangle206, such as using a suture, anchor, or other affixation means. In an example, the second implantable device702can be coupled to one or more of the mandible116, the anterior digastric muscle204, the posterior digastric muscle208, the mylohyoid muscle114, or other bone, tendon, muscle, or other structure that is in or adjacent to the submandibular triangle206. The example of the second implantable device702includes an elongate housing structure with respective headers on opposite side ends of the device. For example, the second implantable device702includes a first header704coupled to the first electrode lead606, such as can be tunneled to a first cranial nerve target. The second implantable device702can include a second header706coupled to a second electrode lead712and to a first data and power communication lead714. The second electrode lead712can be coupled to a second cranial nerve target. In an example, the first data and power communication lead714can couple the second implantable device702to a wireless communication coil710. The wireless communication coil710can be configured to facilitate data or power signal communication with a wireless external device, such as external to the patient. In an example, the wireless communication coil710comprises the antenna504that can be used to communicate with the external system520. 
Power or data signals received using the wireless communication coil710can be communicated to the second implantable device702and stored or used. In the example ofFIG.7, the wireless communication coil710can be coupled or mounted to a first coil support708. The first coil support708and the wireless communication coil710can comprise a flexible structure that can be positioned at or near a tissue interface of a patient, such as under the skin and adjacent to muscle, bone, or other tissue. For example, the first coil support708can be provided at or adjacent to a surface of the anterior digastric muscle204and facing away from the patient body. In another example, the first coil support708can be provided interiorly to the anterior digastric muscle204, or behind the anterior digastric muscle204in the view ofFIG.7. The first coil support708can be otherwise oriented elsewhere in the anterior triangle104of the patient and can be coupled to the second implantable device702by tunneling the first data and power communication lead714. For example, the first coil support708can be provided under a chin region, such as at or near a tip of the submental triangle202of the patient, away from the hyoid bone110. FIG.8illustrates generally a third example800that includes a submental implantable device802implanted in the submental triangle202of a patient. In the third example800, the submental implantable device802can be coupled to an anatomic structure in the submental triangle202, such as using a suture, anchor, or other affixation means. In an example, the submental implantable device802can be coupled to one or more of the mylohyoid muscle114, the anterior digastric muscle204, the hyoid bone110, or other bone, tendon, muscle, or other structure that is in or adjacent to the submental triangle202. The submental implantable device802can be installed adjacent to, or at least partially under, the anterior digastric muscle204, such as between the anterior digastric muscle204and the underlying mylohyoid muscle114. The example of the submental implantable device802includes an elongate housing structure with at least one header on a first side end of the device. In the example ofFIG.8, the submental implantable device802is coupled to an electrode lead806that can be tunneled to a first cranial nerve target. The submental implantable device802can be coupled to a submandibular communication coil804, such as using a second data and power communication lead808. In the example ofFIG.8, the submandibular communication coil804can be coupled or mounted to a second coil support810. The second coil support810and the submandibular communication coil804can comprise a flexible structure that can be positioned at or near a tissue interface of a patient, such as under the skin and adjacent to muscle, bone, or other tissue. For example, the second coil support810can be provided at or adjacent to a surface of at least one of the posterior digastric muscle208and the anterior digastric muscle204, and can be oriented such that the submandibular communication coil804faces away from the patient body. In another example, the first coil support708can be provided interiorly to the digastric muscles, such as adjacent to the mylohyoid muscle114. FIG.9illustrates generally a fourth example900that includes a third implantable device902implanted in the carotid triangle302of a patient.
In the fourth example900, the third implantable device902can be coupled to an anatomic structure in the carotid triangle302, such as using a suture, anchor, or other affixation means. In an example, the third implantable device902can be coupled to one or more of the SCM106, the omohyoid muscle306, the hyoid bone110, or other bone, tendon, muscle, or other structure that is in or adjacent to the carotid triangle302. The example of the third implantable device902includes an elongate housing structure with at least one header on a first side end of the device. In the example ofFIG.9, the third implantable device902is coupled to a multipolar electrode lead904that can be tunneled to a cranial nerve target. For example, an electrode array906of the multipolar electrode lead904can be disposed at or near a nerve target (or targets) outside of the carotid triangle302, and the multipolar electrode lead904can be tunneled to the carotid triangle302to couple with the third implantable device902. In an example, the electrode array906can be provided at or near a hypoglossal nerve418of the patient, such as in or near the submandibular triangle206. FIG.10illustrates generally an example of a first segmented device1000. The first segmented device1000can be an implantable device that is configured for implantation at or in an anterior cervical region of a patient. For example, the first segmented device1000can be configured to be implanted in one or multiple different triangles of the cervical region, as further described below. That is, different segments or portions of the first segmented device1000can be implanted in respective different triangles in a cervical region of a patient. In an example, the first segmented device1000comprises the implantable system502from the example ofFIG.5. The first segmented device1000includes a first housing1004and a second housing1006that can be connected using a flexible housing coupling1014. The first segmented device1000can include a first cuff electrode1002(e.g., comprising one or multiple electrodes) that is coupled to the first housing1004using an electrode lead1010. The first segmented device1000can further include a communication coil1008, such as can be electrically coupled to the second housing1006using a power and data lead1016. The communication coil1008can be coupled to a support member1012that can help maintain the coil in a configuration suitable for wireless communications with an external transmitter. In an example, the support member1012can include one or more mounting features1018to couple the support member1012, and therefore the communication coil1008, to an anatomical structure inside a patient body. For example, the mounting feature1018can include one or more through-holes in the support member1012that can be used to suture the support member1012to a tissue site. In an example, the support member1012can comprise a flexible, irregularly shaped flap configured for implantation and avoidance of particular structures, such as a submandibular gland or nerve to the mylohyoid. The flap can help couple the support member1012superiorly. In an example, the second housing1006comprises a power storage circuit, such as can comprise the power storage circuit514from the example ofFIG.5. The power storage circuit can comprise a battery, a capacitor bank, or other means to store electrical power, such as can be received wirelessly using the communication coil1008. 
In an example, the various leads and couplings that comprise the first segmented device1000can include one or more electrical conductors. Power signals, electrostimulation signals, or other signals can be communicated among the different portions of the first segmented device1000using the electrical conductors. For example, the housing coupling1014can include a power conductor such that a battery in the second housing1006can be used to power electrostimulation control circuitry in the first housing1004. In an example, the first segmented device1000can comprise component parts that can be organized in various different configurations, such as to optimize implantation or to configure the device to best match a particular patient anatomy. That is, the device can be configured to accommodate anatomic variations among different patients. For example, different lead lengths can be selected, or the orientation or position of the different components along the signal chain can be adjusted. In an example, the first housing1004or the second housing1006can use headers to connect with the various leads, or the first housing1004and the second housing1006can be integrated (e.g., attached at a point of manufacture rather than at a time of implantation) with their respective leads. By using a modular approach, component parts can be surgically updated or upgraded. In an example, respective portions of the first segmented device1000can be configured for implantation in the submandibular triangle206and in the submental triangle202of a patient. That is, the first segmented device1000can be configured to extend between the triangle regions, such as across a portion of a digastric muscle. Providing the portions of the first segmented device1000in different triangles of the cervical region can help minimize interference between the first segmented device1000and patient movement, such as due to activity of the digastric muscles. In an example, the first housing1004and the second housing1006can be differently sized such that a larger of the two housings can be disposed in the particular triangular region that offers more space. Such a distributed arrangement or implantation of the components of the first segmented device1000can be helpful in maintaining patient comfort since muscles in the cervical region can be used for complex movement of the head, neck, mouth, tongue, and other areas. FIG.11illustrates generally an example of a submandibular implantable device1100. The submandibular implantable device1100can be an implantable device that is configured for implantation at or in an anterior cervical region of a patient. For example, the submandibular implantable device1100can be configured to be implanted in one or multiple different triangles of the cervical region, as further described below. That is, different segments or portions of the submandibular implantable device1100can be implanted in the same triangle region or in respective different triangle regions in a cervical region of a patient. The submandibular implantable device1100includes an implantable device housing1102that can include, among other things, power storage circuitry, electrostimulation generation circuitry, and control circuitry. Circuitry in the implantable device housing1102can be coupled to an electrode assembly1116using a power, data, and therapy signal lead1118, and the electrode assembly1116can be used to provide neuromodulation signals at a cervical neural target in a patient body.
In an example, the circuitry in the implantable device housing1102can be coupled to the electrode assembly1116via a device header1110. In an example, the implantable device housing1102can be coupled to a communication coil1106using one or more conductors in the power, data, and therapy signal lead1118. The communication coil can include a power communication coil and/or a telemetry antenna. In an example, the communication coil1106can be coupled to a support member1114. The support member1114can include one or more support mounting features1108for coupling the support member1114to tissue. In an example, the support member1114can include a housing mount1104that is configured to receive or couple with the implantable device housing1102. That is, the support member1114can include a mounting structure or feature that can be configured to secure or retain the implantable device housing1102together with the support member1114. In an example, the implantable device housing1102can include various features that are configured to mate with, or to be used together with, the housing mount1104. For example, the housing mount1104can include suture holes, and the device mounting feature1112can comprise a through-hole or groove that is configured to receive a suture therein. A suture can then be used to join the implantable device housing1102to the support member1114using the housing mount1104. In an example, the support member1114can be configured to be coupled or otherwise affixed to the mylohyoid muscle114, such as in or near the submental triangle202or the submandibular triangle206of a patient. In an example, the device mounting feature1112can be configured to receive one or more sutures, bands, or flaps that are configured to loop around structures like a digastric tendon or a hyoid bone or other connective tissue, and can affix back to itself, thereby coupling the implantable device housing1102to a stable piece of the anatomy. FIG.12illustrates generally a first submandibular triangle example1200that can include or use the submandibular implantable device1100from the example ofFIG.11. In the example, portions of the submandibular implantable device1100can be implanted in a submandibular triangle region of a neck, such as between the anterior digastric muscle204and the posterior digastric muscle208. In the example ofFIG.12, the support member1114of the submandibular implantable device1100can be coupled to one or more anatomic structures in the submandibular triangle. For example, the support member1114can be coupled to the anterior digastric muscle204using an anterior suture1206, or to the posterior digastric muscle208using a posterior suture1202, or to the mylohyoid muscle114, such as using one or more other sutures. The implantable device housing1102can be coupled to the same digastric structures as the support member1114, or can be coupled to other anatomic structures in the submandibular triangle. For example, the implantable device housing1102can be coupled to the mylohyoid muscle114, such as using a housing-tissue anchor1204. In an example, the housing-tissue anchor1204can include one or more sutures that can wrap around or through a portion of the implantable device housing1102and the muscle tissue, to thereby affix the housing-tissue anchor1204to tissue inside the submandibular triangle. FIG.13illustrates generally a second submandibular triangle example1300that can include or use the submandibular implantable device1100from the example ofFIG.11.
In the example, the implantable device housing1102can be coupled to the support member1114, such as using the housing mount1104. The assembly that includes the support member1114and the implantable device housing1102can be implanted in a submandibular triangle region of a neck, such as between the anterior digastric muscle204and the posterior digastric muscle208. In an example, the support mounting features1108of the support member1114can be used to couple respective sides of the assembly to the anterior digastric muscle204and the posterior digastric muscle208. The example ofFIG.13illustrates the implantable device housing1102coupled to an outward-facing first surface of the support member1114. That is,FIG.13shows the implantable device housing1102facing toward skin or away from other internal cervical structures. In an example, the implantable device housing1102can be coupled to an opposite second surface of the support member1114, such as facing inward toward the mylohyoid muscle114and other internal cervical structures. The implantable device housing1102can be coupled to the support member1114using, for example, a housing-support anchor1302, such as can include a suture, clip, cuff, or other means for coupling a flexible support substrate of the support member1114to a structural housing. The examples ofFIG.12andFIG.13illustrate generally the submandibular implantable device1100with the power, data, and therapy signal lead1118extending away from the submandibular triangle to a hypoglossal nerve target. One or more other nerve targets can similarly be accessed using one or more other leads, such as using the same support member1114and implantable device housing1102and circuitry therein. FIG.14illustrates generally an example of a method1400that can include providing a neuromodulation therapy to multiple cranial nerves. The method1400can optionally include or use the system500or other system configured for modulation of a nerve stimulation or blocking therapy. At block1402, the method1400can include providing an implantable neuromodulation device in an anterior cervical region of a patient. For example, block1402can include implanting the implantable system502(or one or more components thereof) in one or more of the submental triangle202, the submandibular triangle206, or the carotid triangle302in an anterior portion of a patient neck. In an example, block1402can include implanting or coupling multiple different housings that comprise portions of the system500to various anatomic structures that are in or that border the various triangle regions in the anterior portion of the patient neck. At block1404, the method1400can include providing a first lead, such as an electrode lead (e.g., a first instance of a stimulation lead(s)508), at or near a first cranial nerve target in the patient. Block1404can include coupling the electrode lead to signal generator circuitry in a housing such as implanted with the neuromodulation device at block1402. In an example, block1404can include implanting a lead with electrodes that are disposed at or near one or more of the hypoglossal nerve418, the glossopharyngeal nerve412, the facial nerve402, the mandibular branch of the trigeminal nerve428, the vagus nerve416, or elsewhere in or near the head or neck of the patient. In an example, the method1400can include, at block1406, providing a second lead, such as an electrode lead (e.g., a second instance of a stimulation lead(s)508), at or near a second cranial nerve target in the patient.
Block1406can include coupling the electrode lead to signal generator circuitry in a housing such as implanted at block1402. In an example, block1406can include implanting a lead with electrodes that are disposed at or near one or more of the hypoglossal nerve418, the glossopharyngeal nerve412, the facial nerve402, the mandibular branch of the trigeminal nerve428, the vagus nerve416, or elsewhere in or near the head or neck of the patient. At block1408, the method1400can include applying a first neuromodulation therapy to the first cranial nerve target, such as using first electrical signals from the signal generator circuitry (e.g., using the stimulation signal generator circuit516) and using electrodes of the first electrode lead. In an example, the therapy can include electrical signals that are configured to treat a particular patient disorder, such as can include one or more of OSA, heart failure, hypertension, or one or more other disorders discussed herein, among others. At block1410, the method1400can include applying a second neuromodulation therapy to the second cranial nerve target, such as using second electrical signals from the signal generator circuitry (e.g., using the stimulation signal generator circuit516) and using electrodes of the second electrode lead. In an example, applying the first neuromodulation therapy at block1408and applying the second neuromodulation therapy at block1410can comprise portions of a common therapy that is configured to treat the same disorder or multiple disorders. Some examples of implantable device housings for cervical implantation are generally represented herein as elongate, prismatic or cylindrical structures. The housings can include enclosures that can be hermetically sealed to protect electronics, circuitry, or other contents from the internal environment of a human body. The housings can be sized and configured to occupy a minimal volume, for example, to enhance patient comfort, or to reduce a risk of infection or complication during implantation, among other reasons. In an example, a housing can be configured (e.g., sized, shaped, oriented) according to one or more characteristics of an implantation destination. For example, a shape of a housing can optionally be based on characteristics of a triangle in a cervical region of a patient. For example, differently shaped housings can be configured for use in the submental triangle202and in the submandibular triangle206. In an example, a housing for use in a triangle region can include a tapered structure such that, when the housing is implanted, the housing contours generally match or follow corresponding anatomical contours in the cervical region. FIG.15, for example, illustrates generally a tapered housing1500for a device to be implanted in or near a triangular cervical region of a patient. The tapered housing1500can include a tapered structure, such as a rectangular frustum. The illustrated example of the tapered housing1500includes a base surface1510and an opposite top surface1508. The tapered housing1500can include tapered sidewalls1502, such as can include trapezoidal portions, that can extend between the base surface1510and the top surface1508. In an example, a surface area of the top surface1508can be less than a surface area of the base surface1510. One or more headers can be coupled to or integrated with the tapered housing1500, including at or along any one or more of the side surfaces, base surface1510, and the top surface1508. 
In an example, the tapered housing1500can be configured for implantation inside of at least a portion of the anterior triangle104of a patient. In an example, the base surface1510can be configured for implantation at or adjacent to a portion of the mylohyoid muscle114, such that a tapered portion of the housing structure extends away from the mylohyoid muscle114. In an example, the tapered housing1500can include elongated tapered sidewalls1502, and a surface characteristic of at least one of the sidewalls can be sized or configured to correspond to, or fit partially or entirely within, contours of a triangle region of the neck, such as within the submandibular triangle206, the submental triangle202, or the carotid triangle302. For example, the first implantable device608from the example ofFIG.6can include a tapered housing with a base portion provided adjacent to the posterior digastric muscle208, and sidewalls that extend toward a region where the mandible116and anterior digastric muscle204are proximal or substantially adjacent, such that the device can occupy the submandibular triangle206. In other words, the device can include a base portion that is sized and configured to correspond to or match a length or width characteristic of the posterior digastric muscle208(e.g., between the mandible116and the hyoid bone110). The device can include a sidewall that is configured to correspond to or match a length or width characteristic of the anterior digastric muscle204(e.g., between the hyoid bone110and the mandible116), or the device can include a sidewall that is configured to correspond to or match a length or width characteristic of a lower edge portion of the mandible116(e.g., between the posterior digastric muscle208and the anterior digastric muscle204). In an example, the third implantable device902from the example ofFIG.9can include a tapered housing with a base portion provided, for example, adjacent to the omohyoid muscle306, and sidewalls that extend toward a region where the SCM106and the posterior digastric muscle208are proximal or substantially adjacent. In other words, the device can include a base portion that is sized and configured to correspond to or match a length or width characteristic of the portion of the omohyoid muscle306(e.g., a portion of the omohyoid muscle306that is inside the carotid triangle302). The device can include a sidewall that is configured to correspond to or match a length or width characteristic of the SCM106(e.g., a portion of the SCM106that is inside the carotid triangle302), or the device can include a sidewall that is configured to correspond to or match a length or width characteristic of the posterior digastric muscle208(e.g., a portion of the posterior digastric muscle208bounding the carotid triangle302, such as between the hyoid bone110and the SCM106). Accordingly, the third implantable device902can be configured with a housing that occupies the carotid triangle302. In other examples, the tapered housing1500can be configured for implantation at or adjacent to various other muscles, tendons, bones, or tissues, such as at or adjacent to a portion of the digastric muscle112, the SCM106, the omohyoid muscle306, or other tissue. Such devices or housings can be configured to occupy all or substantially all of a space available in a triangle region in a neck, such as the submandibular triangle206, the submental triangle202, or the carotid triangle302, among others. 
The example ofFIG.15illustrates the tapered housing1500as including various abrupt edges or vertices. One or more of the edges or vertices, or adjacent surfaces, can optionally be chamfered or rounded. In an example, the tapered housing1500can include a base or top surface that is at least partially rounded, such that the housing structure is at least partially (or entirely) a conical frustum. In an example, the top surface1508or the base surface1510can be non-planar, and the top surface1508and the base surface1510can be at least partially non-parallel. In an example, the tapered housing1500can include various headers on one or more of the surfaces or faces of the housing. In the example ofFIG.15, the tapered housing1500includes a first header1504and a second header1506. The headers can be configured to couple circuitry, sensors, or other components inside of the tapered housing1500with leads or other devices outside of the tapered housing1500. In the example ofFIG.15, the first header1504and the second header1506are provided on adjacent side surfaces of the housing; other positions for the headers can similarly be used. In an example, a position of one or more of the headers can be influenced or determined by an implantation location or a nerve target location. FIG.16illustrates generally an example of a cylindrical housing1600for a device to be implanted in or near a triangular cervical region of a patient. The cylindrical housing1600can include a capsule-shaped structure, such as including a cylinder that extends along a longitudinal axis and includes rounded ends or caps. The cylindrical housing1600can enclose signal generator circuitry1606and can have multiple headers, such as a first header1602and a second header1604, for interfacing the signal generator circuitry1606with various leads. The first header1602and the second header1604can be disposed at opposite ends of the device or multiple headers can be provided on one end. In an example, the cylindrical housing1600can be configured for implantation along a portion of an anatomic target. For example, the cylindrical housing1600can be configured to be coupled to a tissue target in a triangular region. For example, the cylindrical housing1600can be configured to be coupled to the anterior digastric muscle204, or to the posterior digastric muscle208, or to the SCM106. In an example, the cylindrical housing1600can be configured to be coupled to the SCM106inside of the carotid triangle302, and the cylindrical housing1600can be coupled to a lead that extends outside of the carotid triangle302, such as similarly described above in the example ofFIG.9. FIG.17is a diagrammatic representation of a machine1700within which instructions1708(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine1700to perform any one or more of the methodologies discussed herein may be executed. The machine1700can optionally comprise the implantable system502, the external system520, or components or portions thereof, or components or devices that can be coupled to at least one of the implantable system502and the external system520. In an example, the instructions1708may cause the machine1700to execute any one or more of the methods, controls, therapy algorithms, signal generation routines, or other processes described herein. 
The instructions1708transform the general, non-programmed machine1700into a particular machine1700programmed to carry out the described and illustrated functions in the manner described. The machine1700may operate as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine1700may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine1700can comprise, but is not limited to, various systems or devices that can communicate with the implantable system502or the external system520, such as can include a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a PDA, an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions1708, sequentially or otherwise, that specify actions to be taken by the machine1700. Further, while only a single machine1700is illustrated, the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions1708to perform any one or more of the methodologies discussed herein. The machine1700may include processors1702, memory1704, and I/O components1742, which may be configured to communicate with each other via a bus1744. In an example embodiment, the processors1702(e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an ASIC, a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor1706and a processor1710that execute the instructions1708. The term “processor” is intended to optionally include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. AlthoughFIG.17shows multiple processors1702, the machine1700may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof. The memory1704includes a main memory1712, a static memory1714, and a storage unit1716, each accessible to the processors1702via the bus1744. The main memory1712, the static memory1714, and the storage unit1716store the instructions1708embodying any one or more of the methodologies or functions described herein. The instructions1708may also reside, completely or partially, within the main memory1712, within the static memory1714, within a machine-readable medium1718within the storage unit1716, within at least one of the processors1702(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine1700. The I/O components1742may include a variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on.
The specific I/O components1742that are included in a particular machine will depend on the type of machine. For example, portable machines such as device programmers or mobile phones may include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components1742may include other components that are not shown inFIG.17. In various example embodiments, the I/O components1742may include output components1728and input components1730. The output components1728may include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The input components1730may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), physiologic sensor components, and the like. In further example embodiments, the I/O components1742may include biometric components1732, motion components1734, environmental components1736, or position components1738, among others. For example, the biometric components1732can include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components1734can include an acceleration sensor (e.g., an accelerometer), gravitation sensor components, rotation sensor components (e.g., a gyroscope), or similar. The environmental components1736can include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components1738can include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication may be implemented using a wide variety of technologies.
The I/O components1742further include communication components1740operable to couple the machine1700to a network1720or other devices1722via a coupling1724and a coupling1726, respectively. For example, the communication components1740may include a network interface component or another suitable device to interface with the network1720. In further examples, the communication components1740may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth components, or Wi-Fi components, among others. The devices1722may be another machine or any of a wide variety of peripheral devices such as can include other implantable or external devices. The various memories (e.g., memory1704, main memory1712, static memory1714, and/or memory of the processors1702) and/or storage unit1716can store one or more sets of instructions and data structures (e.g., software) embodying or used by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions1708), when executed by processors1702, cause various operations to implement the disclosed embodiments, including various neuromodulation or neurostimulation therapies or functions supportive thereof. The following Aspects provide a non-limiting overview of the neuromodulation systems, methods, and devices discussed herein. Aspect 1 can include, or can optionally be combined with the subject matter of one or any combination of the following Aspects, to include or use subject matter (such as an apparatus, a method, a means for performing acts, or a machine readable medium including instructions that, when performed by the machine, can cause the machine to perform acts), such as can include or use an implantable system for neuromodulation of cranial nerves, the system comprising a first housing configured for implantation in an anterior cervical region of a patient, at or under a mandible of the patient, a first electrode lead coupled to the first housing, the first electrode lead comprising at least one electrode configured to be disposed at or near a first cranial nerve target in the patient, and a signal generator circuit provided in the first housing and configured to generate electrical neuromodulation signals for delivery to the cranial nerve target using the at least one electrode of the first electrode lead. The neuromodulation signals can be configured to treat a breathing disorder or a sleep disorder of the patient, among other disorders, such as can be treated using a neuromodulation therapy applied to a cranial nerve or other nerve. Aspect 2 can include or use, or can optionally be combined with the subject matter of Aspect 1, to optionally include the neuromodulation signals generated by the signal generator circuit configured to treat obstructive sleep apnea. Aspect 3 can include or use, or can optionally be combined with the subject matter of Aspect 2, to optionally include the first cranial nerve target comprising a main body of a hypoglossal nerve of the patient or a branch of the hypoglossal nerve of the patient.
Aspect 4 can include or use, or can optionally be combined with the subject matter of Aspect 3, to optionally include or use a second electrode lead coupled to the first housing, the second electrode lead comprising at least one electrode configured to be disposed at or near a second cranial nerve target in the patient, and the signal generator circuit can be configured to generate respective neuromodulation signals for delivery to the first and second cranial nerve targets using electrodes on the first and second electrode leads to treat obstructive sleep apnea or one or more other diseases or disorders. Aspect 5 can include or use, or can optionally be combined with the subject matter of Aspect 4, to optionally include the second cranial nerve target comprising a branch of a trigeminal nerve of the patient. Aspect 6 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 4 or 5, to optionally include the second cranial nerve target comprising a branch of a facial nerve of the patient. Aspect 7 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 4 through 6 to optionally include the second cranial nerve target comprising a ganglion or a branch of a glossopharyngeal nerve of the patient. Aspect 8 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 4 through 7 to optionally include the signal generator circuit configured to provide the neuromodulation signals concurrently to the electrodes of the first and second electrode leads. Aspect 9 can include or use, or can optionally be combined with the subject matter of Aspect 8, to optionally include or use an electrostimulation vector such as can be produced in response to a first one of the neuromodulation signals. The vector can be configured to modify a different electrostimulation vector such as can be produced in response to a second one of the neuromodulation signals. Aspect 10 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 4 through 9 to optionally include or use the signal generator circuit configured to provide the neuromodulation signals to respective electrodes of the first and second electrode leads in a time-multiplexed manner. Aspect 11 can include or use, or can optionally be combined with the subject matter of Aspect 10, to optionally include or use the signal generator circuit configured to provide the neuromodulation signals as electrical signal pulses that are at least partially overlapping in time. Aspect 12 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 1 through 11 to optionally include the first cranial nerve target comprising a neural pathway that influences activity of one or more of tongue muscles, mylohyoid muscles, stylohyoid muscles, digastric muscles, or stylopharyngeus muscles of the patient. In the example of Aspect 12, the electrical neuromodulation signals can be configured to treat obstructive sleep apnea or another disorder for the patient. Aspect 13 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 1 through 12 to optionally include the first cranial nerve target in the patient comprising an anterior or posterior branch of a hypoglossal nerve of the patient. 
In Aspect 13, the first electrode can be configured to be implanted at or near the anterior or posterior branch of the hypoglossal nerve of the patient. Aspect 14 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 1 through 13 to optionally include or use first and second electrodes disposed at different locations along a length of the first electrode lead. In the example of Aspect 14, the first cranial nerve target in the patient can include anterior and/or posterior branches of a hypoglossal nerve of the patient, and the first and second electrodes can be configured to provide neuromodulation signals to the anterior and/or posterior branches of the hypoglossal nerve. Aspect 15 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 1 through 14 to optionally include or use a second electrode lead coupled to the first housing, and the second electrode lead can include at least one electrode configured to be disposed at or near a second cranial nerve target in the patient. In Aspect 15, the first and second cranial nerve targets can be on opposite sides of a sagittal midline of the patient. Aspect 16 can include or use, or can optionally be combined with the subject matter of Aspect 15, to optionally include or use the first housing comprising first and different second hermetic enclosures that are electrically coupled. Aspect 17 can include or use, or can optionally be combined with the subject matter of Aspect 16, to optionally include or use the first and second electrode leads coupled to the first and different second hermetic enclosures, respectively. Aspect 18 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 16 or 17 to optionally include or use the first hermetic enclosure comprising a power storage device, and the second hermetic enclosure comprising the signal generator, and the first and second electrode leads coupled to the second hermetic enclosure. Aspect 19 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 16 through 18 to optionally include or use the first and different second hermetic enclosures implanted on opposite sides of a sagittal midline of the patient. Aspect 20 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 16 through 19 to optionally include or use the first and second hermetic enclosures configured to be implanted in respective different anterior triangle regions. Aspect 21 can include or use, or can optionally be combined with the subject matter of Aspect 20, to optionally include the first hermetic enclosure implanted in a submandibular triangle region of the patient, and the second hermetic enclosure implanted in a muscular triangle region of the patient. In the example of Aspect 21, the submandibular triangle region can be bounded by a body of a mandible and by anterior and posterior portions of a digastric muscle of the patient, and the muscular triangle region of the patient can be bounded by a hyoid bone, a sagittal midline, an omohyoid muscle, and an inferior portion of a sternocleidomastoid muscle of the patient. 
Aspect 22 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 20 or 21 to optionally include or use the first hermetic enclosure configured to be implanted in a carotid triangle region and the second hermetic enclosure can be configured to be implanted in one of a submandibular triangle region and a submental triangle region of the patient. Aspect 23 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 1 through 22 to optionally include or use a wireless communication coil coupled to a power management circuit in the first housing, and the wireless communication coil can be configured to be disposed in the anterior cervical region of the patient or outside of the anterior cervical region of the patient. Aspect 24 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 1 through 22 to optionally include or use a wireless communication coil coupled to a power management circuit in the first housing, and the wireless communication coil can be configured to be disposed on a mandible of the patient. Aspect 25 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 1 through 22 to optionally include or use the first housing and a wireless communication coil, such as with the coil coupled to circuitry inside the first housing, and configured to be implanted in respective different anterior triangle regions of the patient. Aspect 26 can include or use, or can optionally be combined with the subject matter of Aspect 25, to optionally include or use a support member for the wireless communication coil, and the support member can be configured to be coupled to anterior and posterior portions of a digastric muscle of the patient. Aspect 27 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 25 or 26 to optionally include or use a support member for the wireless communication coil, and the support member can be configured to be coupled to a mylohyoid muscle of the patient. Aspect 28 can include, or can optionally be combined with the subject matter of one or any combination of the other Aspects herein to include or use subject matter (such as an apparatus, a method, a means for performing acts, or a machine readable medium including instructions that, when performed by the machine, that can cause the machine to perform acts), such as can include or use an implantable neuromodulation system comprising an elongate first housing configured for implantation in an anterior cervical region of a patient, and a first electrode lead coupled to the first housing and configured to be disposed in a submandibular region. In Aspect 28, at least one electrode on the first electrode lead can be configured to be disposed at or near a first branch of a hypoglossal nerve of the patient, and electrostimulation generation and control circuitry disposed in the first housing can be configured to provide electrostimulation signals to the patient using the first electrode lead. The electrostimulation signals can be configured to treat a sleep disorder or breathing disorder of the patient, among other disorders. 
Aspect 29 can include or use, or can optionally be combined with the subject matter of Aspect 28, to optionally include or use the first housing configured for implantation in a submental triangle of the anterior cervical region of the patient. Aspect 30 can include or use, or can optionally be combined with the subject matter of Aspect 29, to optionally include or use a second electrode lead coupled to the first housing and configured to be disposed in the submandibular region. In Aspect 30, at least one electrode on the second electrode lead can be configured to be disposed at or near a second branch of the hypoglossal nerve of the patient. Aspect 31 can include or use, or can optionally be combined with the subject matter of Aspect 30, to optionally include or use electrodes on the first and second electrode leads configured to be disposed at or near different positions of anterior and/or posterior branches of the hypoglossal nerve. Aspect 32 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 30 or 31 to optionally include or use electrodes on the first and second electrode leads configured to be disposed on respective different sides of a sagittal midline of the patient, and the electrostimulation generation and control circuitry can be configured to provide a bilateral electrostimulation therapy to the branches of the hypoglossal nerves. Aspect 33 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 28 through 32 to optionally include or use the first housing comprising a cylindrical housing structure having a longitudinal axis, and the first housing can be configured for implantation at or adjacent to a mandible of the patient. Aspect 34 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 28 through 33 to optionally include or use the first housing comprising a rectangular frustum structure with a base surface configured to be oriented posteriorly in the submandibular region, and a top surface configured to be oriented anteriorly in the submandibular region, and an area of the base surface can exceed an area of the top surface. Aspect 35 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 28 through 33 to optionally include or use the first housing comprising sidewalls that are contoured to correspond to contours of an anatomic triangle in the submandibular region. Aspect 36 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 28 through 35 to optionally include or use an anchor configured to physically and mechanically couple a base portion of the first housing to a mandible. Aspect 37 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 28 through 33 to optionally include or use the first housing comprising a truncated prism structure with a base portion that can be configured to be oriented adjacent to at least one of a digastric muscle surface, a mylohyoid muscle surface, or a mandible of the patient. Aspect 38 can include or use, or can optionally be combined with the subject matter of Aspect 37, to optionally include or use an anchor configured to couple the first housing to a hyoid bone of the patient. 
Aspect 39 can include or use, or can optionally be combined with the subject matter of Aspect 37, to optionally include or use an anchor to couple the first housing to at least one of an omohyoid muscle, a digastric muscle, or a digastric tendon of the patient. Aspect 40 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 28 through 39 to optionally include the first housing configured for implantation such that a longitudinal axis of the housing can be provided substantially parallel to a sternocleidomastoid muscle of the patient. Aspect 41 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 28 through 40 to optionally include or use a second housing configured for implantation in the anterior cervical region of the patient. The second housing can be electrically coupled to at least one of the first housing and the first electrode lead. Aspect 42 can include or use, or can optionally be combined with the subject matter of Aspect 41, to optionally include the first and second housings configured for implantation on respective different sides of a sagittal midline of the patient. Aspect 43 can include, or can optionally be combined with the subject matter of one or any combination of the other Aspects herein to include or use subject matter (such as an apparatus, a method, a means for performing acts, or a machine readable medium including instructions that, when performed by the machine, that can cause the machine to perform acts), such as can include or use a method for treating a sleep disorder or a breathing disorder of a patient, the method comprising providing an implantable neuromodulation device in an anterior cervical region of a patient, providing a first electrode lead, coupled to signal generator circuitry in the device, at or near a first cranial nerve target in the patient, and applying a first neuromodulation signal to the first cranial nerve target using first electrical signals from the signal generator circuitry and using electrodes of the first electrode lead. In Aspect 43, the first electrical signals can be configured to treat the sleep disorder or breathing disorder of the patient. Aspect 44 can include or use, or can optionally be combined with the subject matter of Aspect 43, to optionally include applying the first neuromodulation signal to a hypoglossal nerve of the patient to treat obstructive sleep apnea. Aspect 45 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 43 or 44 to optionally include applying a neuromodulation therapy to one or more of a hypoglossal nerve, a trigeminal nerve, a vagus nerve, a glossopharyngeal nerve, and a facial nerve of the patient. Aspect 46 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 43 through 45 to optionally include coupling the housing and the electrode lead to tissue in the anterior cervical region of the patient. Aspect 47 can include or use, or can optionally be combined with the subject matter of Aspect 46, to optionally include coupling the housing to a digastric muscle or to a digastric tendon inside the patient body. Aspect 48 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 46 or 47 to optionally include coupling the housing to a mylohyoid muscle of the patient. 
Aspect 49 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 43 through 48 to optionally include providing a second electrode lead, coupled to the signal generator circuitry in the device housing, at or near a second cranial nerve target in the patient, and applying a second neuromodulation signal to the second cranial nerve target using second electrical signals from the signal generator circuitry and using electrodes of the second electrode lead. In Aspect 49, the second electrical signals can be configured to treat one or more of heart failure, hypertension, and atrial fibrillation. Aspect 50 can include or use, or can optionally be combined with the subject matter of Aspect 49, to optionally include applying the first neuromodulation signal to a hypoglossal nerve, and applying the second neuromodulation signal to at least one of a vagus nerve, a facial nerve, and a glossopharyngeal nerve. Aspect 51 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 49 or 50 to optionally include applying the first and second neuromodulation signals concurrently to the first and second cranial nerve targets. Aspect 52 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 49 through 51 to optionally include applying the neuromodulation signals in a time-multiplexed manner to the first and second cranial nerve targets. Aspect 53 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 49 through 52 to optionally include applying respective pulse signals to the targets, and the pulses can be at least partially overlapping in time. Aspect 54 can include, or can optionally be combined with the subject matter of one or any combination of the other Aspects herein to include or use subject matter (such as an apparatus, a method, a means for performing acts, or a machine readable medium including instructions that, when performed by the machine, that can cause the machine to perform acts), such as can include or use an implantable neuromodulation system comprising a first housing disposed in a first cervical triangle region of a patient, a second housing disposed in a different second cervical triangle region of the patient, and an interface coupling first circuitry in the first housing and second circuitry in the second housing. In Aspect 54, the first circuitry can include signal generator circuitry configured to generate neuromodulation signals to treat a breathing disorder or a sleep disorder of the patient, among other disorders, and the second circuitry can include a power storage device. Aspect 55 can include or use, or can optionally be combined with the subject matter of Aspect 54, to optionally include the first and second cervical triangle regions being separated by a portion of a digastric muscle of the patient. Aspect 56 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 54 or 55 to optionally include or use circuitry configured to wirelessly receive a power signal from a source external to a body of the patient. 
Aspect 57 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 54 through 56 to optionally include the first housing configured to be implanted in one of a submandibular triangle and a submental triangle of the patient, and the second housing configured to be implanted in the other one of the submandibular triangle and the submental triangle of the patient, and the first and second housings can be differently sized and shaped. Aspect 58 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 54 through 57 to optionally include the second housing being volumetrically larger than the first housing. Aspect 59 can include or use, or can optionally be combined with the subject matter of one or any combination of Aspects 54 through 58 to optionally include or use one or more physiologic status sensors disposed in or coupled to one of the first and second housings. In Aspect 59, the one or more physiologic status sensors can be configured to measure information about a respiration, heart rate, blood pressure, sympathetic tone, parasympathetic tone, posture, activity level, body impedance, or electric activity of the patient. In Aspect 59, the signal generator circuitry can be configured to generate the neuromodulation signals to treat obstructive sleep apnea or other disorder based on the information from the physiologic status sensor. Each of these non-limiting Aspects can stand on its own, or can be combined in various permutations or combinations with one or more of the other Aspects and examples discussed herein. The above description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the invention can be practiced. These embodiments are also referred to herein as “examples.” Such examples can include elements in addition to those shown or described. However, the present inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the present inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein. In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In this document, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended, that is, a system, device, article, composition, formulation, or process that includes elements in addition to those listed after such a term in a claim are still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements on their objects. 
Method examples described herein can be machine or computer-implemented at least in part, such as using the implantable system502, the external system520, the machine1700, or using the other systems, devices, or components discussed herein. Some examples can include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device to perform methods, such as neuromodulation therapy control methods, as described in the above examples, such as to treat one or more diseases or disorders. In an example, the instructions can include instructions to receive sensor data from one or more physiologic sensors and, based on the sensor data, titrate a therapy. An implementation of such methods can include code, such as microcode, assembly language code, a higher-level language code, or the like. Such code can include computer readable instructions for performing various methods. The code may form portions of computer program products. Further, in an example, the code can be tangibly stored on one or more volatile, non-transitory, or non-volatile tangible computer-readable media, such as during execution or at other times. Examples of these tangible computer-readable media can include, but are not limited to, hard disks, removable magnetic disks, removable optical disks (e.g., compact disks and digital video disks), magnetic cassettes, memory cards or sticks, random access memories (RAMs), read only memories (ROMs), and the like. The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments can be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that such embodiments can be combined with each other in various combinations or permutations. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
Like reference characters denote like elements throughout the description and figures.
DETAILED DESCRIPTION
The disclosure describes examples of medical devices, systems, and techniques for managing the storage of sensed information, such as ECAP information. Electrical stimulation therapy is typically delivered to a target tissue (e.g., one or more nerves or muscle) of a patient via two or more electrodes. Parameters that define the electrical stimulation therapy (e.g., electrode combination, voltage or current amplitude, pulse width, pulse frequency, duty cycle, etc.) are selected by a clinician and/or the patient to provide relief from various symptoms, such as pain, muscle disorders, etc. However, as the patient moves, the distance between the electrodes and the target tissues changes. Posture changes or patient activity can cause electrodes to move closer or farther from target nerves. Lead migration over time may also change this distance between electrodes and target tissue. In some examples, a patient event may include transient patient conditions such as coughing, sneezing, laughing, Valsalva maneuvers, leg lifting, cervical motions, or deep breathing that may temporarily cause the stimulation electrodes of the medical device to move closer to the target tissue of the patient. When the electrodes are closer to the nerves, the patient's perception of electrical stimulation therapy may change. Since neural recruitment is a function of stimulation intensity and distance between the target tissue and the electrodes, movement of the electrode closer to the target tissue may result in increased perception by the patient (e.g., possible uncomfortable, undesired, or painful sensations), and movement of the electrode further from the target tissue may result in decreased efficacy of the therapy for the patient. For example, if stimulation is held consistent and the stimulation electrodes are moved closer to the target tissue, the patient may perceive the stimulation as more intense, uncomfortable, or even painful. Conversely, consistent stimulation while electrodes are moved farther from target tissue may result in the patient perceiving less intense stimulation which may reduce the therapeutic effect for the patient. Discomfort or pain caused by patient events that include transient patient conditions may be referred to herein as “transient overstimulation.” Therefore, in some examples, it may be beneficial to adjust stimulation parameters in response to patient movement or other conditions that can cause transient overstimulation. An ECAP may be evoked by a stimulation pulse delivered to nerve fibers of the patient. After being evoked, the ECAP may propagate down the nerve fibers away from the initial stimulus. Sensing circuitry of the medical device may, in some cases, detect this ECAP as an ECAP signal. Characteristics of the detected ECAP signal may indicate that the distance between electrodes and target tissue is changing. For example, a sharp increase in ECAP amplitude over a short period of time (e.g., less than one second) may indicate that the distance between the electrodes and the target tissue is decreasing due to a transient patient action such as a cough. A gradual increase in ECAP amplitude over a longer period of time (e.g., days, weeks, or months) may indicate that the distance between the electrodes and the target tissue is decreasing due to long-term lead migration after the medical device is implanted. 
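For illustration only (not part of the disclosed embodiments), the contrast drawn above between a sharp, sub-second rise in ECAP amplitude and a slow drift over days or weeks can be captured with a simple rate-of-change check. The function name and every threshold and window length in the sketch below are hypothetical placeholders rather than values taken from this disclosure.

```python
# Minimal illustration (not the disclosed algorithm): classify a change in
# ECAP amplitude as a "transient" event (fast rise, e.g., a cough) or possible
# long-term lead migration (slow drift) from timestamped samples.
# All thresholds and window lengths below are hypothetical placeholders.

def classify_amplitude_trend(samples, fast_window_s=1.0, fast_rise_uv=10.0,
                             slow_window_s=7 * 24 * 3600.0, slow_rise_uv=10.0):
    """samples: list of (timestamp_seconds, amplitude_microvolts), oldest first."""
    if len(samples) < 2:
        return "insufficient data"
    t_last, a_last = samples[-1]

    # Sharp increase within the last ~1 second -> likely transient patient event.
    recent = [a for (t, a) in samples if t_last - t <= fast_window_s]
    if recent and a_last - min(recent) >= fast_rise_uv:
        return "transient event (e.g., cough or posture change)"

    # Gradual increase relative to samples at least a week old -> possible migration.
    old = [a for (t, a) in samples if t_last - t >= slow_window_s]
    if old and a_last - old[-1] >= slow_rise_uv:
        return "possible long-term lead migration"

    return "no significant change"


if __name__ == "__main__":
    now = 30 * 24 * 3600.0
    history = [(0.0, 40.0), (now - 0.5, 42.0), (now, 55.0)]
    print(classify_amplitude_trend(history))  # -> transient event ...
```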
It may be beneficial to adjust one or more therapy parameter values in order to prevent the patient from experiencing uncomfortable sensations due to one or both of short-term movement of the electrodes relative to the target tissue and long-term movement of the electrodes relative to the target tissue. In order to facilitate the sensing of ECAPs, in some examples, the medical device can deliver pulses as part of a therapy (e.g., informed pulses) and also deliver a plurality of control pulses that are designed to elicit detectable ECAPs when the informed pulses do not elicit detectable ECAPs. For example, the control pulse duration may be shorter than the informed pulse to reduce or eliminate the signal artifact that is caused by the informed pulse and prevents or limits detection of the ECAP received at the sensing electrode(s). In particular embodiments, the control pulse is short enough that the pulse ends prior to the arrival of all, or most, of the ECAP at the sensing electrode(s). In this manner, the medical device may interleave the plurality of control pulses with at least some informed pulses of the plurality of informed pulses. For example, the medical device may deliver informed pulses for a period of time before delivering a control pulse and sensing the corresponding ECAP (if any). The medical device can then resume delivery of the informed pulses for another period of time. In some examples, a pulse duration of the control pulses is less than a pulse duration of the informed pulses and the pulse duration of the control pulses is short enough so that the medical device can sense an individual ECAP for each control pulse. In some examples, the control pulses may provide or contribute to the therapy perceived by the patient. In some examples, the medical device can use a characteristic value that represents the sensed ECAP signal to adjust one or more parameter values that define electrical stimulation. The characteristic value may be an amplitude value, slope of one or more peaks, area under one or more peaks of the ECAP signal, or any other value that characterizes the magnitude of the sensed ECAP signal. Although the medical device may use the characteristic value of the ECAP signal, the medical device may not store the characteristics of the respective ECAP signals or the ECAP signals themselves. Continuous storage of this ECAP information may be impractical due to limited memory capacity for data within the medical device and/or increased power consumption required to continually process and store the ECAP information. In addition, this amount of ECAP information may be too large for a clinician to review for pertinent information related to the patient. As described herein, devices, systems, and techniques may be configured to manage storage of sensed information. The IMD may monitor one or more characteristics of the ECAP signals over time, and the IMD may, or may not, adjust one or more parameters that at least partially define electrical stimulation. In either case, the IMD may selectively store information representative of the sensed ECAP signals. This information may be useful for further investigation into patient activity, monitoring patient symptoms and/or patient disease progression, or determining efficacy of electrical stimulation therapy over time, as some examples. The ECAP information representing the sensed ECAP signals may include one or more various types of information. 
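As a rough illustration of the characteristic value described above, the short sketch below derives a peak-to-peak amplitude, a rectified area, and a maximum slope from a sampled waveform. It is a generic sketch rather than the signal chain of any actual device; the helper name and the sample values are invented.

```python
# Illustrative only: derive simple characteristic values from a sampled ECAP
# waveform (microvolts). Real devices identify physiologic peaks (e.g., N1, P2);
# here the global minimum and maximum are used as a stand-in.

def ecap_characteristics(waveform_uv, sample_period_s):
    peak_max = max(waveform_uv)
    peak_min = min(waveform_uv)
    peak_to_peak = peak_max - peak_min            # amplitude between two peaks
    # Rectified area under the curve, a coarse magnitude measure.
    area = sum(abs(v) for v in waveform_uv) * sample_period_s
    # Steepest rising slope between adjacent samples.
    max_slope = max(
        (b - a) / sample_period_s for a, b in zip(waveform_uv, waveform_uv[1:])
    )
    return {"peak_to_peak_uv": peak_to_peak,
            "area_uv_s": area,
            "max_slope_uv_per_s": max_slope}


if __name__ == "__main__":
    ecap = [0.0, -12.0, -30.0, -8.0, 15.0, 22.0, 6.0, 0.0]   # toy samples
    print(ecap_characteristics(ecap, sample_period_s=1e-4))
```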
For example, the ECAP information may include one or more characteristics indicative of the magnitude of respective ECAP signals. These characteristics may be an amplitude between two peaks in the ECAP signal, an area under one or more peaks of the ECAP signal, a steepness of one or more slopes of the ECAP signal, or other such aspects of the ECAP signal. In addition, or alternatively, the ECAP information may include a waveform representative of the sensed ECAP signals. The IMD, or another device separate from the IMD, may store the ECAP information in a memory in response to receiving a trigger signal that requests long-term storage of at least a portion of the ECAP information. This memory may be a long-term memory that is different from a temporary memory that stores the ECAP information only for a short period of time. For example, the temporary memory may be a first-in-first-out (FIFO) memory or other rolling memory that only stores the ECAP information for a predetermined period of time or a predetermined amount of data. In addition, or alternatively, to storing the ECAP information permanently (e.g., until the ECAP information can be transmitted to another device), the IMD may adjust other ECAP-related collection functions, such as increasing a sample rate of the ECAP signals and/or increasing the rate at which the IMD senses ECAP signals. In some examples, the IMD may store a marker (e.g., a timestamp) indicating the timing of the trigger signal with respect to the stored ECAP information. In this manner, the ECAP information may be analyzed with respect to the timing of an event associated with the trigger signal. The trigger signal may be a user input indicating a patient event, a characteristic of an ECAP signal exceeding a threshold, a user-requested change to one or more stimulation parameters that define electrical stimulation, or any other type of event. The devices, systems, and techniques described herein may provide one or more advantages. For example, storing ECAP information in response to receiving a trigger signal may enable the storage of high fidelity ECAP information that could not otherwise be stored continuously. A marker associated with the trigger signal being stored with the ECAP information may also enable the analysis of the ECAP information with respect to the event that elicited the trigger signal. In addition, storing and/or capturing higher fidelity ECAP information (e.g., information from more frequently sensed ECAP signals) in response to the trigger signal may enable improved analysis capabilities for the system while conserving battery power when such higher fidelity ECAP information is not needed for therapy and/or later analysis. In some examples, the system may store this higher fidelity ECAP information as part of a trial stimulation period in which the clinician and patient can evaluate whether or not electrical stimulation can provide effective treatment of the patient's conditions. In some examples, the IMD may deliver stimulation that includes pulses (e.g., control pulses) that contribute to therapy and also elicit detectable ECAP signals. In other examples, the IMD may deliver the stimulation pulses to include control pulses and informed pulses. 
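The temporary rolling memory, trigger signal, and timestamp marker described above can be pictured with the following conceptual sketch. It uses invented names (EcapRecorder, on_trigger) and an arbitrary buffer capacity, and it is not the memory architecture of any particular IMD.

```python
# Conceptual sketch of trigger-driven ECAP storage: a bounded rolling buffer
# (temporary memory) plus a long-term store that is only written when a
# trigger signal arrives. Capacities and field names are hypothetical.
from collections import deque
import time


class EcapRecorder:
    def __init__(self, rolling_capacity=256):
        self._rolling = deque(maxlen=rolling_capacity)  # FIFO: old data drops off
        self._long_term = []                            # persisted until uplinked

    def record(self, ecap_info):
        """Called for every sensed ECAP; only kept temporarily."""
        self._rolling.append(ecap_info)

    def on_trigger(self, reason):
        """Trigger signal (patient event, threshold crossing, parameter change):
        snapshot the rolling buffer with a marker tying it to the event."""
        snapshot = {
            "marker_timestamp": time.time(),
            "trigger_reason": reason,
            "ecap_info": list(self._rolling),
        }
        self._long_term.append(snapshot)
        return snapshot


if __name__ == "__main__":
    rec = EcapRecorder(rolling_capacity=4)
    for amp in (40, 41, 43, 58, 60):
        rec.record({"peak_to_peak_uv": amp})
    rec.on_trigger("patient-reported event")
    print(len(rec._long_term), "snapshot(s) stored")
```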
Nerve impulses detectable as the ECAP signal travel quickly along the nerve fiber after the delivered stimulation pulse first depolarizes the nerve. Therefore, if the stimulation pulse delivered by first electrodes has a pulse width that is too long, different electrodes configured to sense the ECAP will sense the stimulation pulse itself as an artifact that obscures the lower amplitude ECAP signal. However, the ECAP signal loses fidelity as the electrical potentials propagate from the electrical stimulus because different nerve fibers propagate electrical potentials at different speeds. Therefore, sensing the ECAP at a far distance from the stimulating electrodes may avoid the artifact caused by a stimulation pulse with a long pulse width, but the ECAP signal may lose fidelity needed to detect changes to the ECAP signal that occur when the electrode to target tissue distance changes. In other words, the system may not be able to identify, at any distance from the stimulation electrodes, ECAPs from stimulation pulses configured to provide a therapy to the patient. Therefore, the IMD may employ control pulses configured to elicit detectable ECAPs and informed pulses that may contribute to therapeutic effects for the patient but may not elicit detectable ECAPs. In these examples, an IMD is configured to deliver a plurality of informed pulses configured to provide a therapy to the patient and a plurality of control pulses that may or may not contribute to therapy. At least some of the control pulses may elicit a detectable ECAP signal without the primary purpose of providing a therapy to the patient. The control pulses may be interleaved with the delivery of the informed pulses. For example, the medical device may alternate the delivery of informed pulses with control pulses such that a control pulse is delivered, and an ECAP signal is sensed, between consecutive informed pulses. In some examples, multiple control pulses are delivered, and respective ECAP signals sensed, between the delivery of consecutive informed pulses. In some examples, multiple informed pulses will be delivered between consecutive control pulses. In any case, the informed pulses may be delivered according to a predetermined pulse frequency selected so that the informed pulses can produce a therapeutic result for the patient. One or more control pulses are then delivered, and the respective ECAP signals sensed, within one or more time windows between consecutive informed pulses delivered according to the predetermined pulse frequency. In this manner, a medical device can deliver informed pulses from the medical device uninterrupted while ECAP signals are sensed from control pulses delivered during times at which the informed pulses are not being delivered. In other examples described herein, ECAP signals are sensed by the medical device in response to the informed pulses delivered by the medical device, and control pulses are not used to elicit ECAPs. Although electrical stimulation is generally described herein in the form of electrical stimulation pulses, electrical stimulation may be delivered in non-pulse form in other examples. For example, electrical stimulation may be delivered as a signal having various waveform shapes, frequencies, and amplitudes. Therefore, electrical stimulation in the form of a non-pulse signal may be a continuous signal that may have a sinusoidal waveform or other continuous waveform. 
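The interleaving described above, with informed pulses delivered at a fixed therapy rate and control pulses (with their ECAP sensing windows) placed between selected pairs of consecutive informed pulses, can be sketched as a simple schedule. The rates, offsets, and function name below are arbitrary illustrative choices, not parameters from this disclosure.

```python
# Illustrative timeline of interleaved stimulation (not device firmware):
# informed pulses at a fixed therapy rate, with a control pulse (and ECAP
# sensing window) placed in the window after every Nth informed pulse.
# Rates and offsets are arbitrary example values.

def build_schedule(duration_s=0.1, informed_rate_hz=50.0,
                   control_every_n=2, control_offset_s=0.004):
    informed_period = 1.0 / informed_rate_hz
    events = []
    t = 0.0
    index = 0
    while t < duration_s:
        events.append((round(t, 6), "informed pulse"))
        # Place a control pulse between this informed pulse and the next one.
        if index % control_every_n == 0:
            events.append((round(t + control_offset_s, 6),
                           "control pulse + ECAP sense"))
        t += informed_period
        index += 1
    return sorted(events)


if __name__ == "__main__":
    for timestamp, kind in build_schedule():
        print(f"{timestamp:8.4f} s  {kind}")
```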
FIG.1is a conceptual diagram illustrating an example system100that includes an implantable medical device (IMD)110configured to deliver spinal cord stimulation (SCS) therapy, processing circuitry140, and an external programmer150, in accordance with one or more techniques of this disclosure. Although the techniques described in this disclosure are generally applicable to a variety of medical devices including external devices and IMDs, application of such techniques to IMDs and, more particularly, implantable electrical stimulators (e.g., neurostimulators) will be described for purposes of illustration. More particularly, the disclosure will refer to an implantable SCS system for purposes of illustration, but without limitation as to other types of medical devices or other therapeutic applications of medical devices. As shown inFIG.1, system100includes an IMD110, leads130A and130B, and external programmer150shown in conjunction with a patient105, who is ordinarily a human patient. In the example ofFIG.1, IMD110is an implantable electrical stimulator that is configured to generate and deliver electrical stimulation therapy to patient105via one or more electrodes of leads130A and/or130B (collectively, “leads130”), e.g., for relief of chronic pain or other symptoms. In other examples, IMD110may be coupled to a single lead carrying multiple electrodes or more than two leads each carrying multiple electrodes. As a part of delivering stimulation pulses of the electrical stimulation therapy, IMD110may be configured to generate and deliver control pulses configured to elicit ECAP signals. The control pulses may provide therapy in some examples. In other examples, IMD110may deliver informed pulses that contribute to the therapy for the patient, but which do not elicit detectable ECAPs. IMD110may be a chronic electrical stimulator that remains implanted within patient105for weeks, months, or even years. In other examples, IMD110may be a temporary, or trial, stimulator used to screen or evaluate the efficacy of electrical stimulation for chronic therapy. In one example, IMD110is implanted within patient105, while in another example, IMD110is an external device coupled to percutaneously implanted leads. In some examples, IMD110uses one or more leads, while in other examples, IMD110is leadless. IMD110may be constructed of any polymer, metal, or composite material sufficient to house the components of IMD110(e.g., components illustrated inFIG.2) within patient105. In this example, IMD110may be constructed with a biocompatible housing, such as titanium or stainless steel, or a polymeric material such as silicone, polyurethane, or a liquid crystal polymer, and surgically implanted at a site in patient105near the pelvis, abdomen, or buttocks. In other examples, IMD110may be implanted within other suitable sites within patient105, which may depend, for example, on the target site within patient105for the delivery of electrical stimulation therapy. The outer housing of IMD110may be configured to provide a hermetic seal for components, such as a rechargeable or non-rechargeable power source. In addition, in some examples, the outer housing of IMD110is selected from a material that facilitates receiving energy to charge the rechargeable power source. Electrical stimulation energy, which may be constant current or constant voltage-based pulses, for example, is delivered from IMD110to one or more target tissue sites of patient105via one or more electrodes (not shown) of implantable leads130. 
In the example ofFIG.1, leads130carry electrodes that are placed adjacent to the target tissue of spinal cord120. One or more of the electrodes may be disposed at a distal tip of a lead130and/or at other positions at intermediate points along the lead. Leads130may be implanted and coupled to IMD110. The electrodes may transfer electrical stimulation generated by an electrical stimulation generator in IMD110to tissue of patient105. Although leads130may each be a single lead, lead130may include a lead extension or other segments that may aid in implantation or positioning of lead130. In some other examples, IMD110may be a leadless stimulator with one or more arrays of electrodes arranged on a housing of the stimulator rather than leads that extend from the housing. In addition, in some other examples, system100may include one lead or more than two leads, each coupled to IMD110and directed to similar or different target tissue sites. The electrodes of leads130may be electrode pads on a paddle lead, circular (e.g., ring) electrodes surrounding the body of the lead, conformable electrodes, cuff electrodes, segmented electrodes (e.g., electrodes disposed at different circumferential positions around the lead instead of a continuous ring electrode), any combination thereof (e.g., ring electrodes and segmented electrodes) or any other type of electrodes capable of forming unipolar, bipolar or multipolar electrode combinations for therapy. Ring electrodes arranged at different axial positions at the distal ends of lead130will be described for purposes of illustration. The deployment of electrodes via leads130is described for purposes of illustration, but arrays of electrodes may be deployed in different ways. For example, a housing associated with a leadless stimulator may carry arrays of electrodes, e.g., rows and/or columns (or other patterns), to which shifting operations may be applied. Such electrodes may be arranged as surface electrodes, ring electrodes, or protrusions. As a further alternative, electrode arrays may be formed by rows and/or columns of electrodes on one or more paddle leads. In some examples, electrode arrays include electrode segments, which are arranged at respective positions around a periphery of a lead, e.g., arranged in the form of one or more segmented rings around a circumference of a cylindrical lead. In other examples, one or more of leads130are linear leads having 8 ring electrodes along the axial length of the lead. In another example, the electrodes are segmented rings arranged in a linear fashion along the axial length of the lead and at the periphery of the lead. The stimulation parameters of a therapy stimulation program that define the stimulation pulses of electrical stimulation therapy by IMD110through the electrodes of leads130may include information identifying which electrodes have been selected for delivery of stimulation according to a stimulation program, the polarities of the selected electrodes, i.e., the electrode combination for the program, and voltage or current amplitude, pulse frequency, pulse width, and pulse shape of stimulation delivered by the electrodes. These stimulation parameters of stimulation pulses (e.g., control pulses and/or informed pulses) are typically predetermined parameter values determined prior to delivery of the stimulation pulses (e.g., set according to a stimulation program). However, in some examples, system100changes one or more parameter values automatically based on one or more factors or based on user input. 
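As a loose illustration of the parameter set a therapy stimulation program carries (electrode combination and polarities, amplitude, pulse frequency, pulse width, and pulse shape), the record type below uses invented field names and example values; it is not the data model of IMD110or external programmer150.

```python
# Generic sketch of a therapy stimulation program record. Field names and the
# example values are illustrative, not taken from any actual device interface.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class TherapyStimulationProgram:
    # Electrode combination: electrode index -> polarity (+1 anode, -1 cathode).
    electrode_polarity: Dict[int, int] = field(default_factory=dict)
    amplitude_ma: float = 3.0          # current-controlled amplitude
    pulse_frequency_hz: float = 50.0   # pulse rate
    pulse_width_us: float = 400.0      # informed-pulse width
    pulse_shape: str = "biphasic"      # e.g., biphasic with active recharge


if __name__ == "__main__":
    program = TherapyStimulationProgram(
        electrode_polarity={2: -1, 3: +1},  # bipolar pair on one lead
        amplitude_ma=2.5,
        pulse_frequency_hz=60.0,
        pulse_width_us=350.0,
    )
    print(program)
```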
An ECAP test stimulation program may define stimulation parameter values that define control pulses delivered by IMD110through at least some of the electrodes of leads130. These stimulation parameter values may include information identifying which electrodes have been selected for delivery of control pulses, the polarities of the selected electrodes, i.e., the electrode combination for the program, and voltage or current amplitude, pulse frequency, pulse width, and pulse shape of stimulation delivered by the electrodes. The stimulation signals (e.g., one or more stimulation pulses or a continuous stimulation waveform) defined by the parameters of each ECAP test stimulation program are configured to evoke a compound action potential from nerves. In some examples, the ECAP test stimulation program defines when the control pulses are to be delivered to the patient based on the frequency and/or pulse width of the informed pulses when informed pulses are also delivered. In some examples, the stimulation defined by each ECAP test stimulation program is not intended to provide or contribute to therapy for the patient. In other examples, the control pulses defined by each ECAP test stimulation program may both elicit detectable ECAP signals and contribute to therapy. In this manner, the ECAP test stimulation program may define stimulation parameters that are the same as or similar to the stimulation parameters of therapy stimulation programs. AlthoughFIG.1is directed to SCS therapy, e.g., used to treat pain, in other examples system100may be configured to treat any other condition that may benefit from electrical stimulation therapy. For example, system100may be used to treat tremor, Parkinson's disease, epilepsy, a pelvic floor disorder (e.g., urinary incontinence or other bladder dysfunction, fecal incontinence, pelvic pain, bowel dysfunction, or sexual dysfunction), obesity, gastroparesis, or psychiatric disorders (e.g., depression, mania, obsessive compulsive disorder, anxiety disorders, and the like). In this manner, system100may be configured to provide therapy taking the form of deep brain stimulation (DBS), peripheral nerve stimulation (PNS), peripheral nerve field stimulation (PNFS), cortical stimulation (CS), pelvic floor stimulation, gastrointestinal stimulation, or any other stimulation therapy capable of treating a condition of patient105. In some examples, lead130includes one or more sensors configured to allow IMD110to monitor one or more parameters of patient105, such as patient activity, pressure, temperature, or other characteristics. The one or more sensors may be provided in addition to, or in place of, therapy delivery by lead130. IMD110is configured to deliver electrical stimulation therapy to patient105via selected combinations of electrodes carried by one or both of leads130, alone or in combination with an electrode carried by or defined by an outer housing of IMD110. The target tissue for the electrical stimulation therapy may be any tissue affected by electrical stimulation, which may be in the form of electrical stimulation pulses or continuous waveforms. In some examples, the target tissue includes nerves, smooth muscle or skeletal muscle. In the example illustrated byFIG.1, the target tissue is tissue proximate spinal cord120, such as within an intrathecal space or epidural space of spinal cord120, or, in some examples, adjacent nerves that branch off spinal cord120. 
Leads130may be introduced into spinal cord120via any suitable region, such as the thoracic, cervical or lumbar regions. Stimulation of spinal cord120may, for example, prevent pain signals from traveling through spinal cord120and to the brain of patient105. Patient105may perceive the interruption of pain signals as a reduction in pain and, therefore, efficacious therapy results. In other examples, stimulation of spinal cord120may produce paresthesia which may reduce the perception of pain by patient105, and thus, provide efficacious therapy results. IMD110generates and delivers electrical stimulation therapy to a target stimulation site within patient105via the electrodes of leads130according to one or more therapy stimulation programs. A therapy stimulation program defines values for one or more parameters that define an aspect of the therapy delivered by IMD110according to that program. For example, a therapy stimulation program that controls delivery of stimulation by IMD110in the form of pulses may define values for voltage or current pulse amplitude, pulse width, and pulse rate (e.g., pulse frequency) for stimulation pulses delivered by IMD110according to that program. In some examples where ECAP signals cannot be detected from the types of pulses intended to be delivered to provide therapy to the patient, control pulses and informed pulses may be delivered. For example, IMD110is configured to deliver control stimulation to patient105via a combination of electrodes of leads130, alone or in combination with an electrode carried by or defined by an outer housing of IMD110. The tissue targeted by the control stimulation may be the same tissue targeted by the electrical stimulation therapy, but IMD110may deliver control stimulation pulses via the same, at least some of the same, or different electrodes. Since control stimulation pulses are delivered in an interleaved manner with informed pulses, a clinician and/or user may select any desired electrode combination for informed pulses. Like the electrical stimulation therapy, the control stimulation may be in the form of electrical stimulation pulses or continuous waveforms. In one example, each control stimulation pulse may include a balanced, bi-phasic square pulse that employs an active recharge phase. However, in other examples, the control stimulation pulses may include a monophasic pulse followed by a passive recharge phase. In other examples, a control pulse may include an imbalanced bi-phasic portion and a passive recharge portion. Although not necessary, a bi-phasic control pulse may include an interphase interval between the positive and negative phase to promote propagation of the nerve impulse in response to the first phase of the bi-phasic pulse. The control stimulation may be delivered without interrupting the delivery of the electrical stimulation informed pulses, such as during the window between consecutive informed pulses. The control pulses may elicit an ECAP signal from the tissue, and IMD110may sense the ECAP signal via two or more electrodes on leads130. In cases where the control stimulation pulses are applied to spinal cord120, the signal may be sensed by IMD110from spinal cord120. IMD110may deliver control stimulation to a target stimulation site within patient105via the electrodes of leads130according to one or more ECAP test stimulation programs. The one or more ECAP test stimulation programs may be stored in a storage device of IMD110. 
Each ECAP test program of the one or more ECAP test stimulation programs includes values for one or more parameters that define an aspect of the control stimulation delivered by IMD110according to that program, such as current or voltage amplitude, pulse width, pulse frequency, electrode combination, and, in some examples timing based on informed pulses to be delivered to patient105. In some examples, IMD110delivers control stimulation to patient105according to multiple ECAP test stimulation programs. A user, such as a clinician or patient105, may interact with a user interface of an external programmer150to program IMD110. Programming of IMD110may refer generally to the generation and transfer of commands, programs, or other information to control the operation of IMD110. In this manner, IMD110may receive the transferred commands and programs from external programmer150to control electrical stimulation therapy (e.g., informed pulses) and control stimulation (e.g., control pulses). For example, external programmer150may transmit therapy stimulation programs, ECAP test stimulation programs, stimulation parameter adjustments, therapy stimulation program selections, ECAP test program selections, user input, or other information to control the operation of IMD110, e.g., by wireless telemetry or wired connection. As described herein, stimulation delivered to the patient may include control pulses, and, in some examples, stimulation may include control pulses and informed pulses. In some cases, external programmer150may be characterized as a physician or clinician programmer if it is primarily intended for use by a physician or clinician. In other cases, external programmer150may be characterized as a patient programmer if it is primarily intended for use by a patient. A patient programmer may be generally accessible to patient105and, in many cases, may be a portable device that may accompany patient105throughout the patient's daily routine. For example, a patient programmer may receive input from patient105when the patient wishes to terminate or change electrical stimulation therapy. In general, a physician or clinician programmer may support selection and generation of programs by a clinician for use by IMD110, whereas a patient programmer may support adjustment and selection of such programs by a patient during ordinary use. In other examples, external programmer150may include, or be part of, an external charging device that recharges a power source of IMD110. In this manner, a user may program and charge IMD110using one device, or multiple devices. As described herein, information may be transmitted between external programmer150and IMD110. Therefore, IMD110and external programmer150may communicate via wireless communication using any techniques known in the art. Examples of communication techniques may include, for example, radiofrequency (RF) telemetry and inductive coupling, but other techniques are also contemplated. In some examples, external programmer150includes a communication head that may be placed proximate to the patient's body near the IMD110implant site to improve the quality or security of communication between IMD110and external programmer150. Communication between external programmer150and IMD110may occur during power transmission or separate from power transmission. 
In some examples, IMD110, in response to commands from external programmer150, delivers electrical stimulation therapy according to a plurality of therapy stimulation programs to a target tissue site of the spinal cord120of patient105via electrodes (not depicted) on leads130. In some examples, IMD110modifies therapy stimulation programs as therapy needs of patient105evolve over time. For example, the modification of the therapy stimulation programs may cause the adjustment of at least one parameter of the plurality of informed pulses. When patient105receives the same therapy for an extended period, the efficacy of the therapy may be reduced. In some cases, parameters of the plurality of informed pulses may be automatically updated. Efficacy of electrical stimulation therapy may be indicated by one or more characteristics (e.g. an amplitude of or between one or more peaks or an area under the curve of one or more peaks) of an action potential that is evoked by a stimulation pulse delivered by IMD110(i.e., a characteristic of the ECAP signal). Electrical stimulation therapy delivery by leads130of IMD110may cause neurons within the target tissue to evoke a compound action potential that travels up and down the target tissue, eventually arriving at sensing electrodes of IMD110. Furthermore, control stimulation may also elicit at least one ECAP, and ECAPs responsive to control stimulation may also be a surrogate for the effectiveness of the therapy. The amount of action potentials (e.g., number of neurons propagating action potential signals) that are evoked may be based on the various parameters of electrical stimulation pulses such as amplitude, pulse width, frequency, pulse shape (e.g., slew rate at the beginning and/or end of the pulse), etc. The slew rate may define the rate of change of the voltage and/or current amplitude of the pulse at the beginning and/or end of each pulse or each phase within the pulse. For example, a very high slew rate indicates a steep or even near vertical edge of the pulse, and a low slew rate indicates a longer ramp up (or ramp down) in the amplitude of the pulse. In some examples, these parameters contribute to an intensity of the electrical stimulation. In addition, a characteristic of the ECAP signal (e.g., an amplitude) may change based on the distance between the stimulation electrodes and the nerves subject to the electrical field produced by the delivered control stimulation pulses. In one example, each therapy pulse may have a pulse width greater than approximately 300 μs, such as between approximately 300 μs and 1000 μs (i.e., 1 millisecond) in some examples. At these pulse widths, IMD110may not sufficiently detect an ECAP signal because the therapy pulse is also detected as an artifact that obscures the ECAP signal. If ECAPs are not adequately recorded, then ECAPs arriving at IMD110cannot be compared to the target ECAP characteristic (e.g. a target ECAP amplitude), and electrical therapy stimulation cannot be altered according to responsive ECAPs. When informed pulses have these longer pulse widths, IMD110may deliver control stimulation in the form of control pulses. The control pulses may have pulse widths of less than approximately 300 μs, such as a bi-phasic pulse with each phase having a duration of approximately 100 μs. 
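The pulse-width arithmetic elaborated in the next paragraph (each phase plus any interphase interval) can be checked with a short sketch; the class and helper names below are hypothetical and the sketch simply reproduces the 230 μs, 210 μs, and 270 μs examples.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Pulse:
    """A stimulation pulse described by its phase durations in microseconds."""
    phase_durations_us: List[float]      # e.g., positive and negative phases
    interphase_interval_us: float = 0.0  # inserted between consecutive phases

    def pulse_width_us(self) -> float:
        # Pulse width = every phase plus every interphase interval between phases.
        n_intervals = max(len(self.phase_durations_us) - 1, 0)
        return sum(self.phase_durations_us) + n_intervals * self.interphase_interval_us

# Bi-phasic examples with a 30 us interphase interval: 230, 210, and 270 us.
for phases in ([100, 100], [90, 90], [120, 120]):
    print(Pulse(phases, 30).pulse_width_us())
```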
Since the control pulses may have shorter pulse widths than the informed pulses, the ECAP signal may be sensed and identified following each control pulse and used to inform IMD110about any changes that should be made to the informed pulses (and control pulses in some examples). In general, the term “pulse width” refers to the collective duration of every phase, and interphase interval when appropriate, of a single pulse. A single pulse includes a single phase in some examples (i.e., a monophasic pulse) or two or more phases in other examples (e.g., a bi-phasic pulse or a tri-phasic pulse). The pulse width defines a period of time beginning with a start time of a first phase of the pulse and concluding with an end time of a last phase of the pulse (e.g., a biphasic pulse having a positive phase lasting 100 μs, a negative phase lasting 100 μs, and an interphase interval lasting 30 μs defines a pulse width of 230 μs). In another example, a control pulse may include a positive phase lasting 90 μs, a negative phase lasting 90 μs, and an interphase interval lasting 30 μs to define a pulse width of 210 μs. In another example, a control pulse may include a positive phase lasting 120 μs, a negative phase lasting 120 μs, and an interphase interval lasting 30 μs to define a pulse width of 270 μs. During delivery of control stimulation pulses defined by one or more ECAP test stimulation programs, IMD110, via two or more electrodes interposed on leads130, senses electrical potentials of tissue of the spinal cord120of patient105to measure the electrical activity of the tissue. IMD110senses ECAPs from the target tissue of patient105, e.g., with electrodes on one or more leads130and associated sense circuitry. In some examples, IMD110receives a signal indicative of the ECAP from one or more sensors, e.g., one or more electrodes and circuitry, internal or external to patient105. Such an example signal may include a signal indicating an ECAP of the tissue of patient105. Examples of the one or more sensors include one or more sensors configured to measure a compound action potential of patient105, or a physiological effect indicative of a compound action potential. For example, to measure a physiological effect indicative of a compound action potential, the one or more sensors may be an accelerometer, a pressure sensor, a bending sensor, a sensor configured to detect a posture of patient105, or a sensor configured to detect a respiratory function of patient105. In this manner, although the ECAP may be indicative of a posture change or other patient action, other sensors may also detect similar posture changes or movements using modalities separate from the ECAP. However, in other examples, external programmer150receives a signal indicating a compound action potential in the target tissue of patient105and transmits a notification to IMD110. The control stimulation parameters and the target ECAP characteristic values may be initially set at the clinic but may be set and/or adjusted at home by patient105. Once the target ECAP characteristic values are set, the example techniques allow for automatic adjustment of therapy pulse parameters to maintain consistent volume of neural activation and consistent perception of therapy for the patient when the electrode-to-neuron distance changes.
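The automatic adjustment toward a target ECAP characteristic value described above could be realized in many ways; the following is a minimal proportional-adjustment sketch, assuming a single current-amplitude parameter, with gain, step limit, and target values that are illustrative only and not taken from the disclosure.

```python
def adjust_toward_target(amplitude_ma: float,
                         measured_ecap_uV: float,
                         target_ecap_uV: float,
                         gain_ma_per_uV: float = 0.01,
                         max_step_ma: float = 0.2) -> float:
    """Nudge the stimulation amplitude so the measured ECAP characteristic
    moves toward the target value (simple, bounded proportional rule)."""
    error_uV = target_ecap_uV - measured_ecap_uV
    step = max(-max_step_ma, min(max_step_ma, gain_ma_per_uV * error_uV))
    return max(amplitude_ma + step, 0.0)

# Example: the measured ECAP shrinks (lead drifts away), so amplitude rises.
amp = 3.0
for measured in (20.0, 14.0, 12.0, 16.0, 20.0):
    amp = adjust_toward_target(amp, measured, target_ecap_uV=20.0)
    print(f"measured={measured} uV -> amplitude={amp:.2f} mA")
```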
The ability to change the stimulation parameter values may also allow the therapy to have long term efficacy, with the ability to keep the intensity of the stimulation (e.g., as indicated by the ECAP) consistent by comparing the measured ECAP values to the target ECAP characteristic value. IMD110may perform these changes without intervention by a physician or patient105. In some examples, the system changes the target ECAP characteristic value over a period of time. The system may be programmed to change the target ECAP characteristic in order to adjust the intensity of informed pulses to provide varying sensations to the patient (e.g., increase or decrease the volume of neural activation). In one example, a system may be programmed to oscillate a target ECAP characteristic value between a maximum target ECAP characteristic value and a minimum target ECAP characteristic value at a predetermined frequency to provide a sensation to the patient that may be perceived as a wave or other sensation that may provide therapeutic relief for the patient. The maximum target ECAP characteristic value, the minimum target ECAP characteristic value, and the predetermined frequency may be stored in the storage device of IMD110and may be updated in response to a signal from external programmer150(e.g., a user request to change the values stored in the storage device of IMD110). In other examples, the target ECAP characteristic value may be programmed to steadily increase or steadily decrease to a baseline target ECAP characteristic value over a period of time. In other examples, external programmer150may program the target ECAP characteristic value to automatically change over time according to other predetermined functions or patterns. In other words, the target ECAP characteristic value may be programmed to change incrementally by a predetermined amount or predetermined percentage, the predetermined amount or percentage being selected according to a predetermined function (e.g., sinusoid function, ramp function, exponential function, logarithmic function, or the like). Increments in which the target ECAP characteristic value is changed may be applied every certain number of pulses or after a certain unit of time. Although the system may change the target ECAP characteristic value, received ECAP signals may still be used by the system to adjust one or more parameter values of the informed pulses and/or control pulses in order to meet the target ECAP characteristic value. In some examples, IMD110includes stimulation generation circuitry configured to deliver electrical stimulation therapy to a patient, where the electrical stimulation therapy includes a plurality of informed pulses. Additionally, the stimulation generation circuitry of IMD110may be configured to deliver a plurality of control pulses, where the plurality of control pulses is interleaved with at least some informed pulses of the plurality of informed pulses. In some examples, IMD110includes sensing circuitry configured to detect a plurality of ECAPs, where the sensing circuitry is configured to detect each ECAP of the plurality of ECAPs after a control pulse of the plurality of control pulses and prior to a subsequent therapy pulse of the plurality of informed pulses.
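The interleaving just described, in which each control pulse is followed by an ECAP sensing window before the next informed pulse, can be pictured as an ordered schedule. The sketch below shows only the ordering of events; the function name and the one-control-per-informed-pulse ratio are assumptions for illustration, not disclosed timing.

```python
def build_pulse_schedule(n_informed: int, controls_per_informed: int = 1):
    """Return a schematic interleaved schedule: each informed (therapy) pulse
    is followed by one or more control pulses, and an ECAP sensing window is
    opened after each control pulse and before the next informed pulse."""
    schedule = []
    for i in range(n_informed):
        schedule.append(("informed_pulse", i))
        for c in range(controls_per_informed):
            schedule.append(("control_pulse", (i, c)))
            schedule.append(("sense_ecap", (i, c)))  # sensed after the control pulse
    return schedule

for event in build_pulse_schedule(2):
    print(event)
```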
Even though the plurality of ECAPs may be received by IMD110based on IMD110delivering the plurality of control pulses (e.g., the plurality of control pulses may evoke the plurality of ECAPs received by IMD110), the plurality of ECAPs may indicate an efficacy of the plurality of informed pulses. In other words, although the plurality of ECAPs might, in some cases, not be evoked by the plurality of informed pulses themselves, the plurality of ECAPs may still reveal one or more properties of the plurality of informed pulses or one or more effects of the plurality of informed pulses on patient105. In some examples, the plurality of informed pulses are delivered by IMD110at above a perception threshold, where patient105is able to perceive the plurality of informed pulses delivered at above the perception threshold. In other examples, the plurality of informed pulses are delivered by IMD110at below a perception threshold, where patient105is not able to perceive the plurality of informed pulses delivered at below the perception threshold. IMD110may include processing circuitry which, in some examples, is configured to process the plurality of ECAPs received by the sensing circuitry of IMD110. For example, the processing circuitry of IMD110is configured to determine if a parameter of a first ECAP is greater than a threshold parameter value. The processing circuitry may monitor a characteristic value of each ECAP of the plurality of ECAPs and the first ECAP may be the first ECAP of the plurality of ECAPs recorded by IMD110that exceeds the threshold characteristic value. In some examples, the characteristic monitored by IMD110may be an ECAP amplitude. The ECAP amplitude may, in some examples, be given by a voltage difference between an N1 ECAP peak and a P2 ECAP peak. More description related to the N1 ECAP peak, the P2 ECAP peak, and other ECAP peaks may be found below in theFIG.4description. In other examples, IMD110may monitor another characteristic or more than one characteristic of the plurality of ECAPs, such as current amplitude, slope, slew rate, ECAP frequency, ECAP duration, or any combination thereof. In some examples where the characteristic includes an ECAP amplitude, the threshold ECAP characteristic value may be selected from a range of approximately 5 microvolts (μV) to approximately 30 μV. These characteristics may be stored as ECAP information in a temporary memory and may, in response to receiving a trigger signal, be stored by IMD110in a long-term memory for later analysis and/or transmission to another device. If the processing circuitry of IMD110determines that the characteristic of the first ECAP is greater than the threshold ECAP characteristic value, the processing circuitry may decrement (or reduce) a parameter of a set of informed pulses delivered by the stimulation generation circuitry after the first ECAP. In some examples, in order to decrement the parameter of the set of informed pulses, IMD110may decrease a current amplitude of each consecutive therapy pulse of the set of informed pulses by a current amplitude value. In other examples, in order to decrement the parameter of the set of informed pulses, IMD110may decrease a magnitude of a parameter (e.g., voltage) other than current. Since the plurality of ECAPs may indicate some effects of the therapy delivered by IMD110on patient105, IMD110may decrement the parameter of the set of informed pulses in order to improve the therapy delivered to patient105.
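A minimal sketch of the threshold-driven decrement/increment behavior described above, assuming a single current-amplitude parameter and illustrative threshold, step size, and baseline values (none of which are taken from the disclosure):

```python
def adjust_amplitude(current_ma: float,
                     ecap_uV: float,
                     threshold_uV: float = 20.0,
                     step_ma: float = 0.1,
                     baseline_ma: float = 3.0) -> float:
    """One control iteration: decrement while the sensed ECAP characteristic
    exceeds the threshold, otherwise increment back toward the baseline."""
    if ecap_uV > threshold_uV:
        return max(current_ma - step_ma, 0.0)      # decrement while above threshold
    return min(current_ma + step_ma, baseline_ma)  # increment back, bounded by baseline

# Simulated sequence: the lead moves closer (large ECAPs), then settles again.
amplitude = 3.0
for ecap in [12, 14, 28, 31, 30, 27, 18, 15, 14]:
    amplitude = adjust_amplitude(amplitude, ecap)
    print(f"ECAP={ecap} uV -> amplitude={amplitude:.1f} mA")
```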
In some cases, ECAPs received by IMD110exceeding the threshold ECAP characteristic value may indicate to IMD110that one or more of leads130have moved closer to the target tissue (e.g., spinal cord120) of patient105. In these cases, if therapy delivered to spinal cord120is maintained at present levels, patient105may experience transient overstimulation since the distance between leads130and the target tissue of patient105is a factor in determining the effects of electrical stimulation therapy on patient105. Consequently, decrementing the first set of informed pulses based on determining that the first ECAP exceeds the threshold ECAP characteristic value may prevent patient105from experiencing transient overstimulation due to the electrical stimulation therapy delivered by IMD110. After determining that the first ECAP exceeds the threshold ECAP characteristic value, the processing circuitry of IMD110may continue to monitor the plurality of ECAPs detected by the sensing circuitry. In some examples, the processing circuitry of IMD110may identify a second ECAP which occurs after the first ECAP, where a characteristic of the second ECAP is less than the threshold ECAP characteristic value. The second ECAP may, in some cases, be a leading ECAP occurring after the first ECAP which includes a characteristic value less than the threshold ECAP characteristic value. In other words, each ECAP occurring between the first ECAP and the second ECAP may include a characteristic value greater than or equal to the threshold ECAP characteristic value. In this manner, IMD110may decrement the informed pulses delivered to patient105between the first ECAP and the second ECAP, decreasing a risk that patient105experiences transient overstimulation during a period of time extending between the reception of the first ECAP and the reception of the second ECAP. Based on the characteristic of the second ECAP being less than the threshold ECAP characteristic value, the processing circuitry of IMD110may increment a parameter of a second set of informed pulses delivered by the stimulation generation circuitry after the second ECAP. As described herein, system100may include a memory and processing circuitry. For example, IMD110and/or external programmer150may include some or all of the processing circuitry configured to perform various functions. System100may be configured to receive ECAP information, wherein the ECAP information comprises information from a plurality of ECAP signals, receive a trigger signal requesting long-term storage of at least a portion of the ECAP information in the memory, and responsive to receiving the trigger signal, store the at least portion of the ECAP information in the memory. Stimulation generation circuitry of IMD110may be configured to deliver electrical stimulation to a patient, wherein the electrical stimulation therapy comprises a plurality of stimulation pulses, and sensing circuitry of IMD110or another device may be configured to sense the plurality of ECAP signals. The sensing circuitry may be configured to sense each ECAP signal of the plurality of ECAP signals elicited by a respective stimulation pulse of the plurality of stimulation pulses, and the processing circuitry may be configured to receive the ECAP signals from the sensing circuitry as the ECAP information. The ECAP information may include at least one characteristic value representing respective ECAP signals of the plurality of ECAP signals.
The characteristic value may be at least one of an amplitude value, a slope value, or an area under peak value indicative of a respective ECAP signal. Since the ECAP signal represents the action potential from a plurality of nerves, where a stronger ECAP signal indicates activation of a greater number of nerves, the characteristic value may similarly be indicative of the number of activated nerves from the delivery of a stimulation pulse. In addition, or alternatively, the ECAP information may include a plurality of waveforms representing respective ECAP signals of the plurality of ECAP signals. This waveform may be analog or a digitized representation of an analog waveform of each ECAP signal. In this manner, the waveform being stored may represent the sensed ECAP signal from tissue over time as opposed to a value calculated from the waveform (such as an amplitude between two peaks of the ECAP waveform). In this manner, the ECAP information may include any type of information or data representing one or more sensed ECAP signals. Storing waveform information may provide a higher fidelity record when compared to storing other characteristic values representing the ECAP signal. System100may take different actions in response to receiving a trigger signal that requests long-term storage of at least a portion of the ECAP information. For example, system100may initially store received ECAP information in a temporary memory. In response to receiving the trigger signal, system100may store the ECAP information initially stored in the temporary memory in a long-term memory. System100may thus transfer the ECAP information to the long-term memory. In this manner, the temporary memory may act like a buffer so that system100can store, if needed, ECAP information received prior to the trigger signal in the long-term memory. In some examples, system100may be configured to delete the ECAP information stored in the temporary memory in response to a predetermined period of time elapsing. The predetermined period of time may be selected to be between a fraction of a second (e.g., one millisecond or one microsecond) and several hours, for example. In one example, the predetermined period of time may be between 20 seconds and 5 minutes. In other words, the temporary memory may have a buffer length from approximately 20 seconds to 5 minutes. For example, the buffer length may be 30 seconds or 60 seconds. In another example, the buffer may be less than or equal to two minutes. Other buffer lengths shorter or longer may be used in other examples. This time may elapse prior to receiving the trigger signal. System100may thus delete ECAP information from the temporary memory when not needed. These types of temporary memories may be similar to a first-in-first-out type of memory or buffer. In some other examples, system100may be configured to, responsive to receiving the trigger signal, control the sensing circuitry to increase a rate at which the sensing circuitry senses subsequent ECAP signals and store subsequent ECAP information comprising the subsequent ECAP signals in the memory. In this manner, system100may also increase the rate at which the stimulation generation circuitry generates stimulation pulses from which the increased rate of ECAP signals are sensed because typically one ECAP signal is sensed from one respective stimulation pulse. By increasing the rate of ECAP signal detection, system100may increase the fidelity of ECAP information in response to receiving the trigger signal.
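A rolling temporary buffer with time-based expiry and trigger-driven promotion to long-term storage, as described above, might be organized along the lines of the following simplified, in-memory sketch. The class and field names are hypothetical, and the 60-second buffer length is just one of the example values mentioned in the text.

```python
import time
from collections import deque

class EcapStore:
    """Temporary (FIFO) buffer of ECAP records plus a long-term list to
    which buffered records are promoted when a trigger is received."""

    def __init__(self, buffer_seconds: float = 60.0):
        self.buffer_seconds = buffer_seconds
        self.temporary = deque()   # (timestamp, characteristic_uV) pairs
        self.long_term = []

    def record(self, characteristic_uV: float, now: float | None = None) -> None:
        now = time.time() if now is None else now
        self.temporary.append((now, characteristic_uV))
        # Expire entries older than the buffer length (first-in-first-out).
        while self.temporary and now - self.temporary[0][0] > self.buffer_seconds:
            self.temporary.popleft()

    def trigger(self, marker: str, now: float | None = None) -> None:
        now = time.time() if now is None else now
        # Promote the buffered ECAP information and tag it with a marker
        # recording the trigger time and the type of event.
        self.long_term.append({"marker": marker, "time": now,
                               "ecaps": list(self.temporary)})
```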
The trigger signal may take any number of forms. For example, the trigger signal may include a request from an external device (e.g., external programmer150such as a patient programmer or clinician programmer) to store the ECAP information. A user may interact with the external device and request that the external device transmit a request to IMD110to store the ECAP information for long-term storage. In this manner, IMD110may receive the trigger signal from the external device. In other examples, IMD110may receive a user request directly from the user, such as a housing tap in which the patient taps the housing of IMD110through the skin of the patient. IMD110may include one or more accelerometers or other motion detection or presence detection devices configured to detect tapping on the housing. Therefore, IMD110may be configured to receive or detect the housing tap by receiving accelerometer data from an accelerometer within the housing of IMD110and determining that the accelerometer data indicates the user tapped IMD110. IMD110may employ a specific tapping algorithm to differentiate the tap from other movements or motions. For example, IMD110may need to detect a specific pattern of tapping, number of taps, magnitude of one or more taps, or any other type of housing tap that would be different from other routine bumps and movements to IMD110. In other examples, the trigger signal includes an indication that a characteristic of one ECAP signal of the plurality of ECAP signals exceeds a threshold. The threshold may indicate that a particular movement of patient105(e.g., coughing, sneezing, laughing, bending over, etc.) may have caused an undesirable sensation from stimulation, where further analysis of the ECAP signals may be desired. In some examples, the trigger signal may include an indication that a user changed one or more stimulation parameter values defining electrical stimulation deliverable to a patient. A user change to a stimulation parameter value may indicate ineffective therapy and/or undesirable sensations felt by patient105, so ECAP information related to such an event may be beneficial for further analysis into what may have caused the patient to change the stimulation parameter value. In some examples, system100may select specific portions of ECAP information according to the trigger signal. For example, in response to receiving the trigger signal, system100may select the at least portion of the ECAP information representative of one or more ECAP signals of the plurality of ECAP signals sensed between an initial time and a final time. The initial time may occur at a first period of time prior to receiving the trigger signal, and the final time may occur at a second period of time after receiving the trigger signal. In this manner, system100may store, in the long-term memory, ECAP information recorded and received prior to the trigger event and ECAP information following the trigger event. The resulting stored ECAP information may include information representative of ECAP signals sensed before and after the event which caused the trigger signal to be generated. In this manner, the system may enable capture of ECAP information leading up to an identified trigger signal. For a user, this may result in the patient being able to record ECAP information that was captured prior to feeling a sensation that causes the patient to desire to store the ECAP information and/or prior to the patient being able to provide the input request to the programmer.
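Selecting ECAP information between an initial time before the trigger and a final time after it, as described above, reduces to a simple time-window filter. In the sketch below the 30-second window lengths are illustrative assumptions, not values taken from the disclosure.

```python
def select_window(ecaps, trigger_time, pre_s=30.0, post_s=30.0):
    """Keep ECAP records sensed between an initial time (pre_s seconds before
    the trigger) and a final time (post_s seconds after the trigger).
    `ecaps` is an iterable of (timestamp, value) pairs."""
    start, end = trigger_time - pre_s, trigger_time + post_s
    return [(t, v) for (t, v) in ecaps if start <= t <= end]

# Example: keep samples from 30 s before to 30 s after a trigger at t = 100 s.
samples = [(t, 10 + (t % 7)) for t in range(0, 200, 5)]
print(select_window(samples, trigger_time=100))
```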
Additionally, the process avoids requiring the system or user to anticipate which ECAP information may be relevant prior to some event occurring. In some examples, system100may move ECAP information stored in the temporary memory before receiving the trigger signal into the long-term memory and then store ECAP information received after the trigger signal directly into the long-term memory. In other examples, system100may store all received ECAP information in the temporary memory first and then move that ECAP information representative of ECAP signals sensed between the initial time and the final time to the long-term memory. In some examples, system100may add a marker representative of the trigger signal to the at least portion of the ECAP information stored in the memory. The marker may indicate a time of the trigger signal with respect to sensed ECAP signals of the ECAP information. In addition, or alternatively, the marker may include information identifying the type of event (e.g., a user request, an above-threshold ECAP characteristic value, etc.) that caused the trigger signal. In this manner, system100may be configured to, or enable a user to, sort trigger signals or associated data based on the time that the trigger signal occurred (e.g., trigger signals within a time period of a specified time or within a specified time range). In some examples, system100may be configured to store data based on the type of trigger signal that caused the data to be stored (e.g., provide data associated with a user or system specified type of trigger signal). In some examples, IMD110may analyze the ECAP information stored in long-term memory. In other examples, IMD110may include communication circuitry configured to transmit the stored ECAP information to an external device, such as programmer150. IMD110may transmit any ECAP information stored in long-term memory during a communication session with the external device or at the request of the external device. The external device, such as programmer150, may include a display such that the external device is configured to present, via the display, one or more representations of the stored ECAP information. For example, the external device may display graphs of the waveforms from the ECAP information over time, characteristic values over time, markers associated with the trigger signals, or any other such representations of the ECAP information. FIG.2is a block diagram illustrating an example configuration of components of IMD200, in accordance with one or more techniques of this disclosure. IMD200may be an example of IMD110ofFIG.1. In the example shown inFIG.2, IMD200includes stimulation generation circuitry202, switch circuitry204, sensing circuitry206, communication circuitry208, processing circuitry210, storage device212, sensor(s)222, and power source224. In the example shown inFIG.2, storage device212stores therapy stimulation programs214and ECAP storage instructions216in separate memories within storage device212or separate areas within storage device212. Storage device212also includes temporary memory218(e.g., a rolling buffer) and long-term memory220, which may be on the same or physically separate memories. Each stored therapy stimulation program of therapy stimulation programs214defines values for a set of electrical stimulation parameters (e.g., a stimulation parameter set), such as a stimulation electrode combination, electrode polarity, current or voltage amplitude, pulse width, pulse rate, and pulse shape.
The ECAP storage instructions216include instructions regarding receiving and storing ECAP information, such as how long ECAP information is stored, when to store ECAP information in temporary memory218, when to store ECAP information in long-term memory220, when to change ECAP sensing rates, or any other aspects related to the sensing and storage of ECAP information. Temporary memory218may include a rolling buffer (e.g., first-in-first-out) memory that stores all ECAP information received from sensing circuitry206. Long-term memory220may include the ECAP information that processing circuitry210has selected, based on criteria such as when a trigger signal is received, for long-term storage and/or eventual transmission to another device (e.g., programmer150). Accordingly, in some examples, stimulation generation circuitry202generates electrical stimulation signals in accordance with the electrical stimulation parameters noted above. Other ranges of stimulation parameter values may also be useful and may depend on the target stimulation site within patient105. While stimulation pulses are described, stimulation signals may be of any form, such as continuous-time signals (e.g., sine waves) or the like. Switch circuitry204may include one or more switch arrays, one or more multiplexers, one or more switches (e.g., a switch matrix or other collection of switches), or other electrical circuitry configured to direct stimulation signals from stimulation generation circuitry202to one or more of electrodes232,234, or to direct sensed signals from one or more of electrodes232,234to sensing circuitry206. In other examples, stimulation generation circuitry202and/or sensing circuitry206may include sensing circuitry to direct signals to and/or from one or more of electrodes232,234, which may or may not also include switch circuitry204. Sensing circuitry206monitors signals from any combination of electrodes232,234. In some examples, sensing circuitry206includes one or more amplifiers, filters, and analog-to-digital converters. Sensing circuitry206may be used to sense physiological signals, such as ECAPs. In some examples, sensing circuitry206detects ECAPs from a particular combination of electrodes232,234. In some cases, the particular combination of electrodes for sensing ECAPs includes different electrodes than a set of electrodes232,234used to deliver stimulation pulses. Alternatively, in other cases, the particular combination of electrodes used for sensing ECAPs includes at least one of the same electrodes as a set of electrodes used to deliver stimulation pulses to patient105. Sensing circuitry206may provide signals to an analog-to-digital converter, for conversion into a digital signal for processing, analysis, storage, or output by processing circuitry210. Communication circuitry208supports wireless communication between IMD200and an external programmer (not shown inFIG.2) or another computing device under the control of processing circuitry210. Processing circuitry210of IMD200may receive, as updates to programs, values for various stimulation parameters such as amplitude and electrode combination, from the external programmer via communication circuitry208. Updates to the therapy stimulation programs214and ECAP test stimulation programs216may be stored within storage device212. Communication circuitry208in IMD200, as well as telemetry circuits in other devices and systems described herein, such as the external programmer, may accomplish communication by radiofrequency (RF) communication techniques.
In addition, communication circuitry208may communicate with an external medical device programmer (not shown inFIG.2) via proximal inductive interaction of IMD200with the external programmer. The external programmer may be one example of external programmer150ofFIG.1. Accordingly, communication circuitry208may send information to the external programmer on a continuous basis, at periodic intervals, or upon request from IMD110or the external programmer. Processing circuitry210may include any one or more of a microprocessor, a controller, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), discrete logic circuitry, or any other processing circuitry configured to provide the functions attributed to processing circuitry210herein, which may be embodied as firmware, hardware, software or any combination thereof. Processing circuitry210controls stimulation generation circuitry202to generate stimulation signals according to therapy stimulation programs214and ECAP test stimulation programs216stored in storage device212to apply stimulation parameter values specified by one or more of the programs, such as amplitude, pulse width, pulse rate, and pulse shape of each of the stimulation signals. In the example shown inFIG.2, the set of electrodes232includes electrodes232A,232B,232C, and232D, and the set of electrodes234includes electrodes234A,234B,234C, and234D. In other examples, a single lead may include all eight electrodes232and234along a single axial length of the lead. Processing circuitry210also controls stimulation generation circuitry202to generate and apply the stimulation signals to selected combinations of electrodes232,234. In some examples, stimulation generation circuitry202includes a switch circuit (instead of, or in addition to, switch circuitry204) that may couple stimulation signals to selected conductors within leads230, which, in turn, deliver the stimulation signals across selected electrodes232,234. Such a switch circuit may be a switch array, switch matrix, multiplexer, or any other type of switching circuit configured to selectively couple stimulation energy to selected electrodes232,234and to selectively sense bioelectrical neural signals of a spinal cord of the patient (not shown inFIG.2) with selected electrodes232,234. In other examples, however, stimulation generation circuitry202does not include a switch circuit and switch circuitry204does not interface between stimulation generation circuitry202and electrodes232,234. In these examples, stimulation generation circuitry202includes a plurality of pairs of voltage sources, current sources, voltage sinks, or current sinks connected to each of electrodes232,234such that each pair of electrodes has a unique signal circuit. In other words, in these examples, each of electrodes232,234is independently controlled via its own signal circuit (e.g., via a combination of a regulated voltage source and sink or regulated current source and sink), as opposed to switching signals between electrodes232,234. Electrodes232,234on respective leads230may be constructed of a variety of different designs. For example, one or both of leads230may include one or more electrodes at each longitudinal location along the length of the lead, such as one electrode at different perimeter locations around the perimeter of the lead at each of the locations A, B, C, and D.
In one example, the electrodes may be electrically coupled to stimulation generation circuitry202, e.g., via switch circuitry204and/or switching circuitry of the stimulation generation circuitry202, via respective wires that are straight or coiled within the housing of the lead and run to a connector at the proximal end of the lead. In another example, each of the electrodes of the lead may be electrodes deposited on a thin film. The thin film may include an electrically conductive trace for each electrode that runs the length of the thin film to a proximal end connector. The thin film may then be wrapped (e.g., a helical wrap) around an internal member to form the lead230. These and other constructions may be used to create a lead with a complex electrode geometry. Although sensing circuitry206is incorporated into a common housing with stimulation generation circuitry202and processing circuitry210inFIG.2, in other examples, sensing circuitry206may be in a separate housing from IMD200and may communicate with processing circuitry210via wired or wireless communication techniques. In some examples, one or more of electrodes232and234are suitable for sensing the ECAPs. For instance, electrodes232and234may sense the voltage amplitude of a portion of the ECAP signals, where the sensed voltage amplitude is a characteristic of the ECAP signal. Storage device212may be configured to store information within IMD200during operation. Storage device212may include a computer-readable storage medium or computer-readable storage device. In some examples, storage device212includes one or more of a short-term memory (e.g., temporary memory218) or a long-term memory (e.g., long-term memory220). Storage device212may include, for example, random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), ferroelectric random access memories (FRAM), magnetic discs, optical discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable memories (EEPROM). In some examples, storage device212is used to store data indicative of instructions for execution by processing circuitry210. As discussed above, storage device212is configured to store therapy stimulation programs214and ECAP storage instructions216. In some examples, stimulation generation circuitry202may be configured to deliver electrical stimulation therapy to patient105. The electrical stimulation therapy may, in some cases, include a plurality of informed pulses. Additionally, stimulation generation circuitry202may be configured to deliver a plurality of control pulses, where the plurality of control pulses is interleaved with at least some informed pulses of the plurality of informed pulses. Stimulation generation circuitry202may deliver the plurality of informed pulses and the plurality of control pulses to target tissue (e.g., spinal cord120) of patient105via electrodes232,234of leads230. By delivering such informed pulses and control pulses, stimulation generation circuitry202may evoke responsive ECAPs in the target tissue, the responsive ECAPs propagating through the target tissue before arriving back at electrodes232,234. In some examples, a different combination of electrodes232,234may sense responsive ECAPs than a combination of electrodes232,234that delivers informed pulses and a combination of electrodes232,234that delivers control pulses. Sensing circuitry206may be configured to detect the responsive ECAPs via electrodes232,234and leads230.
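One simple way to realize the "different combination of electrodes for sensing" case described above is to exclude the stimulating electrodes when choosing sensing electrodes. The selection rule and function name below are assumptions for illustration only.

```python
def choose_sense_electrodes(all_electrodes, stim_electrodes, n_sense=2):
    """Pick a sensing electrode combination distinct from the stimulating
    combination; fall back to reusing electrodes if too few remain."""
    candidates = [e for e in all_electrodes if e not in set(stim_electrodes)]
    if len(candidates) < n_sense:
        candidates = list(all_electrodes)  # reuse at least one stimulation electrode
    return tuple(candidates[:n_sense])

# Example with an 8-electrode lead stimulating on electrodes 0 and 1.
print(choose_sense_electrodes(range(8), stim_electrodes=(0, 1)))
```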
In other examples, stimulation generation circuitry202may be configured to deliver a plurality of control pulses, without any informed pulses, when control pulses also provide therapeutic effect for the patient. Processing circuitry210may, in some cases, direct sensing circuitry206to continuously monitor for ECAPs. In other cases, processing circuitry210may direct sensing circuitry206to monitor for ECAPs based on signals from sensor(s)222. For example, processing circuitry210may activate sensing circuitry206based on an activity level of patient105exceeding an activity level threshold (e.g., an accelerometer signal of acceleration sensor225rises above a threshold). Activating and deactivating sensing circuitry206may, in some examples, extend a battery life of power source224. In other examples, processing circuitry210may determine and store acceleration data derived from acceleration sensor225that represents posture states and/or activity of the patient. Processing circuitry210may correlate the acceleration data with the ECAP information for temporary storage or long-term storage in response to receiving the trigger signal described herein. In this manner, the acceleration data may represent posture states and/or activity of the patient that corresponds to the same time at which the ECAP information was collected for the patient. The acceleration data and the ECAP information may thus be temporally aligned to represent aspects of the patient at the same time. In addition, processing circuitry210may store a time stamp or other indicating data along with the ECAP information and/or the acceleration data in order to temporally align the ECAP information and acceleration data and/or other events in time. The combination of ECAP information and acceleration data may help a clinician or patient to identify movements that result in uncomfortable stimulation or ineffective therapy. In some examples, processing circuitry210determines if a characteristic of a first ECAP is greater than a threshold ECAP characteristic value. The threshold ECAP characteristic value may be stored in storage device212. In some examples, the characteristic of the first ECAP is a voltage amplitude of the first ECAP. In some such examples, the threshold ECAP characteristic value is selected from a range of approximately 10 microvolts (μV) to approximately 20 μV. In other examples, processing circuitry210determines if another characteristic (e.g., ECAP current amplitude, ECAP slew rate, area underneath the ECAP, ECAP slope, or ECAP duration) of the first ECAP is greater than the threshold ECAP characteristic value. If processing circuitry210determines that the characteristic of the first ECAP is greater than the threshold ECAP characteristic value, processing circuitry210is configured to activate a decrement mode, altering at least one parameter of each therapy pulse of a set of informed pulses delivered by IMD200after the first ECAP is sensed by sensing circuitry206. Additionally, while the decrement mode is activated, processing circuitry210may change at least one parameter of each control pulse of a set of control pulses delivered by IMD200after the first ECAP is sensed by sensing circuitry206. In some examples, the at least one parameter of the informed pulses and the at least one parameter of the control pulses adjusted by processing circuitry210during the decrement mode includes a stimulation current amplitude.
In some such examples, during the decrement mode, processing circuitry210decreases an electrical current amplitude of each consecutive stimulation pulse (e.g., each therapy pulse and each control pulse) delivered by IMD200. In other examples, the at least one parameter of the stimulation pulses adjusted by processing circuitry210during the decrement mode includes any combination of electrical current amplitude, electrical voltage amplitude, slew rate, pulse shape, pulse frequency, or pulse duration. In the example illustrated byFIG.2, the decrement mode is stored in storage device212as a part of control policy213. The decrement mode may include a list of instructions which enable processing circuitry210to adjust parameters of stimulation pulses according to a function. In some examples, when the decrement mode is activated, processing circuitry210decreases a parameter (e.g., an electrical current) of each consecutive therapy pulse and each consecutive control pulse according to a linear function. In other examples, when the decrement mode is activated, processing circuitry210decreases a parameter (e.g., an electrical current) of each consecutive therapy pulse and each consecutive control pulse according to an exponential function, a logarithmic function, or a piecewise function. While the decrement mode is activated, sensing circuitry206may continue to monitor responsive ECAPs. In turn, sensing circuitry206may detect ECAPs responsive to control pulses delivered by IMD200. Throughout the decrement mode, processing circuitry210may monitor ECAPs responsive to stimulation pulses. Processing circuitry210may determine if a characteristic of a second ECAP is less than the threshold ECAP characteristic value. The second ECAP may, in some cases, be the leading ECAP occurring after the first ECAP whose characteristic value is less than the threshold ECAP characteristic value. In other words, each ECAP recorded by sensing circuitry206between the first ECAP and the second ECAP has a characteristic value greater than or equal to the threshold ECAP characteristic value. Based on the characteristic of the second ECAP being less than the threshold ECAP characteristic value, processing circuitry210may deactivate the decrement mode and activate an increment mode, thus altering at least one parameter of each therapy pulse of a set of informed pulses delivered by IMD200after the second ECAP is sensed by sensing circuitry206. Additionally, while the increment mode is activated, processing circuitry210may change at least one parameter of each control pulse of a set of control pulses delivered by IMD200after the second ECAP is sensed by sensing circuitry206. In some examples, the at least one parameter of the informed pulses and the at least one parameter of the control pulses adjusted by processing circuitry210during the increment mode includes a stimulation current amplitude. In some such examples, during the increment mode, processing circuitry210increases an electrical current amplitude of each consecutive stimulation pulse (e.g., each therapy pulse and each control pulse) delivered by IMD200. In other examples, the at least one parameter of the stimulation pulses adjusted by processing circuitry210during the increment mode includes any combination of electrical current amplitude, electrical voltage amplitude, slew rate, pulse shape, pulse frequency, or pulse duration. In the example illustrated byFIG.2, the increment mode is stored in storage device212as a part of control policy213.
The increment mode may include a list of instructions which enable processing circuitry210to adjust parameters of stimulation pulses according to a function. In some examples, when the increment mode is activated, processing circuitry210increases a parameter (e.g., an electrical current) of each consecutive therapy pulse and each consecutive control pulse according to a linear function. In other examples, when the increment mode is activated, processing circuitry210increases a parameter (e.g., an electrical current) of each consecutive therapy pulse and each consecutive control pulse according to a non-linear function, such as an exponential function, a logarithmic function, or a piecewise function. While the increment mode is activated, sensing circuitry206may continue to monitor responsive ECAPs. In turn, sensing circuitry206may detect ECAPs responsive to control pulses delivered by IMD200. Processing circuitry210may complete the increment mode such that the one or more parameters of the stimulation pulses return to baseline parameter values of stimulation pulses delivered before processing circuitry210activates the decrement mode (e.g., before sensing circuitry206detects the first ECAP). By first decrementing and subsequently incrementing stimulation pulses in response to ECAPs exceeding a threshold ECAP characteristic value, processing circuitry210may prevent patient105from experiencing transient overstimulation or decrease a severity of transient overstimulation experienced by patient105. Although, in some examples, sensing circuitry206senses ECAPs which occur in response to control pulses delivered according to ECAP test stimulation programs216, in other examples, sensing circuitry206senses ECAPs which occur in response to informed pulses delivered according to therapy stimulation programs214. The techniques of this disclosure may enable IMD200to toggle the decrement mode and the increment mode using any combination of ECAPs corresponding to informed pulses and ECAPs corresponding to control pulses. Sensor(s)222may include one or more sensing elements that sense values of a respective patient parameter. As described, electrodes232and234may be the electrodes that sense the characteristic value of the ECAP. Sensor(s)222may include one or more accelerometers (such as acceleration sensor225), optical sensors, chemical sensors, temperature sensors, pressure sensors, or any other types of sensors. Sensor(s)222may output patient parameter values that may be used as feedback to control delivery of therapy. For example, sensor(s)222may indicate patient activity, and processing circuitry210may increase the frequency of control pulses and ECAP sensing in response to detecting increased patient activity. In one example, processing circuitry210may initiate control pulses and corresponding ECAP sensing in response to a signal from sensor(s)222indicating that patient activity has exceeded an activity threshold. Conversely, processing circuitry210may decrease the frequency of control pulses and ECAP sensing in response to detecting decreased patient activity. For example, in response to sensor(s)222no longer indicating that the sensed patient activity exceeds a threshold, processing circuitry210may suspend or stop delivery of control pulses and ECAP sensing. 
In this manner, processing circuitry210may dynamically deliver control pulses and sense ECAP signals based on patient activity to reduce power consumption of the system when the electrode-to-neuron distance is not likely to change and increase system response to ECAP changes when electrode-to-neuron distance is likely to change. IMD200may include additional sensors within the housing of IMD200and/or coupled via one of leads130or other leads. In addition, IMD200may receive sensor signals wirelessly from remote sensors via communication circuitry208, for example. In some examples, one or more of these remote sensors may be external to patient (e.g., carried on the external surface of the skin, attached to clothing, or otherwise positioned external to patient105). In some examples, signals from sensor(s)222indicate a position or body state (e.g., sleeping, awake, sitting, standing, or the like), and processing circuitry210may select target ECAP characteristic values according to the indicated position or body state. Power source224is configured to deliver operating power to the components of IMD200. Power source224may include a battery and a power generation circuit to produce the operating power. In some examples, the battery is rechargeable to allow extended operation. In some examples, recharging is accomplished through proximal inductive interaction between an external charger and an inductive charging coil within IMD200. Power source224may include any one or more of a plurality of different battery types, such as nickel cadmium batteries and lithium ion batteries. FIG.3is a block diagram illustrating an example configuration of components of external programmer300, in accordance with one or more techniques of this disclosure. External programmer300may be an example of external programmer150ofFIG.1. Although external programmer300may generally be described as a hand-held device, external programmer300may be a larger portable device or a more stationary device. In addition, in other examples, external programmer300may be included as part of an external charging device or include the functionality of an external charging device. As illustrated inFIG.3, external programmer300may include processing circuitry352, storage device354, user interface356, communication circuitry358, and power source360. Storage device354may store instructions that, when executed by processing circuitry352, cause processing circuitry352and external programmer300to provide the functionality ascribed to external programmer300throughout this disclosure. Each of these components, circuitry, or modules, may include electrical circuitry that is configured to perform some, or all of the functionality described herein. For example, processing circuitry352may include processing circuitry configured to perform the processes discussed with respect to processing circuitry352. In general, external programmer300includes any suitable arrangement of hardware, alone or in combination with software and/or firmware, to perform the techniques attributed to external programmer300, and processing circuitry352, user interface356, and communication circuitry358of external programmer300. In various examples, external programmer300may include one or more processors, such as one or more microprocessors, DSPs, ASICs, FPGAs, or any other equivalent integrated or discrete logic circuitry, as well as any combinations of such components. 
External programmer300also, in various examples, may include a storage device354, such as RAM, ROM, PROM, EPROM, EEPROM, flash memory, a hard disk, or a CD-ROM, that includes executable instructions for causing the one or more processors to perform the actions attributed to them. Moreover, although processing circuitry352and communication circuitry358are described as separate modules, in some examples, processing circuitry352and communication circuitry358are functionally integrated. In some examples, processing circuitry352and communication circuitry358correspond to individual hardware units, such as ASICs, DSPs, FPGAs, or other hardware units. Storage device354(e.g., a storage device) may store instructions that, when executed by processing circuitry352, cause processing circuitry352and external programmer300to provide the functionality ascribed to external programmer300throughout this disclosure. For example, storage device354may include instructions that cause processing circuitry352to obtain a parameter set from memory, select a spatial electrode movement pattern, or receive a user input and send a corresponding command to IMD200, or instructions for any other functionality. In addition, storage device354may include a plurality of programs, where each program includes a parameter set that defines stimulation pulses, such as control pulses and/or informed pulses. Storage device354may also store data received from a medical device (e.g., IMD110). For example, storage device354may store ECAP related data recorded at a sensing module of the medical device, and storage device354may also store data from one or more sensors of the medical device. This ECAP related data may include ECAP information transmitted from an implantable medical device, such as IMD110. User interface356may include a button or keypad, lights, a speaker for voice commands, or a display, such as a liquid crystal display (LCD), light-emitting diode (LED) display, or organic light-emitting diode (OLED) display. In some examples, the display includes a touch screen. User interface356may be configured to display any information related to the delivery of electrical stimulation, identified patient behaviors, sensed patient parameter values, patient behavior criteria, or any other such information. In addition, as described herein, processing circuitry352may control user interface356to present graphical representations of ECAP information transmitted by IMD110. User interface356may also receive user input via user interface356. The input may be, for example, in the form of pressing a button on a keypad or selecting an icon from a touch screen. The input may request starting or stopping electrical stimulation, the input may request a new spatial electrode movement pattern or a change to an existing spatial electrode movement pattern, or the input may request some other change to the delivery of electrical stimulation. Communication circuitry358may support wireless communication between the medical device and external programmer300under the control of processing circuitry352. Communication circuitry358may also be configured to communicate with another computing device via wireless communication techniques, or direct communication through a wired connection. In some examples, communication circuitry358provides wireless communication via an RF or proximal inductive medium. In some examples, communication circuitry358includes an antenna, which may take on a variety of forms, such as an internal or external antenna.
Examples of local wireless communication techniques that may be employed to facilitate communication between external programmer300and IMD110include RF communication according to the 802.11 or Bluetooth® specification sets or other standard or proprietary telemetry protocols. In this manner, other external devices may be capable of communicating with external programmer300without needing to establish a secure wireless connection. As described herein, communication circuitry358may be configured to transmit a spatial electrode movement pattern or other stimulation parameter values to IMD110for delivery of electrical stimulation therapy. In some examples, selection of stimulation parameters or therapy stimulation programs is transmitted to the medical device for delivery to a patient (e.g., patient105ofFIG.1). In other examples, the therapy may include medication, activities, or other instructions that patient105must perform themselves or that a caregiver must perform for patient105. In some examples, external programmer300provides visual, audible, and/or tactile notifications that indicate there are new instructions. In some examples, external programmer300requires receiving user input acknowledging that the instructions have been completed. According to the techniques of the disclosure, user interface356of external programmer300may also receive user input associated with a trigger signal to be transmitted to IMD110for storing ECAP information in long-term memory. For example, the user input may explicitly request ECAP information that is being recorded at that time. In other examples, the user input may indicate that an event has occurred, such as a patient movement (e.g., sneeze, cough, laugh, posture change, etc.) that caused an undesired stimulation sensation or loss of therapy or any other situation related to stimulation therapy (e.g., any sensation or loss of therapy that may be of interest to the patient or a clinician). In this manner, the patient may provide the user input any time the patient feels a sensation that is undesirable, “funny,” or in any way may be of interest to the patient. Processing circuitry352may cause communication circuitry358to transmit the trigger signal to IMD110. Processing circuitry352may then receive, via communication circuitry358, stored ECAP information from IMD110. Power source360is configured to deliver operating power to the components of external programmer300. Power source360may include a battery and a power generation circuit to produce the operating power. In some examples, the battery is rechargeable to allow extended operation. Recharging may be accomplished by electrically coupling power source360to a cradle or plug that is connected to an alternating current (AC) outlet. In addition, recharging may be accomplished through proximal inductive interaction between an external charger and an inductive charging coil within external programmer300. In other examples, traditional batteries (e.g., nickel cadmium or lithium ion batteries) may be used. In addition, external programmer300may be directly coupled to an alternating current outlet to operate. The architecture of external programmer300illustrated inFIG.3is shown as an example. The techniques as set forth in this disclosure may be implemented in the example external programmer300ofFIG.3, as well as other types of systems not described specifically herein. Nothing in this disclosure should be construed so as to limit the techniques of this disclosure to the example architecture illustrated byFIG.3.
FIG.4is a graph402of example evoked compound action potentials (ECAPs) sensed for respective stimulation pulses, in accordance with one or more techniques of this disclosure. As shown inFIG.4, graph402shows example ECAP signal404(dotted line) and ECAP signal406(solid line). In some examples, each of ECAP signals404and406are sensed from control pulses that were delivered from a guarded cathode, where the control pulses are bi-phasic pulses including an interphase interval between each positive and negative phase of the pulse. In some such examples, the guarded cathode includes stimulation electrodes located at the end of an 8-electrode lead (e.g., leads130ofFIG.1) while two sensing electrodes are provided at the other end of the 8-electrode lead. ECAP signal404illustrates the voltage amplitude sensed as a result from a sub-detection threshold stimulation pulse, or a stimulation pulse which results in no detectable ECAP. Peaks408of ECAP signal404are detected and represent the artifact of the delivered control pulse. However, no propagating signal is detected after the artifact in ECAP signal404because the control pulse was sub-detection stimulation threshold. In contrast to ECAP signal404, ECAP signal406(e.g., a waveform) represents the voltage amplitude detected from a supra-detection stimulation threshold control pulse. Peaks408of ECAP signal406are detected and represent the artifact of the delivered control pulse. After peaks408, ECAP signal406also includes peaks P1, N1, and P2, which are three typical peaks representative of propagating action potentials from an ECAP. The example duration of the artifact and peaks P1, N1, and P2 is approximately 1 millisecond (ms). When detecting the ECAP of ECAP signal406, different characteristics may be identified. For example, the characteristic of the ECAP may be the amplitude between N1 and P2. This N1-P2 amplitude may be easily detectable even if the artifact impinges on P1, a relatively large signal, and the N1-P2 amplitude may be minimally affected by electronic drift in the signal. In other examples, the characteristic of the ECAP used to control subsequent control pulses and/or informed pulses may be an amplitude of P1, N1, or P2 with respect to neutral or zero voltage. In some examples, the characteristic of the ECAP used to control subsequent control pulses or informed pulses is a sum of two or more of peaks P1, N1, or P2. In other examples, the characteristic of ECAP signal406may be the area under one or more of peaks P1, N1, and/or P2. In other examples, the characteristic of the ECAP may be a ratio of one of peaks P1, N1, or P2 to another one of the peaks. In some examples, the characteristic of the ECAP is a slope between two points in the ECAP signal, such as the slope between N1 and P2. In other examples, the characteristic of the ECAP may be the time between two points of the ECAP, such as the time between N1 and P2. The time between when the stimulation pulse is delivered and a point in the ECAP signal may be referred to as a latency of the ECAP and may indicate the types of fibers being captured by the stimulation pulse (e.g., a control pulse). ECAP signals with lower latency (i.e., smaller latency values) indicate a higher percentage of nerve fibers that have faster propagation of signals, whereas ECAP signals with higher latency (i.e., larger latency values) indicate a higher percentage of nerve fibers that have slower propagation of signals. 
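The following is a simplified, non-limiting Python sketch of how the ECAP characteristics discussed above (e.g., the N1-P2 amplitude, an area under a peak, and latency) could be computed from a sampled ECAP waveform. The peak-finding logic, the assumed sampling convention, and the 0.3 ms artifact window are illustrative assumptions only and are not the detection logic of any particular device described herein.

```python
# Illustrative ECAP characteristic extraction; thresholds and windows are assumptions.
import numpy as np


def ecap_characteristics(waveform: np.ndarray, sample_rate_hz: float,
                         artifact_end_s: float = 0.3e-3) -> dict:
    """Return simple characteristic values for one sensed ECAP signal.

    The waveform is assumed to be sampled starting at delivery of the eliciting
    pulse, so sample times double as latency measured from that pulse.
    """
    t = np.arange(waveform.size) / sample_rate_hz
    keep = t > artifact_end_s                          # ignore the stimulation artifact
    signal, ts = waveform[keep], t[keep]

    n1_idx = int(np.argmin(signal))                    # N1: most negative deflection
    p2_idx = n1_idx + int(np.argmax(signal[n1_idx:]))  # P2: next positive peak

    return {
        "n1_p2_amplitude": float(signal[p2_idx] - signal[n1_idx]),
        "latency_s": float(ts[n1_idx]),                # pulse delivery to N1
        "area_under_n1": float(np.trapz(np.abs(signal[:n1_idx + 1]), ts[:n1_idx + 1])),
    }
```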
Latency may also refer to the time between when an electrical feature is detected at one electrode and when it is detected again at a different electrode. This time, or latency, is inversely proportional to the conduction velocity of the nerve fibers. Other characteristics of the ECAP signal may be used in other examples. The amplitude of the ECAP signal increases with increased amplitude of the control pulse, as long as the pulse amplitude is greater than a threshold such that nerves depolarize and propagate the signal. The target ECAP characteristic (e.g., the target ECAP amplitude) may be determined from the ECAP signal detected from a control pulse when informed pulses are determined to deliver effective therapy to patient105. The ECAP signal thus is representative of the distance between the stimulation electrodes and the nerves appropriate for the stimulation parameter values of the informed pulses delivered at that time. Therefore, IMD110may attempt to use detected changes to the measured ECAP characteristic value to change therapy pulse parameter values and maintain the target ECAP characteristic value during therapy pulse delivery. FIG.5Ais a timing diagram500A illustrating an example of electrical stimulation pulses, respective stimulation signals, and respective sensed ECAPs, in accordance with one or more techniques of this disclosure. For convenience,FIG.5Ais described with reference to IMD200ofFIG.2. As illustrated, timing diagram500A includes first channel502, a plurality of stimulation pulses504A-504N (collectively “stimulation pulses504”), second channel506, a plurality of respective ECAPs508A-508N (collectively “ECAPs508”), and a plurality of stimulation signals509A-509N (collectively “stimulation signals509”). In some examples, stimulation pulses504may represent control pulses which are configured to elicit ECAPs508that are detectible by IMD200, but this is not required. Stimulation pulses504may represent any type of pulse that is deliverable by IMD200. In the example ofFIG.5A, IMD200can deliver therapy with control pulses instead of, or without, informed pulses. First channel502is a time/voltage (and/or current) graph indicating the voltage (or current) of at least one electrode of electrodes232,234. In one example, the stimulation electrodes of first channel502may be located on the opposite side of the lead as the sensing electrodes of second channel506. Stimulation pulses504may be electrical pulses delivered to the spinal cord of the patient by at least one of electrodes232,234, and stimulation pulses504may be balanced biphasic square pulses with an interphase interval. In other words, each of stimulation pulses504is shown with a negative phase and a positive phase separated by an interphase interval. For example, a stimulation pulse504may have a negative voltage for the same amount of time and amplitude that it has a positive voltage. It is noted that the negative voltage phase may be before or after the positive voltage phase. Stimulation pulses504may be delivered according to test stimulation programs216stored in storage device212of IMD200, and ECAP test stimulation programs may be updated according to user input via an external programmer and/or may be updated according to a signal from sensor(s)222. In one example, stimulation pulses504may have a pulse width of less than approximately 300 microseconds (e.g., the total time of the positive phase, the negative phase, and the interphase interval is less than 300 microseconds). 
In another example, stimulation pulses504may have a pulse width of approximately 100 μs for each phase of the bi-phasic pulse. As illustrated inFIG.5A, stimulation pulses504may be delivered via channel502. Delivery of stimulation pulses504may be delivered by leads230in a guarded cathode electrode combination. For example, if leads230are linear 8-electrode leads, a guarded cathode combination is a central cathodic electrode with anodic electrodes immediately adjacent to the cathodic electrode. Second channel506is a time/voltage (and/or current) graph indicating the voltage (or current) of at least one electrode of electrodes232,234. In one example, the electrodes of second channel506may be located on the opposite side of the lead as the electrodes of first channel502. ECAPs508may be sensed at electrodes232,234from the spinal cord of the patient in response to stimulation pulses504. ECAPs508are electrical signals which may propagate along a nerve away from the origination of stimulation pulses504. In one example, ECAPs508are sensed by different electrodes than the electrodes used to deliver stimulation pulses504. As illustrated inFIG.5A, ECAPs508may be recorded on second channel506. Stimulation signals509A,509B, and509N may be sensed by leads230and sensing circuitry206and may be sensed during the same period of time as the delivery of stimulation pulses504. Since the stimulation signals may have a greater amplitude and intensity than ECAPs508, any ECAPs arriving at IMD200during the occurrence of stimulation signals509might not be adequately sensed by sensing circuitry206of IMD200. However, ECAPs508may be sufficiently sensed by sensing circuitry206because each ECAP508, or at least a portion of ECAP508used as feedback for stimulation pulses504, falls after the completion of each stimulation pulse504. As illustrated inFIG.5A, stimulation signals509and ECAPs508may be recorded on channel506. In some examples, ECAPs508may not follow respective stimulation signals509when ECAPs are not elicited by stimulation pulses504or the amplitude of ECAPs is too low to be detected (e.g., below the detection threshold). FIG.5Bis a timing diagram500B illustrating one example of electrical stimulation pulses, respective stimulation signals, and respective sensed ECAPs, in accordance with one or more techniques of this disclosure. For convenience,FIG.5Bis described with reference to IMD200ofFIG.2. As illustrated, timing diagram500B includes first channel510, a plurality of control pulses512A-512N (collectively “control pulses512”), second channel520, a plurality of informed pulses524A-524N (collectively “informed pulses524”) including passive recharge phases526A-526N (collectively “passive recharge phases526”), third channel530, a plurality of respective ECAPs536A-536N (collectively “ECAPs536”), and a plurality of stimulation signals538A-538N (collectively “stimulation signals538”). First channel510is a time/voltage (and/or current) graph indicating the voltage (or current) of at least one electrode of electrodes232,234. In one example, the stimulation electrodes of first channel510may be located on the opposite side of the lead as the sensing electrodes of third channel530. Control pulses512may be electrical pulses delivered to the spinal cord of the patient by at least one of electrodes232,234, and control pulses512may be balanced biphasic square pulses with an interphase interval. In other words, each of control pulses512is shown with a negative phase and a positive phase separated by an interphase interval. 
For example, a control pulse512may have a negative voltage for the same amount of time that it has a positive voltage. It is noted that the negative voltage phase may be before or after the positive voltage phase. Control pulses512may be delivered according to ECAP test stimulation programs stored in storage device212of IMD200, and ECAP test stimulation programs may be updated according to user input via an external programmer and/or may be updated according to a signal from sensor(s)222. In one example, control pulses512may have a pulse width of 300 microseconds (e.g., the total time of the positive phase, the negative phase, and the interphase interval is 300 microseconds). In another example, control pulses512may have a pulse width of approximately 100 μs for each phase of the bi-phasic pulse. As illustrated inFIG.5B, control pulses512may be delivered via first channel510. Delivery of control pulses512may be delivered by leads230in a guarded cathode electrode combination. For example, if leads230are linear 8-electrode leads, a guarded cathode combination is a central cathodic electrode with anodic electrodes immediately adjacent to the cathodic electrode. Second channel520is a time/voltage (and/or current) graph indicating the voltage (or current) of at least one electrode of electrodes232,234for the informed pulses. In one example, the electrodes of second channel520may partially or fully share common electrodes with the electrodes of first channel510and third channel530. Informed pulses524may also be delivered by the same leads230that are configured to deliver control pulses512. Informed pulses524may be interleaved with control pulses512, such that the two types of pulses are not delivered during overlapping periods of time. However, informed pulses524may or may not be delivered by exactly the same electrodes that deliver control pulses512. Informed pulses524may be monophasic pulses with pulse widths of greater than approximately 300 μs and less than approximately 1000 μs. In fact, informed pulses524may be configured to have longer pulse widths than control pulses512. As illustrated inFIG.5B, informed pulses524may be delivered on second channel520. Informed pulses524may be configured for passive recharge. For example, each informed pulse524may be followed by a passive recharge phase526to equalize charge on the stimulation electrodes. Unlike a pulse configured for active recharge, where remaining charge on the tissue following a stimulation pulse is instantly removed from the tissue by an opposite applied charge, passive recharge allows tissue to naturally discharge to some reference voltage (e.g., ground or a rail voltage) following the termination of the therapy pulse. In some examples, the electrodes of the medical device may be grounded at the medical device body. In this case, following the termination of informed pulse524, the charge on the tissue surrounding the electrodes may dissipate to the medical device, creating a rapid decay of the remaining charge at the tissue following the termination of the pulse. This rapid decay is illustrated in passive recharge phases526. Passive recharge phase526may have a duration in addition to the pulse width of the preceding informed pulse524. In other examples (not pictured inFIG.5B), informed pulses524may be bi-phasic pulses having a positive and negative phase (and, in some examples, an interphase interval between each phase) which may be referred to as pulses including active recharge. 
An informed pulse that is a bi-phasic pulse may or may not have a following passive recharge phase. Third channel530is a time/voltage (and/or current) graph indicating the voltage (or current) of at least one electrode of electrodes232,234. In one example, the electrodes of third channel530may be located on the opposite side of the lead as the electrodes of first channel510. ECAPs536may be sensed at electrodes232,234from the spinal cord of the patient in response to control pulses512. ECAPs536are electrical signals which may propagate along a nerve away from the origination of control pulses512. In one example, ECAPs536are sensed by different electrodes than the electrodes used to deliver control pulses512. As illustrated inFIG.5B, ECAPs536may be recorded on third channel530. Stimulation signals538A,538B, and538N may be sensed by leads230and may be sensed during the same period of time as the delivery of control pulses512and informed pulses524. Since the stimulation signals may have a greater amplitude and intensity than ECAPs536, any ECAPs arriving at IMD200during the occurrence of stimulation signals538may not be adequately sensed by sensing circuitry206of IMD200. However, ECAPs536may be sufficiently sensed by sensing circuitry206because each ECAP536falls after the completion of each control pulse512and before the delivery of the next informed pulse524. As illustrated inFIG.5B, stimulation signals538and ECAPs536may be recorded on channel530. FIG.6Ais a timing diagram600A illustrating an example of electrical stimulation pulses, respective stimulation signals, and respective sensed ECAPs, in accordance with one or more techniques of this disclosure. For convenience,FIG.6Ais described with reference to IMD200ofFIG.2. As illustrated, timing diagram600A includes first channel602, a plurality of stimulation pulses604A-604N (collectively “stimulation pulses604”), second channel606, a plurality of respective ECAPs608A-608N (collectively “ECAPs608”), and a plurality of stimulation signals609A-609N (collectively “stimulation signals609”). In some examples, stimulation pulses604may represent control pulses which are configured to elicit ECAPs608that are detectible by IMD200, but this is not required. Stimulation pulses604may represent any type of pulse that is deliverable by IMD200. In the example ofFIG.6A, IMD200can deliver therapy with control pulses instead of, or without, informed pulses. Timing diagram600A ofFIG.6Amay be substantially the same as timing diagram500A ofFIG.5Aexcept that stimulation pulse604A and stimulation pulse604N do not evoke an ECAP that is detectible by IMD200. Although stimulation pulse604B emits ECAP608B, which is detectible by IMD200, it may be the case that IMD200does not sense enough detectible ECAPs for therapy determination in the example ofFIG.6A. As such, IMD200may determine one or more characteristics of stimulation signals609in order to determine one or more parameters of upcoming stimulation pulses following stimulation pulse604N. For example, IMD200may determine an amplitude of at least a portion of each stimulation signal of stimulation signals609and determine the one or more parameters of the upcoming stimulation pulses based on the determined amplitudes. Although stimulation signals609are illustrated as square pulses, stimulation signals609may include other shapes and/or waveforms, in some examples. In some examples, each stimulation signal of stimulation signals609may include two or more phases. 
Processing circuitry210of IMD200may analyze the two or more phases of stimulation signals609in order to determine therapy. FIG.6Bis a timing diagram600B illustrating another example of electrical stimulation pulses, respective stimulation signals, and respective sensed ECAPs, in accordance with one or more techniques of this disclosure. For convenience,FIG.6Bis described with reference to IMD200ofFIG.2. As illustrated, timing diagram600B includes first channel610, a plurality of control pulses612A-612N (collectively “control pulses612”), second channel620, a plurality of informed pulses624A-624N (collectively “informed pulses624”) including passive recharge phases626A-626N (collectively “passive recharge phases626”), third channel630, a plurality of respective ECAPs636A-636N (collectively “ECAPs636”), and a plurality of stimulation signals638A-638N (collectively “stimulation signals638”). Timing diagram600B ofFIG.6Bmay be substantially the same as timing diagram500B ofFIG.5Bexcept that control pulse612A and control pulse612N do not evoke an ECAP that is detectible by IMD200. Although control pulse612B emits ECAP636B, which is detectible by IMD200, it may be the case that IMD200does not sense enough detectible ECAPs for therapy determination in the example ofFIG.6B. As such, IMD200may determine one or more characteristics of stimulation signals638in order to determine one or more parameters of upcoming stimulation pulses following control pulse612N. For example, IMD200may determine an amplitude of at least a portion of each stimulation signal of stimulation signals638and determine the one or more parameters of the upcoming stimulation pulses based on the determined amplitudes. Although stimulation signals638are illustrated as square pulses, stimulation signals638may include other shapes and/or waveforms, in some examples. In some examples, each stimulation signal of stimulation signals638may include two or more phases. Processing circuitry210of IMD200may analyze the two or more phases of stimulation signals638in order to determine therapy. FIG.7is a timing diagram700illustrating another example of electrical stimulation pulses, respective stimulation signals, and respective ECAPs, in accordance with one or more techniques of this disclosure. For convenience,FIG.7is described with reference to IMD200ofFIG.2. As illustrated, timing diagram700includes first channel710, a plurality of control pulses712A-712N (collectively “control pulses712”), second channel720, a plurality of informed pulses724A-724B (collectively “informed pulses724”) including passive recharge phases726A-726B (collectively “passive recharge phases726”), third channel730, a plurality of respective ECAPs736A-736N (collectively “ECAPs736”), and a plurality of stimulation interference signals738A-738N (collectively “stimulation interference signals738”).FIG.7may be substantially similar toFIG.5B, except for the differences detailed below. Two or more (e.g., two) control pulses712may be delivered during each time event (e.g., window) of a plurality of time events, and each time event represents a time between two consecutive informed pulses724. For example, during each time event, a first control pulse may be directly followed by a first respective ECAP, and subsequent to the completion of the first respective ECAP, a second control pulse may be directly followed by a second respective ECAP. Informed pulses may commence following the second respective ECAP. 
In other examples not illustrated here, three or more control pulses712may be delivered, and respective ECAP signals sensed, during each time event of the plurality of time events. FIG.8is a timing diagram800illustrating another example of electrical stimulation pulses, respective stimulation signals, and respective ECAPs, in accordance with one or more techniques of this disclosure. For convenience,FIG.8is described with reference to IMD200ofFIG.2. As illustrated, timing diagram800includes first channel810, a plurality of control pulses812A-812N (collectively “control pulses812”), second channel820, a plurality of informed pulses824A-824B (collectively “informed pulses824”) including passive recharge phases826A-826B (collectively “passive recharge phases826”), third channel830, respective ECAPs836B (collectively “ECAPs836”), and a plurality of stimulation interference signals838A-838N (collectively “stimulation interference signals838”). Timing diagram800ofFIG.8may be substantially the same as timing diagram700ofFIG.7except that control pulses812A and control pulses812N do not evoke ECAPs that are detectible by IMD200. Although control pulses812B emit ECAPs836B, which are detectible by IMD200, it may be the case that IMD200does not sense enough detectible ECAPs for therapy determination in the example ofFIG.8. As such, IMD200may determine one or more characteristics of stimulation signals838in order to determine one or more parameters of upcoming stimulation pulses following control pulses812N. FIG.9is a flow diagram illustrating an example operation for controlling stimulation based on one or more sensed ECAPs, in accordance with one or more techniques of this disclosure. For convenience,FIG.9is described with respect to IMD200ofFIG.2. However, the techniques ofFIG.9may be performed by different components of IMD200or by additional or alternative medical devices. Stimulation generation circuitry202of IMD200may deliver electrical stimulation therapy to a patient (e.g., patient105). In order to control the electrical stimulation therapy, processing circuitry210may direct the delivery of at least some stimulation pulses according to therapy stimulation programs214of storage device212, where the electrical stimulation therapy may include a plurality of control pulses and/or informed pulses. Informed pulses may, in some cases, produce ECAPs detectable by IMD200. However, in other cases, an electrical polarization of an informed pulse may interfere with sensing of an ECAP responsive to the informed pulse. In some examples, to evoke ECAPs which are detectable by IMD200, stimulation generation circuitry202delivers a plurality of control pulses, the plurality of control pulses being interleaved with at least some informed pulses of the plurality of informed pulses. Processing circuitry210may control the delivery of control pulses according to ECAP test stimulation programs or ECAP storage instructions216. Since the control pulses may be interleaved with the informed pulses, sensing circuitry206of IMD200may detect a plurality of ECAPs, where sensing circuitry206is configured to detect each ECAP of the plurality of ECAPs after a control pulse of the plurality of control pulses and prior to a subsequent informed pulse of the plurality of informed pulses. In this way, IMD200may evoke the plurality of ECAPs in target tissue by delivering control pulses without the informed pulses obstructing IMD200from sensing the ECAPs. 
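The interleaving just described may be summarized, for illustration only, by the following Python sketch, which builds a simple schedule in which each informed pulse is preceded by one or more control pulses and their associated ECAP sensing windows. The durations and names are assumptions made for the sketch and are not the timing actually used by any device described herein.

```python
# Illustrative interleaving of control pulses, ECAP sensing windows, and informed pulses.
from dataclasses import dataclass
from typing import List


@dataclass
class ScheduledEvent:
    t_ms: float
    kind: str  # "control_pulse", "ecap_sense_window", or "informed_pulse"


def build_interleaved_schedule(n_cycles: int,
                               control_width_ms: float = 0.3,
                               ecap_window_ms: float = 1.0,
                               informed_width_ms: float = 1.0,
                               control_pulses_per_cycle: int = 1) -> List[ScheduledEvent]:
    """One cycle = control pulse(s) with sensing window(s), then an informed pulse."""
    events: List[ScheduledEvent] = []
    t = 0.0
    for _ in range(n_cycles):
        for _ in range(control_pulses_per_cycle):       # e.g., two per window, as in FIG. 7
            events.append(ScheduledEvent(t, "control_pulse"))
            t += control_width_ms
            events.append(ScheduledEvent(t, "ecap_sense_window"))
            t += ecap_window_ms                         # the ~1 ms ECAP falls before the informed pulse
        events.append(ScheduledEvent(t, "informed_pulse"))
        t += informed_width_ms
    return events


schedule = build_interleaved_schedule(n_cycles=3, control_pulses_per_cycle=2)
```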
As illustrated inFIG.9, processing circuitry210directs stimulation generation circuitry202to deliver a control pulse (902). Stimulation generation circuitry202may deliver the control pulse to target tissue of patient105via any combination of electrodes232,234of leads230. In some examples, the control pulse may include a balanced, bi-phasic square pulse that employs an active recharge phase. However, in other examples, the control pulse may include a monophasic pulse followed by a passive recharge phase. In other examples, the control pulse may include an imbalanced bi-phasic portion and a passive recharge portion. Although not necessary, a bi-phasic control pulse may include an interphase interval between the positive and negative phase to promote propagation of the nerve impulse in response to the first phase of the bi-phasic pulse. The control pulse may have a pulse width of 300 μs, such as a bi-phasic pulse with each phase having a duration of approximately 100 μs. After delivering the control pulse, IMD200attempts to detect an ECAP (904). For example, sensing circuitry206may monitor signals from any combination of electrodes232,234of leads230. In some examples, sensing circuitry206detects ECAPs from a particular combination of electrodes232,234. In some cases, the particular combination of electrodes for sensing ECAPs includes different electrodes than a set of electrodes232,234used to deliver stimulation pulses. Alternatively, in other cases, the particular combination of electrodes used for sensing ECAPs includes at least one of the same electrodes as a set of electrodes used to deliver stimulation pulses to patient105. In some examples, the particular combination of electrodes used for sensing ECAPs may be located on an opposite side of leads230from the particular combination of electrodes used to deliver stimulation pulses. IMD200may detect an ECAP responsive to the control pulse. IMD200may measure one or more characteristics of the responsive ECAP, such as ECAP amplitude, ECAP duration, peak-to-peak durations, or any combination thereof. For example, to measure an amplitude of the ECAP, IMD200may determine a voltage difference between an N1 ECAP peak and a P2 ECAP peak. Processing circuitry210may store the ECAP signals, characteristic value, or any other related data as ECAP information. At block906, processing circuitry210determines if the ECAP amplitude of the responsive ECAP is greater than an ECAP amplitude threshold. If the ECAP amplitude is greater than the ECAP amplitude threshold (“YES” branch of block906), processing circuitry210activates/continues a decrement mode (908) in IMD200. For example, if the decrement mode is already “turned on” in IMD200when processing circuitry determines that the ECAP amplitude is greater than the ECAP amplitude threshold, then processing circuitry210maintains IMD200in the decrement mode. If the decrement mode is “turned off” in IMD200when processing circuitry determines that the ECAP amplitude is greater than the ECAP amplitude threshold, then processing circuitry210activates the decrement mode. In some examples, the decrement mode may be stored in storage device212as a part of control policy213. 
The decrement mode may be a set of instructions which causes IMD200to decrease one or more parameter values of each consecutive informed pulse from a respective predetermined value (e.g., a value determined by a stimulation program) and decrease one or more parameter values of each consecutive control pulse from a respective predetermined value (e.g., a value determined by a stimulation program). In other words, the parameter values may be reduced from the values that IMD200would use to define respective pulses in the absence of the ECAP amplitude exceeding the threshold ECAP amplitude. For example, when the decrement mode is activated, processing circuitry210may decrease an electric current amplitude of each consecutive informed pulse delivered by IMD200and decrease an electric current amplitude of each consecutive control pulse delivered by IMD200. After processing circuitry210activates/continues the decrement mode, the example operation may return to block902and IMD200may deliver another control pulse. If the ECAP amplitude is not greater than the ECAP amplitude threshold (“NO” branch of block906), processing circuitry210determines whether the decrement mode is activated in IMD200(910). If the decrement mode is activated in IMD200(“YES” branch of block910), processing circuitry210deactivates the decrement mode and activates an increment mode (912) in IMD200. In some examples, the increment mode may be stored in storage device212as a part of control policy213. The increment mode may be a set of instructions which causes IMD200to increase one or more parameter values of each consecutive informed pulse and increase one or more parameter values of each consecutive control pulse. For example, when the increment mode is activated, processing circuitry210may increase an electric current amplitude of each consecutive informed pulse delivered by IMD200and increase an electric current amplitude of each consecutive control pulse delivered by IMD200. After processing circuitry210deactivates the decrement mode and activates the increment mode, the example operation may return to block902and IMD200may deliver another control pulse. When the example operation ofFIG.9arrives at block910and the decrement mode is not activated in IMD200(“NO” branch of block910), processing circuitry210determines whether the increment mode is activated (914) in IMD200. If the increment mode is activated in IMD200(“YES” branch of block914), processing circuitry210may complete the increment mode (916) in IMD200. In some examples, to complete the increment mode, processing circuitry210may increase the electric current amplitude of each consecutive informed pulse delivered by IMD200and increase the electric current amplitude of each consecutive control pulse delivered by IMD200until the pulse amplitude of the stimulation pulses reach an electric current amplitude (e.g., a predetermined value that may be set by the stimulation program selected for therapy) of the stimulation pulses delivered by IMD200prior to the activation of the decrement mode. In this manner, the process may not be referred to as a fully closed-loop system. Put another way, IMD200may monitor the high end (ECAP amplitude threshold) for adjusting stimulation pulses instead of monitoring any low end of the sensed ECAP amplitude. For example, IMD200may continue to increase the current amplitude of consecutive informed pulses without any feedback from the sensed ECAP, unless the sensed ECAP value again exceeds the ECAP amplitude threshold. 
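For illustration only, the decision flow of blocks906through918may be summarized by the following Python sketch of a simple controller. The step size, the immediate mode transitions, and the class and variable names are assumptions made for the sketch; they are not required by, and do not limit, the techniques of this disclosure.

```python
# Illustrative controller mirroring blocks 906-918; step sizes are assumptions.
class EcapAmplitudeController:
    def __init__(self, ecap_threshold_uv: float, step_ma: float = 0.1):
        self.ecap_threshold_uv = ecap_threshold_uv
        self.step_ma = step_ma
        self.decrement_active = False
        self.increment_active = False

    def update(self, ecap_amplitude_uv: float, amplitude_ma: float,
               predetermined_ma: float) -> float:
        """Return the amplitude to use for the next control and informed pulses."""
        if ecap_amplitude_uv > self.ecap_threshold_uv:       # block 906, "YES" branch
            self.decrement_active, self.increment_active = True, False
            return max(amplitude_ma - self.step_ma, 0.0)     # block 908: activate/continue decrement
        if self.decrement_active:                            # block 910, "YES" branch
            self.decrement_active, self.increment_active = False, True
            return amplitude_ma + self.step_ma               # block 912: switch to increment
        if self.increment_active:                            # block 914, "YES" branch
            amplitude_ma = min(amplitude_ma + self.step_ma, predetermined_ma)
            if amplitude_ma >= predetermined_ma:             # block 916: increment complete
                self.increment_active = False
            return amplitude_ma
        return amplitude_ma                                  # block 918: maintain stimulation
```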
After processing circuitry210completes the increment mode, the example operation may return to block902and IMD200may deliver another control pulse. When the example operation ofFIG.9arrives at block914and the increment mode is not activated in IMD200(“NO” branch of block914), processing circuitry210maintains stimulation (918) in IMD200. AlthoughFIG.9describes adjusting both informed pulses and control pulses, the technique ofFIG.9may also apply when IMD200is delivering only control pulses (e.g., without informed pulses) to the patient for therapy. FIG.10illustrates a voltage/current/time graph1000which plots control pulse current amplitude1002, informed pulse current amplitude1004, ECAP voltage amplitude1008, and second ECAP voltage amplitude1010as a function of time, in accordance with one or more techniques of this disclosure. Additionally,FIG.10illustrates a threshold ECAP amplitude1006. For convenience,FIG.10is described with respect to IMD200ofFIG.2. However, the techniques ofFIG.10may be performed by different components of IMD200or by additional or alternative medical devices. Voltage/current/time graph1000illustrates a relationship between sensed ECAP voltage amplitude and stimulation current amplitude. For example, control pulse current amplitude1002and informed pulse current amplitude1004are plotted alongside ECAP voltage amplitude1008as a function of time, thus showing how stimulation current amplitude changes relative to ECAP voltage amplitude. In some examples, IMD200delivers a plurality of control pulses and a plurality of informed pulses at control pulse current amplitude1002and informed pulse current amplitude1004, respectively. Initially, IMD200may deliver a first set of control pulses, where IMD200delivers the first set of control pulses at current amplitude I2. Additionally, IMD200may deliver a first set of informed pulses, where IMD200delivers the first set of informed pulses at current amplitude I1. I1 and I2 may be referred to as a predetermined value for the amplitude of respective control and informed pulses. This predetermined value may be a programmed value or otherwise selected value that a stimulation program has selected to at least partially define stimulation pulses to the patient in the absence of transient conditions (e.g., when the ECAP amplitude is below a threshold ECAP value). The first set of control pulses and the first set of informed pulses may be delivered prior to time T1. In some examples, I1 is 8 milliamps (mA) and I2 is 4 mA. Although control pulse current amplitude1002is shown as greater than informed pulse current amplitude1004, control pulse current amplitude1002may be less than or the same as informed pulse current amplitude1004in other examples. While delivering the first set of control pulses and the first set of informed pulses, IMD200may record ECAP voltage amplitude1008. During dynamic and transient conditions which occur in patient105such as coughing, sneezing, laughing, Valsalva maneuvers, leg lifting, cervical motions, or deep breathing, ECAP voltage amplitude1008may increase if control pulse current amplitude1002and informed pulse current amplitude1004are held constant. This increase in ECAP voltage amplitude1008may be caused by a reduction in the distance between the electrodes and nerves. For example, as illustrated inFIG.10, ECAP voltage amplitude1008may increase prior to time T1 while stimulation current amplitude is held constant. 
An increasing ECAP voltage amplitude1008may indicate that patient105is at risk of experiencing transient overstimulation due to the control pulses and the informed pulses delivered by IMD200. To prevent patient105from experiencing transient overstimulation, IMD200may decrease control pulse current amplitude1002and informed pulse current amplitude1004in response to ECAP voltage amplitude1008exceeding the threshold ECAP amplitude1006. For example, if IMD200senses an ECAP having an ECAP voltage amplitude1008meeting or exceeding threshold ECAP amplitude1006, as illustrated inFIG.10at time T1, IMD200may enter a decrement mode where control pulse current amplitude1002and informed pulse current amplitude1004are decreased. In some examples, the threshold ECAP amplitude1006is selected from a range of approximately 5 microvolts (μV) to approximately 30 μV, or from a range of approximately 10 microvolts (μV) to approximately 20 μV. For example, the threshold ECAP amplitude1006is 15 μV. In other examples, the threshold ECAP amplitude1006is less than or equal to 5 μV or greater than or equal to 30 μV. IMD200may respond relatively quickly to the ECAP voltage amplitude1008exceeding the threshold ECAP amplitude1006. For example, IMD200may be configured to detect threshold-exceeding ECAP amplitudes within 20 milliseconds (ms). If IMD200delivers control pulses at a frequency of 50 Hz, the period of time for a single sample that includes delivering the control pulse and detecting the resulting ECAP signal may be 20 ms or less. However, since an ECAP signal may occur within one or two ms of delivery of the control pulse, IMD200may be configured to detect an ECAP signal exceeding the threshold ECAP amplitude in less than 10 ms. For transient conditions, such as a patient coughing or sneezing, these sampling periods would be sufficient to identify ECAP amplitudes exceeding the threshold and to make a responsive reduction in subsequent pulse amplitudes before the ECAP amplitude would have reached higher levels that may have been uncomfortable for the patient. The decrement mode may, in some cases, be stored in storage device212of IMD200as a part of control policy213. In the example illustrated inFIG.10, the decrement mode is executed by IMD200over a second set of control pulses and a second set of informed pulses which occur between time T1 and time T2. In some examples, to execute the decrement mode, IMD200decreases the control pulse current amplitude1002of each control pulse of the second set of control pulses according to a first function with respect to time. In other words, IMD200decreases each consecutive control pulse of the second set of control pulses proportionally to an amount of time elapsed since a previous control pulse. Additionally, during the decrement mode, IMD200may decrease the informed pulse current amplitude1004of each informed pulse of the second set of informed pulses according to a second function with respect to time. Although linear first and second functions are shown, the first and/or second function may be non-linear, such as logarithmic (e.g., the rate of change decreases over time), exponential (e.g., the rate of change increases over time), parabolic, step-wise, multiple different functions, etc., in other examples. During a period of time in which IMD200is operating in the decrement mode (e.g., time interval T2-T1), ECAP voltage amplitude1008of ECAPs sensed by IMD200may be greater than or equal to threshold ECAP amplitude1006. 
In the example illustrated inFIG.10, IMD200may sense an ECAP at time T2, where the ECAP has an ECAP voltage amplitude1008that is less than threshold ECAP amplitude1006. The ECAP sensed at time T2 may, in some cases, be the first ECAP sensed by IMD200with a below-threshold amplitude since IMD200began the decrement mode at time T1. Based on sensing the ECAP at time T2, IMD200may deactivate the decrement mode and activate an increment mode. The increment mode may, in some cases, be stored in storage device212of IMD200as a part of control policy213. IMD200may execute the increment mode over a third set of control pulses and a third set of informed pulses which occur between time T2 and time T3. In some examples, to execute the increment mode, IMD200increases the control pulse current amplitude1002of each control pulse of the third set of control pulses according to a third function with respect to time. In other words, IMD200increases each consecutive control pulse of the third set of control pulses proportionally to an amount of time elapsed since a previous control pulse. Additionally, during the increment mode, IMD200may increase the informed pulse current amplitude1004of each informed pulse of the third set of informed pulses according to a fourth function with respect to time. As shown inFIG.10, IMD200is configured to decrease amplitude at a faster rate than it increases amplitude after ECAP voltage amplitude1008falls below threshold ECAP amplitude1006. In other examples, the rate of change during the decrement mode and increment mode may be similar. In other examples, IMD200may be configured to increase amplitude of informed and control pulses at a faster rate than when decreasing amplitude. The rate of change in amplitude of the pulses may be relatively instantaneous (e.g., a very fast rate) in other examples. For example, in response to ECAP voltage amplitude1008exceeding threshold ECAP amplitude1006, IMD200may immediately drop the amplitude of one or both of control pulse current amplitude1002or informed pulse current amplitude1004to a predetermined or calculated value. Then, in response to ECAP voltage amplitude1008dropping back below threshold ECAP amplitude1006, IMD200may enter increment mode as described above. When control pulse current amplitude1002and informed pulse current amplitude1004return to current amplitude I2 and current amplitude I1, respectively, IMD200may deactivate the increment mode and deliver stimulation pulses at constant current amplitudes. By decreasing stimulation in response to ECAP amplitudes exceeding a threshold and subsequently increasing stimulation in response to ECAP amplitudes falling below the threshold, IMD200may prevent patient105from experiencing transient overstimulation or decrease a severity of transient overstimulation experienced by patient105, whether the decrease is in terms of the length of the experience, the relative intensity, or both. FIG.10is described in the situation in which IMD200delivers both control pulses and informed pulses. However, IMD200may apply the technique ofFIG.10to the situation in which only control pulses are delivered to provide therapy to the patient. In this manner, IMD200would similarly enter a decrement mode or increment mode for control pulse current amplitude1002based on the detected ECAP voltage amplitude1008without adjusting the amplitude or other parameter of any other type of stimulation pulse. 
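As a brief, non-limiting illustration of the asymmetric rates described forFIG.10, the following Python sketch decreases the pulse amplitude quickly while the sensed ECAP amplitude is at or above the threshold and restores it more slowly toward the predetermined amplitude once the ECAP amplitude falls back below the threshold. The per-pulse step sizes are assumptions made for the sketch; the 15 μV default merely echoes the example threshold value given above.

```python
# Illustrative asymmetric amplitude adjustment; step sizes are assumptions.
def next_amplitude(current_ma: float, predetermined_ma: float,
                   ecap_uv: float, threshold_uv: float = 15.0,
                   decrement_ma_per_pulse: float = 0.4,
                   increment_ma_per_pulse: float = 0.1) -> float:
    if ecap_uv >= threshold_uv:
        # Decrement mode: back off quickly to limit transient overstimulation.
        return max(current_ma - decrement_ma_per_pulse, 0.0)
    # Increment mode / steady state: return gradually toward the programmed amplitude.
    return min(current_ma + increment_ma_per_pulse, predetermined_ma)


# Example: an 8 mA informed pulse amplitude reacting to a transient 20 uV ECAP -> 7.6 mA.
amp = next_amplitude(current_ma=8.0, predetermined_ma=8.0, ecap_uv=20.0)
```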
FIG.11is a flow diagram illustrating an example operation for controlling storage of ECAP information.FIG.11is described with respect to IMD200, processing circuitry210and long-term memory220ofFIG.2. However, the techniques ofFIG.11may be performed by different components of IMD200, IMD110, external programmer150, or by additional or alternative medical devices. As shown in the example ofFIG.11, processing circuitry210receives ECAP information (1100). Processing circuitry210may receive ECAP information from sensing circuitry206. In some examples, processing circuitry210may generate some or all of the ECAP information from ECAP signals received from sensing circuitry206. Typically, processing circuitry210may store the ECAP information in temporary memory218. If processing circuitry210does not receive a trigger signal (“NO” branch of block1102), processing circuitry210may continue to receive ECAP information and continue normal function such as adjusting stimulation parameters based on an ECAP characteristic value. If processing circuitry210does receive a trigger signal (“YES” branch of block1102), processing circuitry210stores at least a portion of the ECAP information in a memory (1104). For example, processing circuitry210may move at least a portion of ECAP information from temporary memory218to long-term memory220. In some examples, processing circuitry210may store other information in addition to the ECAP information. For example, processing circuitry210may also store acceleration data (e.g., posture state and/or activity information) with the ECAP information. Processing circuitry210may also store time stamps or other data with the ECAP information and any other information in order to correlate the information with events that occurred at the same time, for example. FIG.12is a flow diagram illustrating an example operation for sensing ECAP signals and storing ECAP information.FIG.12is described with respect to IMD200, processing circuitry210and long-term memory220ofFIG.2. However, the techniques ofFIG.12may be performed by different components of IMD200, IMD110, external programmer150, or by additional or alternative medical devices. As shown in the example ofFIG.12, processing circuitry210controls stimulation generation circuitry202to deliver a stimulation pulse (1200). The stimulation pulse may or may not be configured to contribute to therapy, but an ECAP signal may be detected as a result of the stimulation pulse. Sensing circuitry206then senses the resulting ECAP signal (1202). Processing circuitry210then receives ECAP information from sensing circuitry206(1204). The ECAP information may be a digitized waveform or include already determined ECAP characteristic values. Processing circuitry210then stores the received ECAP information in temporary memory218(1206). If processing circuitry210does not receive a trigger signal (“NO” branch of block1208), processing circuitry210may continue to deliver stimulation pulses (1200) and receive ECAP information and continue normal function such as adjusting stimulation parameters based on an ECAP characteristic value. If processing circuitry210does receive a trigger signal (“YES” branch of block1208), processing circuitry210selects a portion of the ECAP information from temporary memory218(1210). For example, processing circuitry210may select ECAP information that represents ECAP signals sensed over a predetermined period of time (e.g., seconds, minutes, or longer) or a predetermined number of ECAP signals. 
Processing circuitry210then stores the selected portion of the ECAP information in long-term memory220(1212). For example, processing circuitry210may move at least a portion of ECAP information from temporary memory218to long-term memory220. In some examples, processing circuitry210may continue to select ECAP information received for a predetermined period of time after receiving the trigger signal and store that new ECAP information in long-term memory220. This ECAP information before and after the trigger signal may be flagged as associated with a single event. Processing circuitry210may then continue to deliver another stimulation pulse (1200). In some examples, storing the selected portion of the ECAP information in long-term memory enables the system to determine and/or display data temporally close to the trigger signal that may represent conditions of the patient before and/or after the trigger signal. In some examples, processing circuitry210may store other information in addition to the ECAP information in the temporary memory and the long-term memory as requested. For example, processing circuitry210may also store acceleration data (e.g., posture state and/or activity information) with the ECAP information. Processing circuitry210may also store time stamps or other data with the ECAP information and any other information in order to correlate the information with events that occurred at the same time, for example. FIG.13is a flow diagram illustrating an example operation for adjusting a rate of sensing ECAP signals.FIG.13is described with respect to IMD200, processing circuitry210and long-term memory220ofFIG.2. However, the techniques ofFIG.13may be performed by different components of IMD200, IMD110, external programmer150, or by additional or alternative medical devices. As shown in the example ofFIG.13, processing circuitry210controls stimulation generation circuitry202to deliver a stimulation pulse (1300). The stimulation pulse may or may not be configured to contribute to therapy, but an ECAP signal may be detected as a result of the stimulation pulse. Sensing circuitry206then senses the resulting ECAP signal (1302). Processing circuitry210then receives ECAP information from sensing circuitry206(1304). The ECAP information may be a digitized waveform or include already determined ECAP characteristic values. If processing circuitry210does not receive a trigger signal (“NO” branch of block1306), processing circuitry210may continue to deliver stimulation pulses (1300) and receive ECAP information and continue normal function such as adjusting stimulation parameters based on an ECAP characteristic value. If processing circuitry210does receive a trigger signal (“YES” branch of block1306), processing circuitry210stores the ECAP information in a memory, such as long-term memory220(1308). In addition to storing the ECAP information, processing circuitry210increases the rate of sensing ECAP signals (1310). Increasing the rate of sensing may include increasing the rate at which processing circuitry210controls stimulation generation circuitry202to deliver stimulation pulses and the rate at which sensing circuitry captures ECAP signals elicited from each of the delivered stimulation pulses. In this manner, processing circuitry210can increase the fidelity of ECAP information by increasing the frequency at which ECAP signals are captured. It is noted that some or all of the techniques described inFIGS.11,12, and13may be used together. 
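The combined behavior ofFIGS.11,12, and13may be illustrated by the following non-limiting Python sketch, in which ECAP information accumulates in a bounded temporary buffer and a trigger signal causes a selected window of that information to be copied to long-term storage and the sensing rate to be increased. Buffer sizes, rates, and names are assumptions for illustration only.

```python
# Illustrative trigger-controlled ECAP storage; capacities and rates are assumptions.
from collections import deque
import time


class EcapRecorder:
    def __init__(self, temp_capacity: int = 256,
                 base_rate_hz: float = 1.0, boosted_rate_hz: float = 10.0):
        self.temporary_memory = deque(maxlen=temp_capacity)  # oldest entries roll off
        self.long_term_memory = []
        self.sense_rate_hz = base_rate_hz
        self.boosted_rate_hz = boosted_rate_hz

    def record(self, ecap_info: dict) -> None:
        ecap_info["timestamp"] = time.time()   # allows later correlation with events
        self.temporary_memory.append(ecap_info)

    def on_trigger(self, label: str, keep_last_n: int = 50) -> None:
        # Select a portion of the buffered information (cf. block 1210) and store it
        # in long-term memory (cf. block 1212), tagged with a marker for the trigger.
        selected = list(self.temporary_memory)[-keep_last_n:]
        self.long_term_memory.append({"trigger": label, "ecap_info": selected})
        self.sense_rate_hz = self.boosted_rate_hz  # cf. FIG. 13: increase sensing rate
```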
For example, processing circuitry210may store ECAP information in long-term memory and increase the rate of sensing ECAP signals to achieve higher fidelity ECAP information for later analysis. In some examples, processing circuitry210may store other information in addition to the ECAP information in the temporary memory and the long-term memory as requested. For example, processing circuitry210may also store acceleration data (e.g., posture state and/or activity information) with the ECAP information. Processing circuitry210may also store time stamps or other data with the ECAP information and any other information in order to correlate the information with events that occurred at the same time, for example. This application generally describes the storing of ECAP information in long-term storage in response to receiving a trigger signal. However, additional or alternative information may be stored by the system in response to receiving the trigger signal. For example, the system may store local field potential (LFP) information in addition to, or instead of, the ECAP information described herein. The system may store LFP information representative of LFP signals sensed by one or more electrode combinations. The LFP signals may be sensed by one or more electrode combinations located near the spinal cord and/or in the brain. The system may store the LFP information and the ECAP information in a temporary memory and store both LFP information and ECAP information in the long-term memory in response to the trigger signal. In some examples, the system may sample the LFP signals at a higher, lower, or the same rate as the ECAP signal. The system may store the LFP information in the frequency domain. The system may be configured to control a display to present the ECAP information and LFP information (and/or other types of sensed physiological information) together overlapping in time to illustrate correlations between the ECAP information and LFP information. In other examples, the system may correlate the ECAP information with the LFP information to identify and/or confirm that one or more patient events occurred. The following examples are described herein. Example 1: A system includes a memory; and processing circuitry configured to: receive evoked compound action potential (ECAP) information, wherein the ECAP information comprises information from a plurality of evoked compound action potential (ECAP) signals; receive a trigger signal requesting long-term storage of at least a portion of the ECAP information in the memory; and responsive to receiving the trigger signal, store the at least portion of the ECAP information in the memory. Example 2: The system of example 1, further includes stimulation generation circuitry configured to deliver electrical stimulation to a patient, wherein the electrical stimulation therapy comprises a plurality of stimulation pulses; and sensing circuitry configured to sense the plurality of ECAP signals, wherein the sensing circuitry is configured to sense each ECAP signal of the plurality of ECAP signals elicited by a respective stimulation pulse of the plurality of stimulation pulses, wherein the processing circuitry is configured to receive the ECAP signals from the sensing circuitry as the ECAP information. 
Example 3: The system of example 2, wherein the processing circuitry is configured to: responsive to receiving the trigger signal, control the sensing circuitry to increase a rate at which the sensing circuitry senses subsequent ECAP signals; and store subsequent ECAP information comprising the subsequent ECAP signals in the memory. Example 4: The system of any of examples 1 through 3, wherein the ECAP information comprises at least one characteristic value representing respective ECAP signals of the plurality of ECAP signals, wherein the characteristic value comprises at least one of an amplitude value, a slope value, or an area under peak value. Example 5: The system of any of examples 1 through 4, wherein the ECAP information comprises a plurality of waveforms representing respective ECAP signals of the plurality of ECAP signals. Example 6: The system of any of examples 1 through 5, wherein the memory comprises a long-term memory, and wherein the processing circuitry is configured to store the received ECAP information in a temporary memory, and wherein the processing circuitry is configured to delete ECAP information stored in the temporary memory in response to a predetermined period of time elapsing. Example 7: The system of any of examples 1 through 6, further comprising communication circuitry configured to transmit the stored ECAP information to an external device. Example 8: The system of example 7, further comprising an external device comprising a display; and an implantable medical device comprising the memory, the processing circuitry, and the communication circuitry configured to transmit the stored ECAP information to the external device, wherein the external device is configured to present, via the display, one or more representations of the stored ECAP information. Example 9: The system of any of examples 1 through 8, wherein the trigger signal comprises a request from an external device to store the ECAP information. Example 10: The system of any of examples 1 through 9, wherein the trigger signal comprises a housing tap from a user, and wherein the processing circuitry is configured to receive the housing tap by: receiving accelerometer data from an accelerometer within a housing of an implantable medical device; and determining that the accelerometer data indicates a user tapped the implantable medical device. Example 11: The system of any of examples 1 through 10, wherein the trigger signal comprises an indication that a characteristic of one ECAP signal of the plurality of ECAP signals exceeds a threshold. Example 12: The system of any of examples 1 through 11, wherein the trigger signal comprises an indication that a user changed one or more stimulation parameter values defining electrical stimulation deliverable to a patient. Example 13: The system of any of examples 1 through 12, wherein the processing circuitry is configured to, responsive to receiving the trigger signal, select the at least portion of the ECAP information representative of one or more ECAP signals of the plurality of ECAP signals sensed between an initial time and a final time, the initial time occurring at a first period of time prior to receiving the trigger signal and the final time occurring at a second period of time after receiving the trigger signal. 
Example 14: The system of any of examples 1 through 13, wherein the processing circuitry adds a marker representative of the trigger signal to the at least portion of the ECAP information stored in the memory, wherein the marker indicates a time of the trigger signal with respect to sensed ECAP signals of the ECAP information. Example 15: The system of any of examples 1 through 14, wherein the processing circuitry is configured to, responsive to receiving the trigger signal, store acceleration data representing at least one of a posture state or an activity of a patient corresponding to a same time the ECAP signals were generated. Example 16: The system of any of examples 1 through 15, further comprising an implantable medical device comprising the memory and the processing circuitry. Example 17: A method includes receiving, by processing circuitry, evoked compound action potential (ECAP) information, wherein the ECAP information comprises information from a plurality of evoked compound action potential (ECAP) signals; receiving, by the processing circuitry, a trigger signal requesting long-term storage of at least a portion of the ECAP information in a memory; and responsive to receiving the trigger signal, storing, by the processing circuitry, the at least portion of the ECAP information in the memory. Example 18: The method of example 17, further includes delivering, by stimulation generation circuitry, electrical stimulation to a patient, wherein the electrical stimulation therapy comprises a plurality of stimulation pulses; sensing, by sensing circuitry, the plurality of ECAP signals by sensing each ECAP signal of the plurality of ECAP signals elicited by a respective stimulation pulse of the plurality of stimulation pulses, and receiving, by the processing circuitry, the ECAP signals from the sensing circuitry as the ECAP information. Example 19: The method of example 18, further includes responsive to receiving the trigger signal, controlling the sensing circuitry to increase a rate at which the sensing circuitry senses subsequent ECAP signals; and storing subsequent ECAP information comprising the subsequent ECAP signals in the memory. Example 20: The method of any of examples 17 through 19, wherein the ECAP information comprises at least one characteristic value representing respective ECAP signals of the plurality of ECAP signals, wherein the characteristic value comprises at least one of an amplitude value, a slope value, or an area under peak value. Example 21: The method of any of examples 17 through 20, wherein the ECAP information comprises a plurality of waveforms representing respective ECAP signals of the plurality of ECAP signals. Example 22: The method of any of examples 17 through 21, wherein the memory comprises a long-term memory, and wherein the method further comprises: storing the received ECAP information in a temporary memory; and deleting ECAP information stored in the temporary memory in response to a predetermined period of time elapsing. Example 23: The method of any of examples 17 through 22, further comprising transmitting, by communication circuitry, the stored ECAP information to an external device. Example 24: The method of example 23, further comprising presenting, via a display of an external device, one or more representations of the stored ECAP information. Example 25: The method of any of examples 17 through 24, wherein the trigger signal comprises a request from an external device to store the ECAP information. 
Example 26: The method of any of examples 17 through 25, wherein the trigger signal comprises a housing tap from a user, and wherein receiving the housing tap comprises: receiving accelerometer data from an accelerometer within a housing of an implantable medical device; and determining that the accelerometer data indicates a user tapped the implantable medical device. Example 27: The method of any of examples 17 through 26, wherein the trigger signal comprises an indication that a characteristic of one ECAP signal of the plurality of ECAP signals exceeds a threshold. Example 28: The method of any of examples 17 through 27, wherein the trigger signal comprises an indication that a user changed one or more stimulation parameter values defining electrical stimulation deliverable to a patient. Example 29: The method of any of examples 17 through 28, further comprising, responsive to receiving the trigger signal, selecting the at least portion of the ECAP information representative of one or more ECAP signals of the plurality of ECAP signals sensed between an initial time and a final time, the initial time occurring at a first period of time prior to receiving the trigger signal and the final time occurring at a second period of time after receiving the trigger signal. Example 30: The method of any of examples 17 through 29, further comprising adding a marker representative of the trigger signal to the at least portion of the ECAP information stored in the memory, wherein the marker indicates a time of the trigger signal with respect to sensed ECAP signals of the ECAP information. Example 31: A computer-readable medium including instructions that, when executed by a processor, cause the processor to receive evoked compound action potential (ECAP) information, wherein the ECAP information comprises information from a plurality of evoked compound action potential (ECAP) signals; receive a trigger signal requesting long-term storage of at least a portion of the ECAP information in a memory; and responsive to receiving the trigger signal, store the at least portion of the ECAP information in the memory. The techniques described in this disclosure may be implemented, at least in part, in hardware, software, firmware, or any combination thereof. For example, various aspects of the techniques may be implemented within one or more microprocessors, DSPs, ASICs, FPGAs, or any other equivalent integrated or discrete logic circuitry, as well as any combinations of such components, embodied in external devices, such as physician or patient programmers, stimulators, or other devices. The terms “processor” and “processing circuitry” may generally refer to any of the foregoing logic circuitry, alone or in combination with other logic circuitry, or any other equivalent circuitry, and alone or in combination with other digital or analog circuitry. For aspects implemented in software, at least some of the functionality ascribed to the systems and devices described in this disclosure may be embodied as instructions on a computer-readable storage medium such as RAM, DRAM, SRAM, FRAM, magnetic discs, optical discs, flash memories, or forms of EPROM or EEPROM. The instructions may be executed to support one or more aspects of the functionality described in this disclosure. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules.
Depiction of different features as modules or units is intended to highlight different functional aspects and does not necessarily imply that such modules or units must be realized by separate hardware or software components. Rather, functionality associated with one or more modules or units may be performed by separate hardware or software components or integrated within common or separate hardware or software components. Also, the techniques could be fully implemented in one or more circuits or logic elements. The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including an IMD, an external programmer, a combination of an IMD and external programmer, an integrated circuit (IC) or a set of ICs, and/or discrete electrical circuitry, residing in an IMD and/or external programmer.
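As a rough, non-authoritative sketch of the trigger-gated storage behavior recited in the examples above (a temporary rolling buffer of sensed ECAP information, selection of a pre-/post-trigger window, and a marker recording the trigger time), the following Python fragment may be illustrative; the class, field names, and timing values are assumptions for illustration only and are not the claimed device logic.

```python
from collections import deque

class EcapRecorder:
    """Illustrative sketch only: a rolling temporary buffer of ECAP
    characteristic values, with trigger-gated copying of a pre-/post-trigger
    window into long-term storage, plus a marker noting the trigger time."""

    def __init__(self, pre_s=5.0, post_s=5.0, temp_capacity=1024):
        self.pre_s = pre_s                       # seconds of data kept before a trigger
        self.post_s = post_s                     # seconds of data kept after a trigger
        self.temp = deque(maxlen=temp_capacity)  # temporary memory; oldest samples fall off
        self.long_term = []                      # long-term memory of stored snapshots
        self.pending = []                        # trigger times awaiting their post-window

    def on_ecap(self, t, value):
        """Record one sensed ECAP characteristic value (e.g., an amplitude)."""
        self.temp.append((t, value))
        # Finalize any trigger whose post-trigger window has fully elapsed.
        for trig in [p for p in self.pending if t >= p + self.post_s]:
            window = [(ts, v) for (ts, v) in self.temp
                      if trig - self.pre_s <= ts <= trig + self.post_s]
            self.long_term.append({"marker_time": trig, "samples": window})
            self.pending.remove(trig)

    def on_trigger(self, t):
        """Handle a trigger (e.g., a patient request or a threshold crossing)."""
        self.pending.append(t)

# Illustrative use: one ECAP sample per second with a trigger at t = 30 s.
rec = EcapRecorder()
for t in range(60):
    rec.on_ecap(float(t), value=0.1 * t)
    if t == 30:
        rec.on_trigger(30.0)
print(len(rec.long_term), rec.long_term[0]["marker_time"])  # 1 30.0
```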
152,986
11857794
DETAILED DESCRIPTION In the following detailed description, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that the embodiments may be combined, or that other embodiments may be utilized and that structural, logical and electrical changes may be made without departing from the spirit and scope of the present invention. References to “an”, “one”, or “various” embodiments in this disclosure are not necessarily to the same embodiment, and such references contemplate more than one embodiment. The following detailed description provides examples, and the scope of the present invention is defined by the appended claims and their legal equivalents. Clinically, electroencephalography (EEG) and magnetoencephalography (MEG) studies are used to evaluate separate temporal and spatial components of the cerebral pain response. In clinical contexts, EEG refers to the recording of the brain's spontaneous electrical activity over a period of time and at distinctive scalp locations. EEG measures voltage fluctuations resulting from ionic current within the neurons of the brain, and can be recorded from multiple electrodes placed on the scalp. Certain EEG patterns may be associated with patient vulnerability to experience chronic pain in persons with spinal cord injury. Chronic neuropathic pain may also be associated with changes in EEG characteristics, including increased power density and peak frequency in the low frequency ranges. The ionic currents occurring naturally in the brain that produce the EEG signal also generate magnetic fields, which can be measured as MEG. MEG is a functional neuroimaging technique for mapping brain activity by recording magnetic fields. It provides timing as well as spatial information about brain activity. An evoked potential is an electrical potential recorded from the nervous system, such as the brain, following presentation of a stimulus, and may be distinct from spontaneous neural potentials. The stimulus may be delivered through sight, hearing, touch, or an electrical, mechanical, or pharmacological stimulus. The evoked electrical potentials travel along nerves to the brain, and can be recorded with electrodes attached to the scalp and skin over various peripheral sensory nerves. Close monitoring of patient brain electromagnetic activity may provide an objective assessment of pain, and may be used to improve pain therapy efficacy. Disclosed herein are systems, devices, and methods for assessing pain of a subject, and optionally programming pain therapy based on the pain assessment. In various embodiments, the present system may include sensors configured to sense physiological signals indicative of brain electromagnetic activity, such as an EEG signal, a MEG signal, or a brain-evoked potential. A pain analyzer circuit may generate a pain score using signal metrics extracted from the brain electromagnetic activity signals. The system may include a neurostimulator that can deliver a pain therapy according to the pain score. 
The present system may be implemented using a combination of hardware and software designed to provide a closed-loop pain management regimen to increase therapeutic efficacy, increase patient satisfaction for neurostimulation therapies, reduce side effects, and/or increase device longevity. The present system may be applied in any neurostimulation (neuromodulation) therapies, including but not limited to SCS, DBS, PNS, FES, motor cortex stimulation, sacral nerve stimulation, radiofrequency ablation, and vagus nerve stimulation (VNS) therapies. In various examples, instead of providing closed-loop pain therapies, the systems, devices, and methods described herein may be used to monitor the patient and assess pain that either occurs spontaneously or is induced by nerve block procedures or radiofrequency ablation therapies, or side effects like paresthesia caused by the stimulation therapy. The patient monitoring may include generating recommendations to the patient or a clinician regarding pain treatment. FIG.1illustrates, by way of example and not limitation, a neuromodulation system100for managing pain of a subject such as a patient with chronic pain, and portions of an environment in which the neuromodulation system100may operate. The neuromodulation system100may include an implantable system110that may be associated with a body199of the subject, and an external system130in communication with the implantable system110via a communication link120. The implantable system110may include an ambulatory medical device (AMD), such as an implantable neuromodulator device (IND)112, a lead system114, and one or more electrodes116. The IND112may be configured for subcutaneous implant in a patient's chest, abdomen, upper gluteal surface, or other parts of the body199. The IND112may be configured as a monitoring and diagnostic device. The IND112may include a hermetically sealed can that houses sensing circuitry to sense physiological signals from the patient via sensing electrodes or ambulatory sensors associated with the patient and in communication with the IND112, such as the one or more electrodes116. In some examples, the sensing electrodes or the ambulatory sensors may be included within the IND112. The physiological signals, when measured during a pain episode, may be correlative to severity of the pain. In an example, the one or more electrodes116may be surgically positioned on at least a portion of the brain to sense brain activity therein. The brain activity may include brain electromagnetic activity such as represented as an EEG, a MEG, or brain-evoked potentials. The IND112may characterize patient pain based on the sensed physiological signals, such as to determine an onset, intensity, severity, duration, or patterns of the pain experienced by the subject. The IND112may generate an alert to indicate the pain episode or pain exacerbation, or efficacy of a pain therapy, and present the alert to a clinician. The IND112may alternatively be configured as a therapeutic device for treating or alleviating the pain. In addition to the pain monitoring circuitry, the IND112may further include a therapy unit that can generate and deliver energy or modulation agents to a target tissue. The energy may include electrical, magnetic, thermal, or other types of energy. In some examples, the IND112may include a drug delivery system such as a drug infusion pump that can deliver pain medication to the patient, such as morphine sulfate or ziconotide, among others. 
The IND112may include electrostimulation circuitry that generates electrostimulation pulses to stimulate a neural target via the electrodes116operably connected to the IND112. In an example, the electrodes116may be positioned on or near a spinal cord, and the electrostimulation circuitry may be configured to deliver SCS to treat pain. In another example, the electrodes116may be surgically placed at other neural targets such as a brain or a peripheral neural tissue, and the electrostimulation circuitry may be configured to deliver brain or peripheral stimulations. Examples of electrostimulation may include deep brain stimulation (DBS), trigeminal nerve stimulation, occipital nerve stimulation, vagus nerve stimulation (VNS), sacral nerve stimulation, sphenopalatine ganglion stimulation, sympathetic nerve modulation, adrenal gland modulation, baroreceptor stimulation, transcranial magnetic stimulation, spinal cord stimulation (SCS), dorsal root ganglia (DRG) stimulation, motor cortex stimulation (MCS), transcranial direct current stimulation (tDCS), transcutaneous spinal direct current stimulation (tsDCS), pudendal nerve stimulation, multifidus muscle stimulation, transcutaneous electrical nerve stimulation (TENS), or tibial nerve stimulation, among other peripheral nerve or organ stimulations. The IND112may additionally or alternatively provide therapies such as radiofrequency ablation (RFA), pulsed radiofrequency ablation, ultrasound therapy, high-intensity focused ultrasound (HIFU), optical stimulation, optogenetic therapy, magnetic stimulation, other peripheral tissue stimulation therapies, other peripheral tissue denervation therapies, or nerve blocks or injections. In various examples, the electrodes116may be distributed in one or more leads of the lead system114electrically coupled to the IND112. In an example, the lead system114may include a directional lead that includes at least some segmented electrodes circumferentially disposed about the directional lead. Two or more segmented electrodes may be distributed along a circumference of the lead. The actual number and shape of leads and electrodes may vary according to the intended application. Detailed descriptions of the construction and method of manufacturing percutaneous stimulation leads are disclosed in U.S. Pat. No. 8,019,439, entitled “Lead Assembly and Method of Making Same,” and U.S. Pat. No. 7,650,184, entitled “Cylindrical Multi-Contact Electrode Lead for Neural Stimulation and Method of Making Same,” the disclosures of which are incorporated herein by reference. The electrodes116may provide an electrically conductive contact providing for an electrical interface between the IND112and tissue of the patient. The neurostimulation pulses are each delivered from the IND112through a set of electrodes selected from the electrodes116. In various examples, the neurostimulation pulses may include one or more individually defined pulses, and the set of electrodes may be individually definable by the user for each of the individually defined pulses. Although the discussion herein with regard to the neuromodulation system100focuses on an implantable device such as the IND112, this is meant only by way of example and not limitation. 
It is within the contemplation of the present inventors and within the scope of this document that the systems, devices, and methods discussed herein may also be used for pain management via subcutaneous medical devices, wearable medical devices (e.g., wrist watches, patches, garment- or shoe-mounted devices, headgear, eye glasses, or earplugs), or other external medical devices, or a combination of implantable, wearable, or other external devices. The therapy, such as electrostimulation or medical therapies, may be used to treat various neurological disorders other than pain, which by way of example and not limitation may include epilepsy, migraine, Tourette's syndrome, obsessive compulsive disorder, tremor, Parkinson's disease, or dystonia, among other movement and affective disorders. The external system130may communicate with the IND112via a communication link120. The external system130may include a dedicated hardware/software system such as a programmer, a remote server-based patient management system, or alternatively a system defined predominantly by software running on a standard personal computer. In some examples, at least a portion of the external system130may be ambulatory, such as configured to be worn or carried by a subject. The external system130may be configured to control the operation of the IND112, such as to program the IND112for delivering neuromodulation therapy. The external system130may additionally receive, via the communication link120, information acquired by the IND112, such as one or more physiological signals. In an example, the external system130may determine a pain score based on the physiological signals received from the IND112, and program the IND112to deliver pain therapy in a closed-loop fashion. Examples of the external system and neurostimulation based on pain score are discussed below, such as with reference toFIGS.2-3. The communication link120may include one or more communication channels and intermediate devices between the external system and the IND, such as a wired link, a telecommunication link such as an internet connection, or a wireless link such as an inductive telemetry link or a radio-frequency telemetry link. The communication link120may provide for data transmission between the IND112and the external system130. The transmitted data may include, for example, real-time physiological signals acquired by and stored in the IND112, therapy history data, data indicating device operational status of the IND112, one or more programming instructions to the IND112(which may include configurations for sensing physiologic signals, stimulation commands, and stimulation parameters), or device self-diagnostic tests, among others. In some examples, the IND112may be coupled to the external system130further via an intermediate control device, such as a handheld external remote control device, to remotely instruct the IND112to generate electrical stimulation pulses in accordance with selected stimulation parameters produced by the external system130, or to store the collected data into the external system130. Portions of the IND112or the external system130may be implemented using hardware, software, firmware, or combinations thereof. 
Portions of the IND112or the external system130may be implemented using an application-specific circuit that may be constructed or configured to perform one or more particular functions, or may be implemented using a general-purpose circuit that may be programmed or otherwise configured to perform one or more particular functions. Such a general-purpose circuit may include a microprocessor or a portion thereof, a microcontroller or a portion thereof, or a programmable logic circuit, or a portion thereof. For example, a “comparator” may include, among other things, an electronic circuit comparator that may be constructed to perform the specific function of a comparison between two signals or the comparator may be implemented as a portion of a general-purpose circuit that may be driven by a code instructing a portion of the general-purpose circuit to perform a comparison between the two signals. FIG.2illustrates, by way of example and not limitation, a block diagram of a pain management system200, which may be an embodiment of the neuromodulation system100. The pain management system200may assess pain of a subject using at least one physiological signal, and program a pain therapy based on the pain assessment. As illustrated inFIG.2, the pain management system200may include a sensor circuit210, a pain analyzer circuit220, a memory230, a user interface240, and a therapy unit250. The sensor circuit210may be coupled to one or more physiological sensors to sense from the patient at least one physiological signal. The sensor circuit210may include sense amplifier circuit that may pre-process the sensed physiological signals, including, for example, amplification, digitization, filtering, or other signal conditioning operations. Various physiological signals, such as cardiac, pulmonary, neural, or biochemical signals may demonstrate characteristic signal properties in response to an onset, intensity, severity, duration, or patterns of pain. In an example, the sensor circuit210may be coupled to implantable or wearable sensors to sense cardiac signals such as electrocardiograph (ECG), intracardiac electrogram, gyrocardiography, magnetocardiography, heart rate signal, heart rate variability signal, cardiovascular pressure signal, or heart sounds signal, among others. In another example, the sensor circuit210may sense pulmonary signals such as a respiratory signal, a thoracic impedance signal, or a respiratory sounds signal. In yet another example, the sensor circuit210may sense biochemical signals such as blood chemistry measurements or expression levels of one or more biomarkers, which may include, by way of example and not limitation, B-type natriuretic peptide (BNP) or N-terminal pro b-type natriuretic peptide (NT-proBNP), serum cytokine profiles, P2X4 receptor expression levels, gamma-aminobutyric acid (GABA) levels, TNFα and other inflammatory markers, cortisol, adenosine, Glial cell-derived neurotrophic factor (GDNF), Nav 1.3, Nav 1.7, or Tetrahydrobiopterin (BH4) levels, among other biomarkers. In an example, the sensor circuit210may sense at least one signal indicative of patient brain activity. The physiological sensor may be an ambulatory sensor, such as an implantable or wearable sensor associated with the patient, configured to sense brain electromagnetic activity. Alternatively, the physiological sensor may be a bedside monitor of brain electromagnetic activity. The signals sensed by the physiological sensors may include EEG, MEG, or a brain-evoked potential. 
Examples of sensors for sensing brain electromagnetic activities are discussed below, such as with reference toFIG.5. The pain analyzer circuit220may generate a pain score using at least the physiological signals received from the sensor circuit210. The pain analyzer circuit220may be implemented as a part of a microprocessor circuit, which may be a dedicated processor such as a digital signal processor, application specific integrated circuit (ASIC), microprocessor, or other type of processor for processing information including physical activity information. Alternatively, the microprocessor circuit may be a general purpose processor that may receive and execute a set of instructions of performing the functions, methods, or techniques described herein. The pain analyzer circuit220may include circuit sets comprising one or more other circuits or sub-circuits that may, alone or in combination, perform the functions, methods or techniques described herein. In an example, hardware of the circuit set may be immutably designed to carry out a specific operation (e.g., hardwired). In an example, the hardware of the circuit set may include variably connected physical components (e.g., execution units, transistors, simple circuits, etc.) including a computer readable medium physically modified (e.g., magnetically, electrically, moveable placement of invariant massed particles, etc.) to encode instructions of the specific operation. In connecting the physical components, the underlying electrical properties of a hardware constituent are changed, for example, from an insulator to a conductor or vice versa. The instructions enable embedded hardware (e.g., the execution units or a loading mechanism) to create members of the circuit set in hardware via the variable connections to carry out portions of the specific operation when in operation. Accordingly, the computer readable medium is communicatively coupled to the other components of the circuit set member when the device is operating. In an example, any of the physical components may be used in more than one member of more than one circuit set. For example, under operation, execution units may be used in a first circuit of a first circuit set at one point in time and reused by a second circuit in the first circuit set, or by a third circuit in a second circuit set at a different time. As illustrated inFIG.2, the pain analyzer circuit220may include a signal metrics generator221and a pain score generator225. The signal metrics generator221may generate one or more brain activity signal metrics222from the sensed at least one physiological signal. The signal metrics may include temporal or spatial parameters, statistical parameters, morphological parameters, and spectral parameters extracted from the signal transformed into the frequency domain or other transformed domain. In an example where the sensed physiological signal includes one or more EEG, MEG, or a brain-evoked potential, the signal metrics may be indicative of strength or a pattern of brain electromagnetic activity associated with pain. Examples of the signal metrics for pain quantification are discussed below, such as with reference toFIG.5. The pain score generator225may generate a pain score using the measurements of the signal metrics generated by the signal metrics generator221. The pain score can be represented as a numerical or categorical value that quantifies the patient overall pain symptom. 
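As a minimal illustration of the kind of weighted fusion described in the following paragraph (threshold comparisons of individual signal metrics combined with reliability weights into a single numerical score), consider the sketch below; the metric names, weights, and thresholds are fabricated assumptions rather than values taken from this disclosure.

```python
def composite_pain_score(metrics, weights, thresholds, max_score=10):
    """Illustrative weighted-threshold fusion (not the device's actual algorithm).

    metrics    : dict of signal-metric name -> measured value
    weights    : dict of signal-metric name -> weight factor (higher = more reliable)
    thresholds : dict of signal-metric name -> threshold above which the metric
                 is taken to indicate pain
    Returns a numerical pain score scaled to 0..max_score.
    """
    # Metric-specific scores: 1 if the metric exceeds its threshold, else 0.
    metric_scores = {m: 1.0 if metrics[m] > thresholds[m] else 0.0 for m in metrics}
    # Linear fusion of the metric-specific scores, weighted by reliability.
    total_w = sum(weights[m] for m in metrics)
    fused = sum(weights[m] * metric_scores[m] for m in metrics) / total_w
    return round(fused * max_score, 1)

# Hypothetical EEG-derived metrics (names and numbers invented for illustration).
metrics = {"alpha_power": 0.8, "theta_peak_freq": 6.5, "evoked_amplitude": 12.0}
weights = {"alpha_power": 0.5, "theta_peak_freq": 0.3, "evoked_amplitude": 0.2}
thresholds = {"alpha_power": 0.6, "theta_peak_freq": 7.0, "evoked_amplitude": 10.0}
print(composite_pain_score(metrics, weights, thresholds))  # 7.0
```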
In an example, a composite signal metric may be generated using a combination of a plurality of the signal metrics respectively weighted by weight factors. The combination can be linear or nonlinear. The pain score generator225may compare the composite signal metric to one or more threshold values or range values, and assign a corresponding pain score (such as numerical values from 0 to 10) based on the comparison. In another example, the pain score generator225may compare the signal metrics to their respective threshold values or range values, assign corresponding signal metric-specific pain scores based on the comparison, and compute a composite pain score using a linear or nonlinear fusion of the signal metric-specific pain scores weighted by their respective weight factors. In an example, the threshold can be inversely proportional to the signal metric's sensitivity to pain. A signal metric that is more sensitive to pain may have a corresponding lower threshold and a larger metric-specific pain score, and thus plays a more dominant role in the composite pain score than another signal metric that is less sensitive to pain. Examples of the fusion algorithm may include weighted averages, voting, decision trees, or neural networks, among others. The pain score generated by the pain score generator225may be output to a system user or a process. In various examples, in addition to the physiological signals such as the brain electromagnetic activity signals, the sensor circuit210may sense one or more functional signals from the patient. Examples of the functional signals may include, but are not limited to, patient posture, gait, balance, or physical activity signals, among others. The sensor circuit210may sense the functional signals via one or more implantable or wearable motion sensors, including an accelerometer, a gyroscope (which may be a one-, two-, or three-axis gyroscope), a magnetometer (e.g., a compass), an inclinometer, a goniometer, an electromagnetic tracking system (ETS), or a global positioning system (GPS) sensor, among others. A detailed description of functional signals for use in pain characterization is disclosed in commonly assigned U.S. Provisional Patent Application Ser. No. 62/445,075, entitled “PAIN MANAGEMENT BASED ON FUNCTIONAL MEASUREMENTS”, the disclosure of which is incorporated herein by reference. The signal metrics generator221may generate functional signal metrics from the functional signals, and the pain score generator225may determine the pain score using a linear or nonlinear combination of the brain activity signal metrics and the functional signal metrics. Commonly assigned U.S. Provisional Patent Application Ser. No. 62/445,053, entitled “PAIN MANAGEMENT BASED ON CARDIOVASCULAR PARAMETERS” describes cardiovascular parameters such as arterial pulsatile activity and electrocardiography for use in pain analysis, the disclosure of which is incorporated herein by reference in its entirety. Commonly assigned U.S. Provisional Patent Application Ser. No. 62/445,061, entitled “PAIN MANAGEMENT BASED ON BRAIN ACTIVITY MONITORING” describes information of brain activity for use in pain analysis, the disclosure of which is incorporated herein by reference in its entirety. Commonly assigned U.S. Provisional Patent Application Ser. No. 62/445,069, entitled “PAIN MANAGEMENT BASED ON RESPIRATION-MEDIATED HEART RATES” describes information of respiration-mediated heart rate for use in pain analysis, the disclosure of which is incorporated herein by reference in its entirety. 
Commonly assigned U.S. Provisional Patent Application Ser. No. 62/445,082, entitled “PAIN MANAGEMENT BASED ON EMOTIONAL EXPRESSION MEASUREMENTS” describes measurements of patient emotional expressions for use in pain analysis, the disclosure of which is incorporated herein by reference in its entirety. Commonly assigned U.S. Provisional Patent Application Ser. No. 62/445,092, entitled “PAIN MANAGEMENT BASED ON MUSCLE TENSION MEASUREMENTS” describes measurements of patient muscle tension including electromyography for use in pain analysis, the disclosure of which is incorporated herein by reference in its entirety. One or more of these additional signals or measurements may be used by the pain analyzer circuit220to generate a pain score. The memory230may be configured to store sensor signals or signal metrics such as generated by the sensor circuit210and the signal metrics generator221, and the pain scores such as generated by the pain score generator225. Data may be stored at the memory230continuously, periodically, or triggered by a user command or a specific event. In an example, as illustrated inFIG.2, the memory230may store weight factors, which may be used by the pain score generator225to generate the composite pain score. The weight factors may be provided by a system user, or alternatively be automatically determined or adjusted such as based on the corresponding signal metrics' reliability in representing an intensity of the pain. Examples of the automatic weight factor generation are discussed below, such as with reference toFIG.3. The user interface240may include an input circuit241and an output unit242. In an example, at least a portion of the user interface240may be implemented in the external system130. The input circuit241may enable a system user to program the parameters used for sensing the physiological signals, generating signal metrics, or generating the pain score. The input circuit241may be coupled to one or more input devices such as a keyboard, on-screen keyboard, mouse, trackball, touchpad, touch-screen, or other pointing or navigating devices. In some example, the input device may be incorporated in a mobile device such as a smart phone or other portable electronic device configured to execute a mobile application (“App”). The mobile App may enable a patient to provide pain description or quantified pain scales during the pain episodes. In an example, the input circuit241may enable a user to confirm, reject, or edit the programming of the therapy unit250, such as parameters for electrostimulation, as to be discussed in the following. The output unit242may include a display to present to a system user such as a clinician the pain score. The output unit242may also display information including the physiological signals, trends of the signal metric, or any intermediary results for pain score calculation such as the signal metric-specific pain scores. The information may be presented in a table, a chart, a diagram, or any other types of textual, tabular, or graphical presentation formats, for displaying to a system user. The presentation of the output information may include audio or other human-perceptible media format. In an example, the output unit242may generate alerts, alarms, emergency calls, or other forms of warnings to signal the system user about the pain score. The therapy circuit250may be configured to deliver a therapy to the patient based on the pain score generated by the pain score generator225. 
The therapy circuit250may include an electrostimulator configured to generate electrostimulation energy to treat pain. In an example, the electrostimulator may deliver spinal cord stimulation (SCS) via electrodes electrically coupled to the electrostimulator. The electrodes may be surgically placed at a region at or near a spinal cord tissue, which may include, by way of example and not limitation, dorsal column, dorsal horn, spinal nerve roots such as the dorsal nerve root, dorsal root entry zone, spinothalamic tract, and dorsal root ganglia. The SCS may be in the form of stimulation pulses that are characterized by pulse amplitude, pulse width, stimulation frequency, duration, on-off cycle, pulse shape or waveform, temporal pattern of the stimulation, among other stimulation parameters. Examples of the stimulation pattern may include burst stimulation with substantially identical inter-pulse intervals, or ramp stimulation with incremental inter-pulse intervals or with decremental inter-pulse intervals. In some examples, the frequency or the pulse width may change from pulse to pulse. The electrostimulator may additionally or alternatively deliver electrostimulation to other target tissues such as peripheral nerve tissues. In an example, the electrostimulator may deliver transcutaneous electrical nerve stimulation (TENS) via detachable electrodes that are affixed to the skin. The therapy circuit250may additionally or alternatively include a drug delivery system, such as an intrathecal drug delivery pump that may be surgically placed under the skin, which may be programmed to inject medication or biologics through a catheter to the area around the spinal cord. Other examples of a drug delivery system may include a computerized patient-controlled analgesia pump that may deliver the prescribed pain medication to the patient such as via an intravenous line. In some examples, the therapy provided by the therapy circuit250may be delivered according to the pain score received from the pain score generator225. 
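To picture the pulse patterns mentioned above, such as burst stimulation with substantially identical inter-pulse intervals versus ramp stimulation with incremental or decremental intervals, the short sketch below generates hypothetical pulse-onset times; the function and parameter names are illustrative and not part of any device API.

```python
def pulse_times(pattern, n_pulses, base_interval_ms, step_ms=0.0):
    """Illustrative pulse-onset times (ms) for simple stimulation patterns.

    pattern: "burst" -> constant inter-pulse interval;
             "ramp_up"/"ramp_down" -> interval grows/shrinks by step_ms per pulse.
    """
    times, t, interval = [], 0.0, base_interval_ms
    for _ in range(n_pulses):
        times.append(round(t, 3))
        t += interval
        if pattern == "ramp_up":
            interval += step_ms
        elif pattern == "ramp_down":
            interval = max(step_ms, interval - step_ms)  # keep the interval positive
    return times

print(pulse_times("burst", 5, 20.0))         # [0.0, 20.0, 40.0, 60.0, 80.0]
print(pulse_times("ramp_up", 5, 20.0, 5.0))  # [0.0, 20.0, 45.0, 75.0, 110.0]
```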
Examples of the sensors for sensing EEG signals are discussed below with reference toFIG.5. As discussed with reference toFIG.2, the pain analyzer circuit220includes the pain score generator225that determines a pain score using weight factors stored in the memory230and the signal metrics from the signal metrics generator221which may also be included in the pain analyzer circuit220. The implantable neuromodulator310may include a controller circuit312, coupled to the therapy unit250, that controls the generation and delivery of pain therapy, such as neurostimulation energy. The controller circuit312may control the generation of electrostimulation pulses according to specific stimulation parameters. The stimulation parameters may be provided by a system user. Alternatively, the stimulation parameters may be automatically determined based on the intensity, severity, duration, or pattern of pain, which may be subjectively described by the patient or automatically quantified based on the physiological signals sensed by the sensor circuit210. For example, when a patient-described or sensor-indicated quantification exceeds a respective threshold value or falls within a specific range indicating elevated pain, the electrostimulation energy may be increased to provide stronger pain relief. Increased electrostimulation energy may be achieved by programming a higher pulse intensity, a higher frequency, or a longer stimulation duration or “on” cycle, among others. Conversely, when a patient-described or sensor-indicated pain quantification falls below a respective threshold value or falls within a specific range indicating no pain or mild pain, the electrostimulation energy may be decreased. The controller circuit312may also adjust stimulation parameters to alleviate side effects introduced by the electrostimulation of the target tissue. Additionally or alternatively, the controller circuit312may control the therapy unit250to deliver electrostimulation pulses via specific electrodes. In an example of pain management via SCS, a plurality of segmented electrodes, such as the electrodes116, may be distributed in one or more leads. The controller circuit312may configure the therapy unit250to deliver electrostimulation pulses via a set of electrodes selected from the plurality of electrodes. The electrodes may be manually selected by a system user or automatically selected based on the pain score. Examples of selecting electrodes for electrostimulation based on the pain score are discussed below, such as with reference toFIGS.4A-B. The implantable neuromodulator310may receive the information about electrostimulation parameters and the electrode configuration from the external system320via the communication link120. Additional parameters associated with operation of the therapy unit250, such as battery status, lead impedance and integrity, or device diagnostic of the implantable neuromodulator310, may be transmitted to the external system320. The controller circuit312may control the generation and delivery of electrostimulation using the information about electrostimulation parameters and the electrode configuration from the external system320. 
Examples of the electrostimulation parameters and electrode configuration may include: temporal modulation parameters such as pulse amplitude, pulse width, pulse rate, or burst intensity; morphological modulation parameters respectively defining one or more portions of stimulation waveform morphology such as amplitude of different phases or pulses included in a stimulation burst; or spatial modulation parameters such as selection of active electrodes, electrode combinations which define the electrodes that are activated as anodes (positive), cathodes (negative), and turned off (zero), and stimulation energy fractionalization which defines amount of current, voltage, or energy assigned to each active electrode and thereby determines spatial distribution of the modulation field. In an example, the controller circuit312may control the generation and delivery of electrostimulation in a closed-loop fashion by adaptively adjusting one or more stimulation parameters or stimulation electrode configuration based on the pain score. For example, if the score exceeds the pain threshold (or falls within a specific range indicating an elevated pain), then the first electrostimulation may be delivered. Conversely, if the composite pain score falls below a respective threshold value (or falls within a specific range indicating no pain or mild pain), then a second pain therapy, such as second electrostimulation may be delivered. The first and second electrostimulations may differ in at least one of the stimulation energy, pulse amplitude, pulse width, stimulation frequency, duration, on-off cycle, pulse shape or waveform, electrostimulation pattern such as electrode configuration or energy fractionalization among active electrodes, among other stimulation parameters. In an example, the first electrostimulation may have higher energy than the second electrostimulation, such as to provide stronger effect of pain relief. Examples of increased electrostimulation energy may include a higher pulse intensity, a higher frequency, and a longer stimulation duration or “on” cycle, among others. The parameter adjustment or stimulation electrode configuration may be executed continuously, periodically at specific time, duration, or frequency, or in a commanded mode upon receiving from a system user a command or confirmation of parameter adjustment. In some examples, the closed-loop control of the electrostimulation may be further based on the type of the pain, such as chronic or acute pain. In an example, the pain analyzer circuit220may trend the signal metric over time to compute an indication of abruptness of change of the signal metrics, such as a rate of change over a specific time period. The pain episode may be characterized as acute pain if the signal metric changes abruptly (e.g., the rate of change of the signal metric exceeding a threshold), or as chronic pain if the signal metric changes gradually (e.g., the rate of change of the signal metric falling below a threshold). The controller circuit312may control the therapy unit250to deliver, withhold, or otherwise modify the pain therapy in accordance with the pain type. For example, incidents such as toe stubbing or bodily injuries may cause abrupt changes in certain signal metrics, but no adjustment of the closed-loop pain therapy is deemed necessary. On the contrary, if the pain analyzer circuit220detects chronic pain characterized by gradual signal metric change, then the closed-loop pain therapy may be delivered accordingly. 
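A condensed sketch of the closed-loop behavior just described, in which stimulation energy is raised for an elevated pain score, lowered for a low score, and left unchanged when an abrupt metric change suggests a transient acute episode, might look as follows; the thresholds, step size, and amplitude limits are assumptions chosen only for illustration.

```python
def adjust_amplitude(current_ma, pain_score, rate_of_change,
                     pain_high=6.0, pain_low=3.0, acute_rate=2.0,
                     step_ma=0.2, min_ma=0.0, max_ma=5.0):
    """Illustrative closed-loop rule (not the patented algorithm).

    current_ma     : present stimulation pulse amplitude (mA)
    pain_score     : fused pain score (0..10)
    rate_of_change : recent rate of change of the underlying signal metric;
                     an abrupt change is treated as transient acute pain and
                     no adjustment is made.
    """
    if abs(rate_of_change) >= acute_rate:
        return current_ma                           # acute/transient pain: leave therapy unchanged
    if pain_score >= pain_high:
        return min(max_ma, current_ma + step_ma)    # elevated chronic pain: increase energy
    if pain_score <= pain_low:
        return max(min_ma, current_ma - step_ma)    # little or no pain: back off energy
    return current_ma

print(adjust_amplitude(2.0, pain_score=7.5, rate_of_change=0.1))  # 2.2
print(adjust_amplitude(2.0, pain_score=7.5, rate_of_change=5.0))  # 2.0 (acute episode)
```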
The adaptive adjustment of stimulation parameters or stimulation electrode based on the pain score as discussed above may be based on paresthesia effect, that is, patient perception of stimulation and its effect on pain. The adaptive adjustment may provide desired paresthesia coverage while minimizing patient discomfort and/or energy usage. In some examples, the controller circuit312may adjust stimulation parameters or stimulation electrode for sub-perception stimulation (e.g., sub-perception SCS) using the sensed brain activity. In contrast to supra-perception stimulation where paresthesia may be readily felt by the patient, sub-perception stimulation may take several hours or over a day before a patient may be able to assess the therapeutic effect of the stimulation. Electrode location or other stimulation parameters may be varied, while the pain analyzer circuit220may monitor the brain activity for indicators that predict stimulation efficacy, such as based on a comparison to the brain activity signal template representative of effective prevention of pain sensation. Even though the pain might not be reduced yet by stimulation, the brain activity may show early indications that predict the therapeutic effect of pain relief. The external system320may include the user interface240, a weight generator322, and a programmer circuit324. The weight generator322may generate weight factors used by the pain score generator225to generate the pain score. The weight factors may indicate the signal metrics' reliability in representing an intensity of the pain. A signal metric that is more reliable, or more sensitive or specific to the pain, would be assigned a larger weight than another signal metric that is less reliable, or less sensitive or specific to the pain. In an example, the weight factors may be proportional to correlations between a plurality of quantified pain scales (such as reported by a patient) and measurements of the signal metrics corresponding to the plurality of quantified pain scales. A signal metric that correlates with the pain scales is deemed a more reliable signal metric for pain quantification, and is assigned a larger weight factor than another signal metric less correlated with the quantified pain scales. In another example, the weight generator322may determine weight factors using the signal sensitivity to pain. The signal metrics may be trended over time, such as over approximately six months. The signal sensitivity to pain may be represented by a rate of change of the signal metrics over time during a pain episode. The signal sensitivity to pain may be evaluated under a controlled condition such as when the patient posture or activity is at a specific level or during specific time of the day. The weight generator322may determine weight factors to be proportional to the signal metric's sensitivity to pain. The programmer circuit324may produce parameter values for operating the implantable neuromodulator310, including parameters for sensing physiological signals and generating signal metrics, and parameters or electrode configurations for electrostimulation. In an example, the programmer circuit324may generate the stimulation parameters or electrode configurations for SCS based on the pain score produced by the pain score generator225. Through the communication link120, the programmer circuit324may continuously or periodically provide adjusted stimulation parameters or electrode configuration to the implantable neuromodulator310. 
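One loose illustration of how correlation-proportional weight factors of the kind attributed to the weight generator322above might be computed from patient-reported pain scales and concurrent metric measurements is sketched below, using NumPy's Pearson correlation; all names and data are fabricated for the example.

```python
import numpy as np

def correlation_weights(reported_scales, metric_measurements):
    """Illustrative weight factors proportional to each metric's correlation
    with patient-reported pain scales (normalized to sum to 1).

    reported_scales     : sequence of quantified pain scales reported by the patient
    metric_measurements : dict of metric name -> sequence of concurrent measurements
    """
    raw = {}
    for name, values in metric_measurements.items():
        r = np.corrcoef(reported_scales, values)[0, 1]
        raw[name] = max(r, 0.0)          # ignore anti-correlated metrics, for simplicity
    total = sum(raw.values()) or 1.0
    return {name: r / total for name, r in raw.items()}

# Fabricated example data: five pain episodes with reported scales on a 0-10 scale.
scales = [2, 4, 5, 7, 9]
metrics = {
    "alpha_power":     [0.30, 0.42, 0.47, 0.61, 0.74],  # tracks the reported pain closely
    "theta_peak_freq": [6.1, 6.0, 6.4, 6.2, 6.3],        # only weakly related
}
print(correlation_weights(scales, metrics))  # larger weight for alpha_power
```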
By way of non-limiting example and as illustrated inFIG.3, the programmer circuit324may be coupled to the user interface234to allow a user to confirm, reject, or edit the stimulation parameters, sensing parameters, or other parameters controlling the operation of the implantable neuromodulator210. The programmer circuit324may also adjust the stimulation parameter or electrode configuration in a commanded mode upon receiving from a system user a command or confirmation of parameter adjustment. The programmer circuit324, which may be coupled to the weight generator322, may initiate a transmission of the weight factors generated by the weight generator322to the implantable neuromodulator310, and store the weight factors in the memory230. In an example, the weight factors received from the external system320may be compared to previously stored weight factors in the memory230. The controller circuit312may update the weight factors stored in the memory230if the received weight factors are different than the stored weights. The pain analyzer circuit220may use the updated weight factors to generate a pain score. In an example, the update of the stored weight factors may be performed continuously, periodically, or in a commanded mode upon receiving a command from a user. In various examples, weight factors may be updated using a fusion model. Commonly assigned U.S. Provisional Patent Application Ser. No. 62/445,095, entitled “PATIENT-SPECIFIC CALIBRATION OF PAIN QUANTIFICATION” describes systems and methods for calibrating a fusion model, such as adjusting weights for signal metrics, using a reference pain quantification, the disclosure of which is incorporated herein by reference in its entirety. In some examples, the pain score may be used by a therapy unit (such as an electrostimulator) separated from the pain management system300. In various examples, the pain management system300may be configured as a monitoring system for pain characterization and quantification without delivering closed-loop electrostimulation or other modalities of pain therapy. The pain characterization and quantification may be provided to a system user such as the patient or a clinician, or to a process including, for example, an instance of a computer program executable in a microprocessor. In an example, the process includes computer-implemented generation of recommendations or an alert to the system user regarding pain medication (e.g., medication dosage and time for taking a dose), electrostimulation therapy, or other pain management regimens. The therapy recommendations or alert may be based on the pain score, and may be presented to the patient or the clinician in various settings including in-office assessments (e.g. spinal cord stimulation programming optimization), in-hospital monitoring (e.g. opioid dosing during surgery), or ambulatory monitoring (e.g. pharmaceutical dosing recommendations). In an example, in response to the pain score exceeding a threshold which indicates elevated pain symptom, an alert may be generated and presented at the user interface240to remind the patient to take pain medication. In another example, therapy recommendations or alerts may be based on information about wearing-off effect of pain medication, which may be stored in the memory230or received from the user interface240. When the drug effect has worn off, an alert may be generated to remind the patient to take another dose or to request a clinician review of the pain prescription. 
In yet another example, before a pain therapy such as neurostimulation therapy is adjusted (such as based on the pain score) and delivered to the patient, an alert may be generated to forewarn the patient or the clinician of any impending adverse events. This may be useful as some pain medication may have fatal or debilitating side effects. In some examples, the pain management system300may identify effects of pain medication addiction such as based on patient physiological or functional signals. An alert may be generated to warn the patient about effects of medication addiction and thus allow medical intervention. In some examples, the pain analyzer circuit220may be alternatively included in the external system320. The pain analyzer circuit220, or a portion of the pain analyzer circuit220such as the signal metrics generator221or the pain score generator225, may be included in a wearable device configured to be worn or carried by a subject. At least a portion of the sensor circuit210may also be included in the external system320, such that the physiological signals indicative of brain electromagnetic activity that are sensed by one or more physiological sensors (e.g., ambulatory EEG sensors or bedside EEG sensors) may be transmitted to the external system320for processing and for generating the pain score based on the processed brain electromagnetic activity signals. A clinician may use the external system320to program the implantable neuromodulator310with appropriate pain therapy based on the pain score generated at the external system320, such as during a clinical trial or patient follow-up visit at the clinic. FIGS.4A-Billustrate, by way of example and not limitation, block diagrams of portions of a system for selecting active electrodes for delivering pain-relief electrostimulation energy based on the pain score.FIG.4Aillustrates an IPG410operably coupled to two neuromodulation leads420A-B via a header412. The IPG410can be an embodiment of the IPG110as shown inFIG.1. The IPG410includes a can housing411that encloses circuitry and other components for sensing physiological signals, delivering electrostimulations, and controlling other device operations. The neuromodulation leads420A-B each include a plurality of electrodes430axially disposed along an elongated cylindrical lead body. The electrodes430may be used for delivering neuromodulation of a specific target tissue, such as SCS at a spinal cord region, DBS at a brain region, or PNS at or next to a peripheral nerve. The electrodes430may take the form of column electrodes (or ring electrodes) or circumferentially segmented electrodes with specified electrode size, shape, and inter-electrode spacing along the length of the respective lead body. By way of example and not limitation, the lead420A may carry electrodes E1-E8, and the lead420B may carry electrodes E9-E16. In some examples, at least some of the electrodes430may also be coupled to a sensor circuit to sense tissue electrical activity, such as brain activity or neural activity at or near the spinal cord. FIG.4Billustrates a diagram of electrode selection for delivering pain-relief electrostimulation from a plurality of candidate electrodes such as the electrodes430on one or both of the neuromodulation leads420A-B. The electrode selection may be performed using the pain management system200or300. The electrode selection may be based on relative pain-reduction effects when electrostimulation energy is delivered according to configurations involving one or more of the candidate electrodes. 
The pain episode may include spontaneous pain experienced in patient daily life. Alternatively, a pain episode may be induced such as in a clinic and administered by a clinician. In an example, pain may be induced by delivering electrostimulation energy according to a pre-determined stimulation protocol. The pre-determined stimulation protocol may include a plurality of electrode configurations arranged in a specified order. Each electrode configuration may include a designation of an anode and a cathode, each selected from the candidate electrodes (such as some or all of the electrodes430) and a reference electrode such as the device can housing411. In an example, the electrode configuration includes a unipolar configuration with one of the candidate electrodes (such as E1-E16) designated as a cathode and the device can housing411as an anode. In another example, the electrode configuration includes a bipolar configuration with one of the candidate electrodes (such as E1-E16) designated as a cathode and another candidate electrode, different than the cathode, as an anode. In some examples, pain may be induced by temporarily withholding pain-relief therapy (such as electrostimulation) or varying therapy dosage to achieve intermediate levels of pain reduction effect. Additionally or alternatively, a pain induction procedure may include applying heat, pressure, or other artificial stimuli during quantitative sensory testing, administering nerve block or adjusting pharmaceutical agents, psychological or stress stimulation, or physical exercise such as strenuous leg lift or grip test, among others. A pain assessment session may be initiated to analyze patient perception and physiological responses to the spontaneous or induced pain episodes. The pain assessment session may be automatically triggered by a sensor indicator, or activated manually by the patient (such as during a spontaneous pain episode) or a clinician (such as during an induced pain episode). The pain assessment session may include evaluating the electrostimulation's pain-relief effect. During the pain assessment session, physiological signals indicative of patient brain activity, such as an EEG signal, may be recorded during the pain-relief electrostimulation according to each of the electrode configurations in the pre-determined stimulation protocol, and analyzed such as using the pain management system200or300. A plurality of EEG parameters may be extracted from the sensed EEG signal, such as using the signal metrics generator221. By way of example and not limitation, a pain score report451includes metric-specific pain scores corresponding to pain-relief electrostimulation applied according to an electrode configuration with electrode E1as a cathode and the can housing411as an anode. The metric-specific pain scores may be determined by comparing the respective signal metrics, i.e., the EEG parameters, to their respective threshold values. A positive indicator “+”, or a metric-specific numerical score of “1”, is assigned for an EEG parameter if that EEG parameter exceeds its respective threshold value, indicating pain persistence or undesirable pain reduction. Conversely, a negative indicator “−”, or a metric-specific numerical score of “0”, is assigned for an EEG parameter if that EEG parameter falls below its respective threshold value and indicates no pain or desirable pain reduction. A composite pain score may be computed using a combination of the metric-specific pain scores corresponding to the EEG parameters evaluated. 
In an example, the composite pain score may be computed as a sum or weighted sum of the metric-specific pain scores. In the illustrated example inFIG.4B, a total score of “2” is obtained for the electrode configuration involving electrode E1. The above illustrated process may similarly be performed for other electrode configurations, which may result in a pain score report452with a composite pain score of “1” for electrode configuration involving electrode E2, another pain score report453with a composite pain score of “0” for electrode configuration involving electrode E3, and so on. The composite pain scores associated with the electrode configurations included in the stimulation protocol may be presented to the patient or a clinician, such as in a form of a table460. In lieu of or in addition to the numerical pain scores, graphical representations, such as a colored bar representing the composite pain scores, may be included in the table460. In the example illustrated inFIG.4B, the electrode E5corresponds to a pain score of “4”, which is the highest among the tested electrodes E1-E8, indicating the least effectiveness in pain reduction compared to pain-relief electrostimulation delivered according to electrode configurations involving other electrodes different from electrode E5. The electrodes E3, E4and E6each corresponds to a pain score of “0”, the lowest among the tested electrodes E1-E8, indicating the highest effectiveness in pain reduction. As such, in a closed-loop pain therapy or clinician programmed pain therapy, the electrode E5may be excluded, and at least one of the electrodes E3, E4or E6may be selected as active electrodes (such as cathodes) for delivering electrostimulation energy. In some examples, as an alternative of the metric-specific pain score, a metric-specific pain reduction score may be determined for each EEG parameter. A pain reduction score of “1” is assigned if the EEG parameter indicates pain relief (or desirable pain reduction) and a pain reduction score of “0” is assigned if the EEG parameter indicates pain persistence (or undesirable pain reduction). A composite pain reduction score may be computed using a combination of the metric-specific pain reduction scores. One or more electrodes that correspond to the highest composite pain reduction score among the tested electrodes E1-E8indicate the highest effectiveness in pain reduction, and may be selected as active electrodes (such as cathodes) for delivering electrostimulation energy. The above-discussed electrode selection based on pain scores associated with EEG parameters may be modified for selecting, or determining values of, one or more other therapy parameters, including: electrode energy fractionalization which defines amount of current, voltage, or energy assigned to each active electrode and thereby determines spatial distribution of the modulation field; temporal modulation parameters such as pulse amplitude, pulse width, pulse rate, or burst intensity; morphological modulation parameters respectively defining one or more portions of stimulation waveform morphology such as amplitude of different phases or pulses included in a stimulation burst, among others. The disclosed method may also be used in selecting one or more active therapy regimes from a plurality of candidate therapy regimes each involving a combination of multiple therapy parameters such as electrode selection, energy fractionalization, waveform temporal and morphological parameters. 
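A compact sketch of the ranking-and-selection idea illustrated inFIG.4B and discussed above (score each candidate electrode configuration from its per-parameter pain indicators and prefer the configuration with the lowest composite pain score) is given below; the indicator values are fabricated and the helper name is a hypothetical, not a function of the disclosed system.

```python
def select_electrodes(config_indicators):
    """Illustrative selection of active electrodes from metric-specific pain
    indicators recorded during test stimulation of each candidate electrode.

    config_indicators : dict of electrode name -> list of 0/1 indicators, where
                        1 means the EEG parameter exceeded its threshold
                        (pain persisted) and 0 means desirable pain reduction.
    Returns (composite scores, electrodes with the lowest composite score).
    """
    scores = {e: sum(flags) for e, flags in config_indicators.items()}
    best = min(scores.values())
    selected = [e for e, s in scores.items() if s == best]
    return scores, selected

# Fabricated indicators for four EEG parameters per tested electrode.
indicators = {
    "E1": [1, 0, 1, 0],   # composite score 2
    "E2": [0, 0, 1, 0],   # composite score 1
    "E3": [0, 0, 0, 0],   # composite score 0 -> candidate cathode
    "E5": [1, 1, 1, 1],   # composite score 4 -> excluded
}
scores, selected = select_electrodes(indicators)
print(scores, selected)   # E3 has the lowest score and would be preferred
```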
For example, in an automated closed-loop pain therapy or clinician programmed pain therapy, a particular value for a specific therapy parameter, or a particular therapy regime, may be selected and programmed to the IPG411for delivering electrostimulation therapy to relieve patient pain. FIG.5illustrates, by way of example and not limitation, a block diagram of a portion of the system for sensing brain electromagnetic activities such as an EEG and generating EEG parameters for pain quantification. The EEG parameters may be used by the pain management system200or300to characterize and quantify patient pain. The system portion may include one or more EEG sensors501through504, the sensor circuit210, and an EEG parameter generator520which is an embodiment of the signal metrics generator221. One or more types of EEG sensors may be used to sense the EEG signals. According to the manner of interaction with the patient, the EEG sensors may include, by way of example and not limitation, one or more of a wearable wired EEG sensor501, a wearable wireless EEG sensor502, an implantable lead-based EEG sensor503, or an implantable wireless EEG sensor504. The wearable wired EEG sensor501may be worn on a patient head and connected to a bedside stationary EEG monitor. An example of the wearable wired EEG sensor501may include an EEG cap with scalp electrodes mounted thereon such as according to the international 10-20 system. The wearable wireless EEG sensor502may be mounted on a removable headwear, such as a cap, a hat, a headband, or eye glasses, among others. Alternatively, the wearable wireless EEG sensor502may be mounted on a removable accessory such as an earpiece, an ear plug, or an ear patch. The earpiece may be personalized to allow tight fit within patient concha and ear canal and secure electrode-tissue contact. Alternatively, the electrodes may be placed close to the ear such as hidden behind the ear lobe. The implantable lead-based EEG sensor503may include electrodes disposed on an implantable lead configured to be positioned on a target tissue site for therapeutic electrostimulation, such as a lead configured to be implanted in patient brain for DBS, or a lead implanted at a head location to provide occipital or trigeminal PNS. The electrodes may not only be used to provide electrostimulation energy at the implanted sites to treat pain, but can also be coupled to a sensor circuit to sense brain activity such as an EEG. An example of the implantable lead-based EEG sensor is illustrated inFIG.4A. The implantable wireless EEG sensor504may be subcutaneously implanted at a head location to sense an EEG signal. The wearable wireless EEG sensor502and the implantable wireless EEG sensor504may each include a transmitter circuit configured for transmitting the sensed EEG signal to the sensor circuit210or the IPG411via a wireless communication link, such as a Bluetooth protocol, an inductive telemetry link, a radio-frequency telemetry link, Ethernet, or IEEE 802.11 wireless, among others. The sensor circuit210may be communicatively coupled to the one or more EEG sensors501-504via a wired or wireless connection. The sensor circuit210may include a sense amplifier circuit that may pre-process the sensed EEG signal. From the processed physiological signals, the EEG parameter generator520may extract one or more EEG parameters. In an example, at least a part of the sensor circuit210or the EEG parameter generator520may be implemented in, and executed by, a mobile device.
Examples of the mobile device may include a smart phone, a wearable device, a fitness band, a portable health monitor, a tablet, a laptop computer, or other types of portable computerized device. Alternatively, at least a part of the sensor circuit210or the EEG parameter generator520may be included in a wearable device incorporating signal processing circuitry to analyze the EEG signals and generate pain scores. The wearable device may be worn on or otherwise associated with the wrist, arm, upper or lower leg, trunk, or other body part suitable for a tight or loose belt-band containing the wearable, or located inside a wallet, a purse, or other handheld accessories. The EEG parameter generator520may generate one or more EEG parameters from the sensed EEG signal. By way of example and not limitation, the EEG parameters may include timing parameters, temporal statistical parameters, morphology parameters, and spectral parameters. Examples of the timing parameters may include a time interval between a first characteristic point in one signal and a second characteristic point in another signal. Examples of the statistical parameters may include signal mean, median, or other central tendency measures or a histogram of the signal intensity, variance, standard deviation, or higher-order statistics, among others. Examples of the morphological parameters may include a maximum or minimum within a specific time period such as a cardiac cycle, or a positive or negative slope, among others. In some examples, the sensor circuit210may perform signal transformation on the sensed EEG signal, such as a Fourier transform or wavelet transform. One or more signal metrics may be extracted from the transformed EEG signals, which may include signal power spectra at specific frequency bands, dominant frequency, coherence, spectral entropy, mutual information, frequency shift of spectral peaks, spectral width or a Q-factor of power spectra, or other features extracted from the frequency domain or other transformed domain. In an example, multiple epochs of EEG recordings, each having a specified duration, may be collected. The sensor circuit210may include a filter bank comprising filters with respective characteristics such as passbands and center frequencies. In an example, each epoch of EEG recording may be filtered through the filter bank to obtain one or more of: a delta wave within a frequency band of approximately 1-4 Hertz (Hz), a theta wave within a frequency band of approximately 4-7 Hz, an alpha wave within a frequency band of approximately 8-15 Hz, or a beta wave within a frequency band of approximately 16-30 Hz, among others. The EEG parameters may include power spectra, dominant frequency, or other spectral parameters of these distinct EEG waves at distinct frequency bands averaged over the multiple epochs. In some examples, EEG signals may be collected from various brain regions of interest, which may include frontal, central, parietal, occipital, and temporal regions. The EEG parameters may include power spectra, dominant frequency, or other spectral parameters of the distinct EEG waves corresponding to different brain regions of interest. The pain score generator225may generate a pain score based at least on the EEG parameters. FIG.6illustrates, by way of example and not limitation, a method600for managing pain of a patient. The method600may be implemented in a medical system, such as the pain management system200or300.
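To make the filter-bank band extraction described above concrete, the following is a minimal Python sketch assuming a single-channel EEG epoch sampled at a known rate and assuming NumPy and SciPy are available; the filter order, sampling rate, and synthetic test signal are illustrative assumptions only.

# Band-pass filter bank over one EEG epoch; in practice the band powers would be
# averaged over multiple epochs as described above.
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {"delta": (1, 4), "theta": (4, 7), "alpha": (8, 15), "beta": (16, 30)}  # approximate Hz ranges

def band_powers(epoch, fs):
    """Filter one epoch through the band-pass filter bank and return mean power per band."""
    powers = {}
    for name, (lo, hi) in BANDS.items():
        b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
        filtered = filtfilt(b, a, epoch)
        powers[name] = float(np.mean(filtered ** 2))
    return powers

fs = 256.0                                                    # assumed sampling rate
t = np.arange(0, 10, 1 / fs)
epoch = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)   # synthetic 10 Hz "alpha" content
print(band_powers(epoch, fs))                                 # alpha-band power should dominate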
In an example, at least a portion of the method600may be executed by a neuromodulator device (IND) such as the implantable neuromodulator310. In an example, at least a portion of the method600may be executed by an external programmer or remote server-based patient management system, such as the external system320that is communicatively coupled to the IND. The method600may be used to provide neuromodulation therapy to treat chronic pain or other disorders. The method600begins at step610, where at least one physiological signal indicative of patient brain activity may be sensed from the patient, such as using the sensor circuit210. Examples of the brain activity signal may include electroencephalography (EEG), magnetoencephalography (MEG), or a brain-evoked potential, among other brain electromagnetic signals. The brain activity signals may be associated with the patient's vulnerability to experiencing chronic pain. Therefore, monitoring of patient brain electromagnetic activity may provide an objective assessment of pain. The brain activity signal may be sensed via an implantable or wearable sensor associated with the patient, such as one or more of the EEG sensors501-504as illustrated inFIG.5for sensing EEG signals using implantable electrodes or sensors, or non-invasive surface electrodes or sensors. In some examples, the EEG signals may be collected from various brain regions of interest, which may include frontal, central, parietal, occipital, and temporal regions. The brain activity signal may alternatively be sensed via a bedside monitor such as an EEG monitor. In various examples, other physiological signals may additionally be sensed at610, including, for example, cardiac, pulmonary, neural, or biochemical signals each having characteristic signal properties indicative of onset, intensity, severity, duration, or patterns of pain. In some examples, in addition to the brain activity signals and other physiological signals, one or more functional signals may be sensed at610, such as via one or more implantable or wearable motion sensors. Examples of the functional signals may include patient posture, gait, balance, or physical activity signals, among others. The functional signals may be used together with the brain activity signal in assessing patient pain. At620, one or more signal metrics may be generated from the sensed one or more brain activity signals. The signal metrics may include temporal or spatial parameters, statistical parameters, or morphological parameters. In an example where the sensed physiological signal includes one or more EEG, MEG, or a brain-evoked potential, the signal metrics may be indicative of strength or a pattern of brain electromagnetic activity associated with pain. In some examples, the sensed at least one brain activity signal may be processed by applying a signal transformation such as a Fourier transform or wavelet transform. One or more signal metrics may be extracted from the transformed signals, such as signal power spectra at specific frequency bands, dominant frequency, coherence, spectral entropy, mutual information, frequency shift of spectral peaks, spectral width or a Q-factor of power spectra, or other features. In an example, the signal metrics may include one or more of EEG timing parameters, EEG temporal statistical parameters, EEG morphology parameters, or EEG power spectral parameters, as illustrated inFIG.5.
The EEG power spectra at a plurality of frequency bands correspond to distinct EEG components, including a delta wave at approximately 1-4 Hz, a theta wave at approximately 4-7 Hz, an alpha wave at approximately 8-15 Hz, or a beta wave at approximately 16-30 Hz, among others. In some examples, the EEG parameters may include respective spectral parameters corresponding to different brain regions of interest. At630, a pain score may be generated using the measurements of the signal metrics indicative of brain electromagnetic activity. The pain score may be represented as a numerical or categorical value that quantifies overall pain quality in the subject. In an example, a composite signal metric may be generated using a combination of the signal metrics weighted by their respective weight factors. The composite signal metric may be categorized as one of a number of degrees of pain by comparing the composite signal metric to one or more threshold values or range values, and a corresponding pain score may be assigned based on the comparison. In another example, the signal metrics may be compared to their respective threshold values or range values and a corresponding signal metric-specific pain score may be determined. A composite pain score may be generated using a linear or nonlinear fusion of the signal metric-specific pain scores each weighted by their respective weight factors. In some examples, the pain score may be computed using a subset of the signal metrics selected based on their temporal profile of pain response. Signal metrics with quick pain response (or a shorter transient state of response) may be selected to compute the pain score during a pain episode. Signal metrics with slow or delayed pain response (or a longer transient state of response before reaching a steady state) may be used to compute the pain score after an extended period following the onset of pain, so as to allow the signal metrics to reach a steady state of response. In some examples, patient demographic information such as patient age or gender may be used in computing the pain score. A higher pain threshold for the composite signal metric may be selected for male patients than for female patients. Additionally or alternatively, the respective weight factors may be determined based on patient demographic information. For example, the weight factors for the signal metrics in a male patient may be tuned to a lower value than the weight factors for the same signal metrics in a female patient. At642, the pain score may be output to a user or to a process, such as via the output unit242as illustrated inFIG.2. The pain score, including the composite pain score and optionally together with metric-specific pain scores, may be displayed on a display screen. Other information such as the brain activity signals and the signal metrics extracted from the brain activity signals may also be output for display or for further processing. In some examples, alerts, alarms, emergency calls, or other forms of warnings may be generated to signal the system user about occurrence of a pain episode or aggravation of pain as indicated by the pain score. The method600may include, at644, an additional step of delivering a pain therapy to the patient according to the pain score. The pain therapy may include electrostimulation therapy, such as spinal cord stimulation (SCS) via electrodes electrically coupled to the electrostimulator.
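As a hedged illustration of the weighted fusion and threshold-based categorization at step 630, the short sketch below combines metric-specific scores into a composite score and maps it to an ordinal pain category, with an optional demographic adjustment of the weights. The weight factors, category bounds, and male scaling factor are hypothetical assumptions.

# Illustrative linear fusion of metric-specific pain scores into a composite score,
# followed by threshold-based categorization; all numeric values are assumptions.

def fuse(metric_scores, weights):
    """Linear weighted fusion of signal metric-specific pain scores."""
    return sum(weights[m] * s for m, s in metric_scores.items())

def categorize(composite, bounds=(0.5, 1.5, 2.5),
               labels=("none/mild", "moderate", "severe", "extreme")):
    """Map the composite score onto an ordinal pain category via threshold comparison."""
    for bound, label in zip(bounds, labels):
        if composite < bound:
            return label
    return labels[-1]

def adjusted_weights(weights, sex, male_factor=0.8):
    """Example demographic tuning: scale the weight factors down for male patients."""
    factor = male_factor if sex == "male" else 1.0
    return {m: w * factor for m, w in weights.items()}

scores = {"alpha_power": 1, "theta_power": 0, "spectral_entropy": 1}
weights = adjusted_weights({"alpha_power": 1.0, "theta_power": 0.8, "spectral_entropy": 1.2}, "male")
composite = fuse(scores, weights)
print(round(composite, 2), categorize(composite))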
The SCS may be in the form of stimulation pulses that are characterized by pulse amplitude, pulse width, stimulation frequency, duration, on-off cycle, waveform, among other stimulation parameters. Other electrostimulation therapy, such as one or a combination of DBS, FES, VNS, TNS, or PNS at various locations, may be delivered for pain management. The pain therapy may additionally or alternatively include a drug therapy such as delivered by using an intrathecal drug delivery pump. In various examples, the pain therapy (such as in the form of electrostimulation or drug therapy) may be delivered in a closed-loop fashion. Therapy parameters, such as stimulation waveform parameters, stimulation electrode combination and fractionalization, or drug dosage, may be adaptively adjusted based at least on the pain score. The pain-relief effect of the delivered pain therapy may be assessed based on the signal metrics such as the cardiovascular parameters, and the therapy may be adjusted to achieve desirable pain relief. The therapy adjustment may be executed continuously, periodically at a specific time, duration, or frequency, or in a commanded mode upon receiving from a system user a command or confirmation of parameter adjustment. In an example, if the pain score exceeds the pain threshold (or falls within a specific range indicating an elevated pain), then a first electrostimulation may be delivered. Conversely, if the composite pain score falls below a respective threshold value (or falls within a specific range indicating no pain or mild pain), then a second pain therapy, such as a second electrostimulation, may be delivered. The first and second electrostimulations may differ in at least one of the stimulation energy, pulse amplitude, pulse width, stimulation frequency, duration, on-off cycle, pulse shape or waveform, electrostimulation pattern such as electrode configuration or energy fractionalization among active electrodes, among other stimulation parameters. In some examples, the therapy adjustment may include selecting a set of electrodes, based on the pain scores, from a plurality of candidate electrodes disposed along the length of an implantable lead. The electrodes may be manually selected by a system user, or automatically selected based on a comparison of the pain scores associated with pain-relief electrostimulation delivered via the respective candidate electrodes. Examples of the electrode selection method based on the pain scores are discussed below, such as with reference toFIG.7. The method600may proceed at610to sense functional signals in response to the therapy delivered at644. In some examples, the responses of the signal metrics to pain therapy delivered at644may be used to gauge the composite pain score computation, such as by adjusting the weight factors. In an example, weight factors may be determined and adjusted via the weight generator322as illustrated inFIG.3, to be proportional to the signal metric's sensitivity to pain. FIG.7illustrates, by way of example and not limitation, a method700for selecting one or more active electrodes for delivering electrostimulation for pain therapy. The electrode selection may be based on pain scores generated based on physiological signals sensed during a pain episode. The method700may be implemented in a medical system, such as the pain management system200or300. In an example, at least a portion of the method700may be executed by a neuromodulator device (IND) such as the implantable neuromodulator310.
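As one hedged sketch of the closed-loop adjustment described above, in which the pain score is compared against a threshold to switch between a first and a second electrostimulation, the snippet below selects between two hypothetical stimulation programs; the parameter values and the threshold are illustrative assumptions, not prescribed settings.

# Closed-loop program selection driven by the pain score; values are illustrative only.
FIRST_STIM = {"amplitude_mA": 4.0, "pulse_width_us": 300, "rate_hz": 80}    # for elevated pain
SECOND_STIM = {"amplitude_mA": 2.0, "pulse_width_us": 200, "rate_hz": 40}   # for no/mild pain

def select_therapy(pain_score, pain_threshold=2):
    """Return the stimulation program indicated by the current pain score."""
    return FIRST_STIM if pain_score >= pain_threshold else SECOND_STIM

for score in (0, 1, 3):
    print(score, select_therapy(score))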
In an example, at least a portion of the method700may be executed by the external system320that is communicatively coupled to the IND. The external system320may include an external programmer, a wearable device, or a remote server-based patient management system, among others. The method700begins at710, where one or more pain episodes may be monitored or induced. Spontaneous pain episodes that occur in an ambulatory setting in patient daily life may be monitored at712such as using one or more physiological sensors. Additionally or alternatively, one or more pain episodes may be induced at712. Pain induction may be performed in a clinic and administered by a medical professional. Examples of the pain induction procedure may include applying heat, pressure, or other artificial stimuli during quantitative sensory testing, administering nerve block or adjusting pharmaceutical agents, temporarily withholding pain-relief therapy or varying therapy dosage to achieve intermediate levels of pain reduction effect, psychological or stress stimulation, or physical exercise such as strenuous leg lift or grip test, among others. At720, a pain assessment session may be initiated during spontaneous or induced pain, either automatically triggered by a sensor indicator or activated manually by the patient (such as during a spontaneous pain episode) or a clinician (such as during an induced pain episode). The pain assessment session may include delivering electrostimulation energy according to a pre-determined stimulation protocol, and evaluating the electrostimulation's pain-relief effect. The pre-determined stimulation protocol may include a plurality of electrode configurations arranged in a specified order. Each electrode configuration includes an anode and a cathode, each selected from a plurality of candidate electrodes (such as electrodes E1-E16inFIG.4A) and a reference electrode (such as the device can housing411inFIG.4A). At730, at least one physiological signal indicative of patient brain activity may be sensed during the pain assessment session, such as using the pain management system200or300. In an example, an EEG signal may be recorded during the pain-relief electrostimulation according to each of the electrode configurations in the pre-determined stimulation protocol. EEG parameters may be extracted from the sensed EEG signal such as using the signal metrics generator221. Metric-specific pain scores corresponding to pain-relief electrostimulation applied according to an electrode configuration involving various candidate electrodes may be determined, such as illustrated inFIG.4B. A composite pain score is computed using a combination of the metric-specific pain scores corresponding to the EEG parameters evaluated. In an example, the composite pain score may be computed as a sum or weighted sum of the metric-specific pain scores. Composite pain scores may similarly be computed for pain-relief electrostimulation according to other electrode configurations. The composite pain scores associated with the electrode configurations included in the stimulation protocol may be presented to the patient or a clinician. At740, the composite pain scores associated with the electrode configurations included in the stimulation protocol may be compared to each other. At750, one or more active electrodes may then be selected based on the comparison.
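The following Python sketch strings these steps together at a high level: deliver stimulation per protocol entry, sense during stimulation, score, then compare and select. The deliver_stimulation() and record_eeg_parameters() functions are hypothetical placeholders for device control and EEG feature extraction, which are not specified here; thresholds and labels are illustrative assumptions.

# High-level orchestration sketch of the assessment session (steps 710-750).
import random

PROTOCOL = [{"cathode": f"E{i}", "anode": "CAN"} for i in range(1, 9)]   # unipolar configurations
THRESHOLDS = {"alpha_power": 0.5, "theta_power": 0.5}

def deliver_stimulation(config):
    pass                                              # placeholder: program and deliver stimulation

def record_eeg_parameters(config):
    return {m: random.random() for m in THRESHOLDS}   # placeholder: EEG parameters sensed during stimulation

def assess_protocol(protocol):
    report = {}
    for config in protocol:                           # step 720: stimulate per protocol entry
        deliver_stimulation(config)
        params = record_eeg_parameters(config)        # step 730: sense during stimulation
        report[config["cathode"]] = sum(int(v > THRESHOLDS[m]) for m, v in params.items())
    return report

report = assess_protocol(PROTOCOL)                    # composite pain score per configuration
best = min(report.values())
active = [c for c, s in report.items() if s == best]  # steps 740-750: compare and select
print(report, "selected:", active)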
The selected one or more active electrodes correspond to respective pain scores less than pain scores associated with other candidate electrodes different from the selected one or more active electrodes. In an example, one or more candidate electrodes that correspond to the lowest composite pain score may be selected, indicating the highest effectiveness in pain reduction. Alternatively, a metric-specific pain reduction score may be determined for each signal metric at730, where a pain reduction score of “1” indicates desirable pain reduction effect, and a pain reduction score of “0” indicates undesirable pain reduction effect. A composite pain reduction score may be computed using a combination of the metric-specific pain reduction scores. Composite pain reduction scores associated with the electrode configurations included in the stimulation protocol may be compared to each other at740. One or more electrodes that correspond to the highest composite pain reduction score among the candidate electrodes indicate the highest effectiveness in pain reduction, and may be selected at750as active electrodes for delivering electrostimulation energy. The method700for selecting active electrodes based on pain scores may be modified for selecting, or determining values of, one or more other therapy parameters, including: electrode energy fractionalization which defines amount of current, voltage, or energy assigned to each active electrode and thereby determines spatial distribution of the modulation field; temporal modulation parameters such as pulse amplitude, pulse width, pulse rate, or burst intensity; morphological modulation parameters respectively defining one or more portions of stimulation waveform morphology such as amplitude of different phases or pulses included in a stimulation burst, among others. The disclosed method may also be used in selecting one or more active therapy regimes from a plurality of candidate therapy regimes each involving a combination of multiple therapy parameters such as electrode selection, energy fractionalization, waveform temporal and morphological parameters. For example, in an automated closed-loop pain therapy or clinician programmed pain therapy, a particular value for a specific therapy parameter, or a particular therapy regime, may be selected and programmed to the IPG411for delivering electrostimulation therapy to relieve patient pain. FIG.8illustrates generally a block diagram of an example machine800upon which any one or more of the techniques (e.g., methodologies) discussed herein may perform. Portions of this description may apply to the computing framework of various portions of the LCP device, the IND, or the external programmer. In alternative embodiments, the machine800may operate as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine800may operate in the capacity of a server machine, a client machine, or both in server-client network environments. In an example, the machine800may act as a peer machine in peer-to-peer (P2P) (or other distributed) network environment. The machine800may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. 
Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), other computer cluster configurations. Examples, as described herein, may include, or may operate by, logic or a number of components, or mechanisms. Circuit sets are a collection of circuits implemented in tangible entities that include hardware (e.g., simple circuits, gates, logic, etc.). Circuit set membership may be flexible over time and underlying hardware variability. Circuit sets include members that may, alone or in combination, perform specified operations when operating. In an example, hardware of the circuit set may be immutably designed to carry out a specific operation (e.g., hardwired). In an example, the hardware of the circuit set may include variably connected physical components (e.g., execution units, transistors, simple circuits, etc.) including a computer readable medium physically modified (e.g., magnetically, electrically, moveable placement of invariant massed particles, etc.) to encode instructions of the specific operation. In connecting the physical components, the underlying electrical properties of a hardware constituent are changed, for example, from an insulator to a conductor or vice versa. The instructions enable embedded hardware (e.g., the execution units or a loading mechanism) to create members of the circuit set in hardware via the variable connections to carry out portions of the specific operation when in operation. Accordingly, the computer readable medium is communicatively coupled to the other components of the circuit set member when the device is operating. In an example, any of the physical components may be used in more than one member of more than one circuit set. For example, under operation, execution units may be used in a first circuit of a first circuit set at one point in time and reused by a second circuit in the first circuit set, or by a third circuit in a second circuit set at a different time. Machine (e.g., computer system)800may include a hardware processor802(e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory804and a static memory806, some or all of which may communicate with each other via an interlink (e.g., bus)808. The machine800may further include a display unit810(e.g., a raster display, vector display, holographic display, etc.), an alphanumeric input device812(e.g., a keyboard), and a user interface (UI) navigation device814(e.g., a mouse). In an example, the display unit810, input device812and UI navigation device814may be a touch screen display. The machine800may additionally include a storage device (e.g., drive unit)816, a signal generation device818(e.g., a speaker), a network interface device820, and one or more sensors821, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor. The machine800may include an output controller828, such as a serial (e.g., universal serial bus (USB), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate or control one or more peripheral devices (e.g., a printer, card reader, etc.). 
The storage device816may include a machine readable medium822on which is stored one or more sets of data structures or instructions824(e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions824may also reside, completely or at least partially, within the main memory804, within static memory806, or within the hardware processor802during execution thereof by the machine800. In an example, one or any combination of the hardware processor802, the main memory804, the static memory806, or the storage device816may constitute machine readable media. While the machine readable medium822is illustrated as a single medium, the term “machine readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions824. The term “machine readable medium” may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine800and that cause the machine800to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions. Non-limiting machine readable medium examples may include solid-state memories, and optical and magnetic media. In an example, a massed machine readable medium comprises a machine readable medium with a plurality of particles having invariant (e.g., rest) mass. Accordingly, massed machine-readable media are not transitory propagating signals. Specific examples of massed machine readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The instructions824may further be transmitted or received over a communications network826using a transmission medium via the network interface device820utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as WiFi®, IEEE 802.16 family of standards known as WiMax®), IEEE 802.15.4 family of standards, peer-to-peer (P2P) networks, among others. In an example, the network interface device820may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network826. In an example, the network interface device820may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. 
The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine800, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software. Various embodiments are illustrated in the figures above. One or more features from one or more of these embodiments may be combined to form other embodiments. The method examples described herein can be machine or computer-implemented at least in part. Some examples may include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device or system to perform methods as described in the above examples. An implementation of such methods may include code, such as microcode, assembly language code, a higher-level language code, or the like. Such code may include computer readable instructions for performing various methods. The code can form portions of computer program products. Further, the code can be tangibly stored on one or more volatile or non-volatile computer-readable media during execution or at other times. The above detailed description is intended to be illustrative, and not restrictive. The scope of the disclosure should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
11857795
While the invention is amenable to various modifications and alternative forms, specifics thereof have been shown by way of example in the drawings and will be described in detail below. It is to be understood, however, that the intention is not to limit the invention to the particular embodiments described. On the contrary, the invention is intended to cover all modifications, equivalents, and alternatives falling within the scope of the invention as defined by the appended claims. DETAILED DESCRIPTION OF VARIOUS EMBODIMENTS In the following description of the illustrated embodiments, references are made to the accompanying drawings forming a part hereof, and in which are shown by way of illustration, various embodiments by which the invention may be practiced. It is to be understood that other embodiments may be utilized, and structural and functional changes may be made without departing from the scope of the present invention. The discussion and illustrations provided herein are presented in an exemplary format, wherein selected embodiments are described and illustrated to present the various aspects of the present invention. Systems, devices, or methods according to the present invention may include one or more of the features, structures, methods, or combinations thereof described herein. For example, a device or system may be implemented to include one or more of the advantageous features and/or processes described below. A device or system according to the present invention may be implemented to include multiple features and/or aspects illustrated and/or discussed in separate examples and/or illustrations. It is intended that such a device or system need not include all of the features described herein, but may be implemented to include selected features that provide for useful structures, systems, and/or functionality. In multi-electrode pacing systems, multiple pacing electrodes may be disposed in a single heart chamber, in multiple heart chambers, and/or elsewhere in a patient's body. Electrodes used for delivery of pacing pulses may include one or more cathode electrodes and one or more anode electrodes. Pacing pulses are delivered via the cathode/anode electrode combinations, where the term “electrode combination” denotes that at least one cathode electrode and at least one anode electrode are used. An electrode combination may involve more than two electrodes, such as when multiple electrodes that are electrically connected are used as the anode and/or multiple electrodes that are electrically connected are used as the cathode. Typically, pacing energy is delivered to the heart tissue via the cathode electrode(s) at one or more pacing sites, with a return path provided via the anode electrode(s). If capture occurs, the energy injected at the cathode electrode site creates a propagating wavefront of depolarization which may combine with other depolarization wavefronts to trigger a contraction of the cardiac muscle. The cathode and anode electrode combination that delivers the pacing energy defines the pacing vector used for pacing. The position of the cathode relative to cardiac tissue can be used to define an electrode combination and/or a pacing site. Pacing pulses may be applied through multiple electrodes (i.e., pacing vectors defined by various electrode combinations) in a single cardiac chamber in a timed sequence during the cardiac cycle to improve contractility and enhance the pumping action of the heart chamber. 
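As a purely illustrative aid, and not part of this disclosure, the minimal Python structure below captures the notion of an electrode combination defined above: at least one cathode and at least one anode, which together define the pacing vector. The electrode labels used are hypothetical.

# Minimal representation of a cathode/anode electrode combination and its pacing vector.
from dataclasses import dataclass

@dataclass(frozen=True)
class ElectrodeCombination:
    cathodes: tuple          # site(s) where pacing energy is delivered
    anodes: tuple            # return path electrode(s)

    def pacing_vector(self):
        return f"{'+'.join(self.cathodes)} -> {'+'.join(self.anodes)}"

combo = ElectrodeCombination(cathodes=("LV2",), anodes=("RV_coil", "CAN"))
print(combo.pacing_vector())      # LV2 -> RV_coil+CAN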
It is desirable for each pacing pulse delivered via the multiple electrode combinations to capture the cardiac tissue proximate the cathode electrode. The pacing energy required to capture the heart is dependent on the electrode combination used for pacing, and different electrode combinations can have different energy requirements for capture. Particularly in the left ventricle, the minimum energy required for capture, denoted the capture threshold, may be highly dependent on the particular electrode combination used. Pacing characteristics of therapy delivery using each electrode combination of a plurality of possible electrode combinations are dependent on many factors, including the distance between the electrodes, proximity to target tissue, the type of tissue in contact with and between the electrodes, impedance between the electrodes, resistance between the electrodes, and electrode type, among other factors. Such factors can influence the capture threshold for the electrode combination, among other parameters. Pacing characteristics can vary with physiologic changes, electrode migration, physical activity level, body fluid chemistry, hydration, and disease state, among others. Therefore, the pacing characteristics for each electrode combination are unique, and some electrode combinations may work better than others for delivering a particular therapy that improves cardiac function consistent with a prescribed therapy. In this way, electrode combination selection should take into consideration at least the efficacy of one or more electrode combinations of a plurality of electrodes in supporting cardiac function in accordance with a prescribed therapy. The efficacy of one or more electrode combinations of a plurality of electrodes in supporting cardiac function in accordance with a prescribed therapy can be evaluated by consideration of one or more parameters produced by electrical stimulation, such as capture threshold. Electrical stimulation delivered to one body structure to produce a desired therapeutic activation may undesirably cause activation of another body structure. For example, electrical cardiac pacing therapy can inadvertently stimulate bodily tissue, including nerves and muscles. Stimulation of extra-cardiac tissue, including phrenic nerves, the diaphragm, and skeletal muscles, can cause patient discomfort and interfere with bodily function. A patient's evoked response from an electrical cardiac therapy can be unpredictable between electrode combinations. For example, an electrical cardiac therapy delivered using one electrode combination may produce an undesirable activation while an identical electrical cardiac therapy delivered using another electrode combination may not produce the undesirable activation. As such, selecting an appropriate electrode combination, such as one electrode combination of a plurality of electrode combinations made possible by a multi-electrode lead that effects the desired cardiac response with the least amount of energy consumption and that does not unintentionally stimulate tissue, can be many-factored and complicated. Manually testing each parameter of interest for each possible cathode-anode electrode combination can be a time consuming process for doctors, clinicians, and programmers. Furthermore, it can be difficult to sort through numerous different parameters for multiple pacing electrode combinations and understand the various tissue activation responses of electrical therapy delivered using various electrode combinations.
Systems and methods of the present invention can simplify these and other processes. Devices of the present invention may facilitate selection of one or more electrode combinations using various parameters of interest. A device may be preset for parameters of interest and/or a physician may select beneficial parameters of interest and/or non-beneficial parameters of interest. The parameters that are of interest can vary between patients, depending on the patient's pathology. Beneficial parameters are parameters which are associated with supported cardiac function in accordance with a prescribed therapy and/or are the intended result of a prescribed therapy. Non-beneficial parameters are parameters which are not associated with supported cardiac function in accordance with a prescribed therapy and/or are not the intended result of a prescribed therapy. The flowchart ofFIG.1illustrates a process for selecting one or more electrode combinations and delivering a therapy using the one or more selected electrode combinations. Although this method selects an electrode combination and delivers a therapy using the electrode combination, not all embodiments of the current invention perform all of the steps110-150. Parameters that support cardiac function are evaluated110for a plurality of electrode combinations. A parameter that supports cardiac function is any parameter that is indicative of a physiological effect consistent with one or more therapies prescribed for the patient. For example, successful capture of a heart chamber can be indicative of cardiac contractions that are capable of pumping blood, where ventricular pacing was a prescribed therapy for the patient. Parameters that support cardiac function consistent with a prescribed therapy can be beneficial parameters, as they can be indicative of intended therapy effects (e.g., capture). In some embodiments of the current invention, evaluating a parameter that supports cardiac function includes detecting whether electrical therapy delivered through each electrode combination of a plurality of electrode combinations improves the patient's cardiac function, consistent with a prescribed therapy, relative to cardiac function without the electrical therapy delivered using the respective electrode combination. Parameters that do not support cardiac function are evaluated120for at least some of the plurality of electrode combinations. A parameter that does not support cardiac function is any parameter that produces a physiological effect inconsistent with the patient's prescribed therapy. In some embodiments of the present invention, parameters that do not support cardiac function include parameters that are indicative of undesirable stimulation, the undesirable stimulation not consistent with a therapy prescribed for the patient. For example, delivering an electrical cardiac therapy using a particular electrode combination may unintentionally stimulate skeletal muscles, causing discomfort to the patient, not improving cardiac function consistent with a prescribed therapy, and possibly interfering with improving cardiac function and/or delivery of the prescribed therapy. Parameters that do not support cardiac function consistent with a prescribed therapy can be non-beneficial parameters, as they can be indicative of unintended effects of the therapy. The electrode combinations can be ordered130. The order can be based on the evaluations110and120of the parameters that support cardiac function and the parameters that do not support cardiac function.
Ordering may be establishing or recognizing relationships between various electrode combinations based on parameters. Ordering can be performed manually or automatically. For example, a clinician can consider the parameters that support cardiac function and the parameters that do not support cardiac function and order the electrode combinations based on the parameters. Ordering can also be performed algorithmically by a processor executing instructions stored in memory, the processor ordering the electrode combinations based on parameter information stored in memory. For example, a data processor may algorithmically order a plurality of electrode combinations based on parameter information stored in memory, giving priority in the order to electrode combinations that can best implement the prescribed therapy while minimizing the occurrence of undesirable events inconsistent with the prescribed therapy. One or more electrode combinations can be selected140based on the order of the electrode combinations. Selection of one or more electrode combinations may be done manually by a clinician reviewing the electrode combination order and inputting a selection into the device. Selection may also be done automatically, such as by a processor executing instructions stored in memory, the processor algorithmically selecting the electrode combination based on electrode combination order information stored in memory. After electrode combination selection, therapy can be delivered150using the one or more selected electrode combinations. The various steps ofFIG.1, as well as the other steps disclosed herein, can be performed automatically, such that no direct human assistance is needed to initiate or perform the various discrete steps. FIG.2ais a block diagram of a CRM device200that may incorporate circuitry for selecting an electrode combination in accordance with embodiments of the present invention. The CRM device200includes pacing therapy circuitry230that delivers pacing pulses to a heart. The CRM device200may optionally include defibrillation/cardioversion circuitry235configured to deliver high energy defibrillation or cardioversion stimulation to the heart for terminating dangerous tachyarrhythmias. The pacing pulses are delivered via multiple cardiac electrodes205(electrode combinations) disposed at multiple locations within and/or about a heart, wherein a location can correspond to a pacing site. Certain combinations of the electrodes205may be designated as alternate electrode combinations while other combinations of electrodes205are designated as initial electrode combinations. Two or more electrodes may be disposed within a single heart chamber. The electrodes205are coupled to switch matrix225circuitry used to selectively couple electrodes205of various pacing configurations to electrode combination processor201and/or other components of the CRM device200. The electrode combination processor201is configured to receive information gathered via the cardiac electrodes205and beneficial/non-beneficial parameter sensors210. The electrode combination processor201can perform various functions, including evaluating electrode combination parameters that support cardiac function, evaluating electrode combination parameters that do not support cardiac function, determining an order for the electrode combinations, and selecting one or more electrode combinations based on the order, as well as other processes. 
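As a hedged sketch of such algorithmic ordering, in which beneficial parameters (capture) are weighed against non-beneficial parameters (undesirable extra-cardiac stimulation), the snippet below sorts a set of hypothetical electrode combinations. The field names and values are illustrative assumptions, not data from this disclosure.

# Order electrode combinations: non-capturing combinations last; among capturing ones,
# phrenic stimulation is penalized first, then lower capture thresholds are preferred.
combinations = [
    {"name": "LV1->CAN", "captures": True,  "phrenic_stim": False, "capture_threshold_v": 1.2},
    {"name": "LV2->CAN", "captures": True,  "phrenic_stim": True,  "capture_threshold_v": 0.9},
    {"name": "LV3->RV",  "captures": True,  "phrenic_stim": False, "capture_threshold_v": 2.1},
    {"name": "LV4->RV",  "captures": False, "phrenic_stim": False, "capture_threshold_v": None},
]

def order_key(c):
    return (not c["captures"], c["phrenic_stim"], c["capture_threshold_v"] or float("inf"))

ordered = sorted(combinations, key=order_key)
for c in ordered:
    print(c["name"])      # LV1->CAN, LV3->RV, LV2->CAN, LV4->RV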
The control processor240can use patient status information received from patient status sensors215to schedule or initiate any of the functions disclosed herein, including selecting an electrode combination. Patient status sensors215may include an activity monitor, a posture monitor, a respiration monitor, an oxygen level monitor, and an accelerometer, among others. A CRM device200typically includes a battery power supply (not shown) and communications circuitry250for communicating with an external device programmer260or other patient-external device. Information, such as data, parameter measurements, parameter evaluations, parameter estimates, electrode combination orders, electrode combination selections, and/or program instructions, and the like, can be transferred between the device programmer260and the patient management server270, between the CRM device200and the device programmer260, and/or between the CRM device200and the patient management server270and/or other external system. The electrode combination processor201may be a component of the device programmer260, patient management server270, or other patient external system. The CRM device200also includes a memory245for storing program instructions and/or data, accessed by and through the control processor240. In various configurations, the memory245may be used to store information related to activation thresholds, parameters, orders, measured values, program instructions, and the like. Parameters can be measured by Beneficial/Non-Beneficial Parameter Sensors210. Parameter Sensors210can include the various sensors discussed herein or known in the art, including accelerometers, acoustic sensors, electrical signal sensors, pressure sensors, and the like. FIG.2billustrates external circuitry used in an implantation procedure in accordance with various embodiments of the invention.FIG.2bshows a patient290with multiple leads605-608partially inserted subcutaneously through incision280. Leads605-607extend into the heart291, while lead608does not contact the heart291but occupies an area where one or more non-cardiac tissue contacting electrodes (e.g., can electrode, electrode array, subcutaneous non-intrathoracic electrode, and/or submuscular electrode) could be implanted. Lead605can be a left ventricular lead, lead607can be a right ventricular lead, and lead606can be a right atrial lead. The leads605-607can be positioned in the manner ofFIGS.6and7(and can be the same leads shown during an implantation procedure before the implantable housing601is implanted as depicted inFIG.6). The leads605-607can contain electrodes, such as the electrodes referenced and described herein. For example, the leads605-607can have the electrodes illustrated inFIGS.6and7, and lead608can have one or more electrodes corresponding to the can681and/or indifferent682electrodes of the embodiment ofFIG.6. The leads605-607can be implanted over the long term. In some embodiments, leads605-607may just have been implanted before other aspects of the present invention are carried out (e.g., evaluation and selection of electrode combinations). In some embodiments, one or more of leads605-607may have been implanted in a separate surgical procedure long before implementation of aspects of the present invention (e.g., a default pacing configuration was used for pacing using conventional methods before aspects of the present invention were carried out). The leads605-608inFIG.2bare coupled to a non-implantable evaluation unit249.
Evaluation unit249can contain circuitry configured to carry out operations described herein, including pacing configuration selection. For example, evaluation unit249includes a processor255coupled with a combination processor254, memory256, input257, display258, and communications circuitry259. The evaluation unit249can further include defibrillation/cardioversion circuitry253, pacing circuitry252, and switch matrix251. The switch matrix251is electrically coupled with the electrodes of the leads605-608, such that the combination processor254, pacing circuitry252, and defibrillation/cardioversion circuitry253can be selectively electrically coupled/decoupled to various electrodes of the leads605-608to facilitate delivery of electrical stimulation and collection of signals (e.g., an ECG signal indicative of cardiac response to electrical stimulation). As discussed herein, energy delivery to the heart291can fail to therapeutically treat the heart in a medically prescribed manner and/or stimulate tissue in a manner not consistent with the prescribed therapy. The evaluation unit249can be used to characterize various electrode combinations and select one or more preferred pacing/defibrillation configurations before implantable circuitry is programmed with the selection, connected to one or more of the leads605-607, and implanted. Such characterization can occur by the evaluation unit249delivering electrical stimulation using the leads605-608, the leads605-608being the same leads that would be used to deliver electrical therapy from a patient implantable medical device, and then evaluating the sensed physiological response (e.g., cardiac capture with phrenic stimulation). Evaluation unit249can use the pacing circuitry252to deliver electrical energy between various electrodes of the leads605-608(each delivery using a combination of electrodes). Such energy can be in the form of pacing pulses which can capture and therapeutically pace the heart291. Electrical energy can be similarly delivered to the heart291using the defibrillation/cardioversion circuitry253. Combination processor254can receive electrical cardiac signals (e.g., ECG signals showing cardiac activity) and/or other signals (e.g., respiration sounds) indicative of the patient's290physiological response to electrical stimulation delivered using the pacing circuitry252and/or defibrillation/cardioversion circuitry253. The physiological response signals can be used by the combination processor254to investigate beneficial and non-beneficial parameters as referenced herein and order and rank various electrode combinations. Input257may be used to input instructions, parameter information, limits, selections, and the like. The input257may take the form of keys, buttons, mouse, track-ball, and the like. Display258can also be used to facilitate clinician interaction with the evaluation unit249. Display258can take the form of a dial, LCD, or cathode-ray tube, among others. In some embodiments, the input257may be integrated with the display258, such as by use of a touch sensitive display. In some embodiments a doctor can initiate an algorithm that selects an optimal pacing configuration using the input257. The doctor may input various criteria using the input257, the criteria being used to prioritize various parameters and order electrode combinations, for example.
In some cases, a doctor could indicate that phrenic stimulation avoidance is to be prioritized, such that only those electrode combinations that do not cause phrenic stimulation based on an evaluation will be selected and/or ranked for subsequent use in stimulation therapy delivery. A doctor could indicate a maximum and/or minimum pulse duration range, such that electrode combinations that cannot capture cardiac tissue using pulse parameters within that range will not be selected and/or ranked. In this way, the evaluation unit249can enhance use of a patient implantable medical device. Because the evaluation unit249can be attached to the same leads as the patient implantable medical device, the evaluation unit249can run various tests that are reflective of actual operating conditions of a patient implantable medical device. Moreover, using the evaluation unit249to perform various tests and perform other functions discussed herein provides several distinct advantages. For example, if a patient implantable medical device is used to perform pacing configuration tests, then the patient implantable medical device must devote resources to perform these tests. These resources include battery life and memory space. An evaluation unit249as described herein or similar device employing aspects of the present invention (e.g., a pacing system analyzer) has much less concern with minimizing power consumption and memory content as compared to an implantable medical device. Moreover, having the evaluation unit249configured to perform pacing configuration tests, instead of the patient implantable medical device, simplifies the circuitry and design of the patient implantable medical device, which can then be more focused on arrhythmia detection and therapy delivery (e.g., an evaluation unit249can employ an acoustic sensor useful for detecting phrenic stimulation, which would consume extra energy, space, and memory if on a patient implantable medical device). Other benefits include enhanced functionality and flexibility. For example, patient implantable medical devices are not commonly provided with interfaces, but the evaluation unit249has an integrated input257and display258. An evaluation unit249can be programmed with information regarding a plurality of different types of patient implantable medical devices (e.g., pacemakers). This allows the evaluation unit249to customize a pacing configuration for a particular type of patient implantable medical device. For example, if the model number of a particular type of available pacemaker is input into the evaluation unit249, the evaluation unit249can then recognize the pacing parameters that the particular type of available pacemaker is capable of outputting (e.g., maximum and minimum pulse amplitude, duration, and the maximum number of electrodes that can be used to form a vector) and customize a pacing configuration (e.g., selection and/or ranking of electrode combinations) for the particular type of available pacemaker to use. In this way, the evaluation unit249may select one pacing configuration for a first type of pacemaker and a different pacing configuration for a second type of pacemaker which would use the same set of electrodes if implanted (e.g., the first pacemaker may be capable of delivering longer pulses as compared to the second, and longer pulses may be preferred for the particular physiology of the patient to optimize pacing, such that a different pacing configuration is preferred depending on which pacemaker is available).
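A minimal sketch of applying clinician criteria like those described above (phrenic-stimulation avoidance and a permitted pulse-duration range), assuming per-combination evaluation results are already available; the field names, duration limits, and data values are hypothetical.

# Filter electrode combinations against clinician-entered criteria.
def meets_criteria(combo, min_pulse_us=100, max_pulse_us=500):
    """Exclude combinations causing phrenic stimulation or capturing only outside the allowed pulse range."""
    if combo["phrenic_stim"]:
        return False
    duration = combo.get("capture_pulse_us")
    return duration is not None and min_pulse_us <= duration <= max_pulse_us

candidates = [
    {"name": "LV1->CAN", "phrenic_stim": False, "capture_pulse_us": 400},
    {"name": "LV2->CAN", "phrenic_stim": True,  "capture_pulse_us": 300},
    {"name": "LV3->RV",  "phrenic_stim": False, "capture_pulse_us": 700},
]
print([c["name"] for c in candidates if meets_criteria(c)])   # ['LV1->CAN']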
Likewise, an evaluation unit249programmed with parameters for multiple patient implantable medical devices may be used to select a particular type of implantable medical device for connection with leads and implantation based on an analysis of the electrode combinations of the leads and the capabilities of available implantable medical devices. In this way, the evaluation unit249may select a first type of pacemaker to be implanted over a second type of pacemaker because an analysis of the leads as referenced herein reveals an optimal pacing configuration (e.g., particular pulse parameters that, when delivered through a particular electrode combination, capture the heart with relatively low energy consumption while not causing undesirable stimulation) that can only be met by one or a few different pacing devices. Therefore, evaluation unit249can automatically make selections of devices and corresponding preferred electrode combinations in the time critical period when a patient is undergoing implantation to provide an optimal pacing configuration. Because the evaluation unit249performs the tests using the electrodes that will be used for therapy, the evaluation unit249can make selections based on more accurate information relative to selections made before leads are implanted. An evaluation unit249can further benefit therapy by evaluating a patient's physiological response to electrical stimulation using parameters and/or sensors that are not provided on a particular implantable medical device. For example, an evaluation unit249can be equipped with a catheter261, one end of the catheter261being inserted through the incision280. Multiple sensors can be provided on the catheter261, such as an acoustic sensor, an EMG sensor, a blood oxygen saturation sensor, and/or an accelerometer, among others referenced herein. These sensors can be used with the methods referenced herein for selection of a pacing configuration. For example, an acoustic sensor can sense respiration sounds and thereby detect activation of the diaphragm, an EMG sensor can detect muscle activity signatures indicative of extra-cardiac stimulation, and a blood oxygen saturation sensor can be used to assess the success of a pacing therapy delivered using a particular electrode combination in improving cardiac function (e.g., higher blood oxygen saturation indicative of improved hemodynamic function). Each of these parameters can be used to assess parameters of a particular pacing configuration. Provision of the sensors by the evaluation unit249(and not, for example, by a patient implantable medical device) can conserve implantable device resources (battery life, memory space, physical space), as well as simplify device design and circuitry, and can allow the sensors to evaluate parameters from areas that might not be convenient for a patient implantable medical device to measure. Furthermore, in some embodiments the evaluation unit249can evaluate various electrode combinations and determine that an electrode is malfunctioning or improperly positioned. For example, relatively high impedance measurements taken between two electrodes (e.g., compared to previous measurements or population data) can indicate that an electrode is improperly positioned, which can compromise the ability to use an electrode combination that would otherwise be ideal for delivering therapy.
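A hedged sketch of flagging a possibly malfunctioning or mispositioned electrode from pairwise impedance measurements, by comparison against prior measurements or population reference values as described above; the tolerance, impedance values, and electrode-pair labels are illustrative assumptions.

# Flag electrode pairs whose measured impedance deviates markedly from reference data.
REFERENCE_OHMS = {"LV1-CAN": 550, "LV2-CAN": 540, "LV3-RV": 600}   # e.g. prior or population data

def flag_suspect_pairs(measured_ohms, reference_ohms, tolerance=0.5):
    """Return pairs whose impedance differs from the reference by more than `tolerance` (fraction)."""
    suspect = []
    for pair, measured in measured_ohms.items():
        ref = reference_ohms.get(pair)
        if ref is not None and abs(measured - ref) > tolerance * ref:
            suspect.append(pair)
    return suspect

measured = {"LV1-CAN": 560, "LV2-CAN": 1450, "LV3-RV": 580}   # LV2 path reads abnormally high
print(flag_suspect_pairs(measured, REFERENCE_OHMS))            # ['LV2-CAN']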
Because the evaluation unit249can determine electrode malfunction or mispositioning before a pacemaker is implanted and incision280is still open, one or more leads can be replaced or repositioned and revaluated to provide a better arrangement. Methods and devices for facilitating identification of electrode malfunction can be found in U.S. Patent Publication No. 20070293903, filed on Jun. 16, 2006, which is herein incorporated in its entirety. Communications circuitry259can facilitate the transmission of selections, orders, and rankings pertaining to electrode combinations, among other things, to an external programmer (e.g.300) and/or directly to a patient implantable medical device that can deliver a therapy using the selections, orders, and/or rankings. The circuitry represented inFIGS.2aand2bcan be used to perform the various methodologies and techniques discussed herein. Memory can be a computer readable medium encoded with a computer program, software, firmware, computer executable instructions, instructions capable of being executed by a computer, etc. to be executed by circuitry, such as control processor. For example, memory can be a computer readable medium storing a computer program, execution of the computer program by control processor causing delivery of pacing pulses directed by the pacing therapy circuitry, reception of one or more signals from sensors and/or signal processor to identify, and establish relationships between, beneficial and non-beneficial parameters (e.g., capture and phrenic stimulation thresholds) in accordance with embodiments of the invention according to the various methods and techniques made known or referenced by the present disclosure. In similar ways, the other methods and techniques discussed herein can be performed using the circuitry represented inFIGS.2aand/or2b. FIG.3illustrates a patient external device300that provides a user interface configured to allow a human analyst, such as a physician, or patient, to interact with an implanted medical device. The patient external device300is described as a CRM programmer, although the methods of the invention are operable on other types of devices as well, such as portable telephonic devices, computers or patient information servers used in conjunction with a remote system, for example. The programmer300includes a programming head310which is placed over a patient's body near the implant site of an implanted device to establish a telemetry link between a CRM and the programmer300. The telemetry link allows the data collected by the implantable device to be downloaded to the programmer300. The downloaded data is stored in the programmer memory365. The programmer300includes a graphics display screen320, e.g., LCD display screen, that is capable of displaying graphics, alphanumeric symbols, and/or other information. For example, the programmer300may graphically display one or more of the parameters downloaded from the CRM on the screen320. The display screen320may include touch-sensitive capability so that the user can input information or commands by touching the display screen320with a stylus330or the user's finger. Alternatively, or additionally, the user may input information or commands via a keyboard340or mouse350. The programmer300includes a data processor360including software and/or hardware for performing the methods disclosed here, using program instructions stored in the memory365of the programmer300. 
In one implementation, sensed data is received from a CRM via communications circuitry366of the programmer300and stored in memory365. The data processor360evaluates the sensed data, which can include information related to beneficial and non-beneficial parameters. The data processor360can also perform other method steps discussed herein, including comparing parameters and ordering the electrode combinations, among others. Parameter information, electrode combination information, and an electrode combination order, as well as other information, may be presented to a user via a display screen320. The parameters used for ordering the electrode combinations may be identified by the user or may be identified by the data processor360, for example. In some embodiments of the current invention, ordering the electrode combinations may be determined by a user and entered via the keyboard320, the mouse350, or stylus330for touch sensitive display applications. In some embodiments of the current invention, the data processor360executes program instructions stored in memory to order a plurality of electrode combinations based on sensed beneficial and non-beneficial parameters. The electrode combination order determined by the data processor360is then displayed on the display screen, where a human analyst then reviews the order and selects one or more electrode combinations for delivering an electrical cardiac therapy. The flowchart ofFIG.4aillustrates a process400for selecting one or more electrode combinations based on capture threshold and phrenic nerve activation parameters and automatically updating the electrode combination selection. The process400includes measuring or estimating410a capture threshold and phrenic nerve activation threshold for each electrode combination during an implantation procedure using a set of at least partially implanted electrodes. The capture threshold for a particular electrode combination may be determined by a capture threshold test. For example, the capture threshold test may step down the pacing energy for successive pacing cycles until loss of capture is detected. The process400ofFIG.4aincludes measuring or estimating410a phrenic nerve activation threshold for each electrode combination. The phrenic nerve innervates the diaphragm, so stimulation of the phrenic nerve can cause a patient to experience a hiccup. Electrical stimulation that causes a hiccup can be uncomfortable for the patient, and can interfere with breathing. Additionally, phrenic nerve stimulation and/or diaphragmatic stimulation that is inconsistent with the patient's therapy and/or does not support cardiac function is undesirable and can interfere with the intended therapy. Phrenic nerve activation and/or a phrenic nerve activation threshold may be measured for an electrode combination by delivering electrical energy across the electrode combination and sensing for phrenic nerve activation. The energy delivered could also be used to simultaneously perform other tests, such as searching for a capture threshold. If no phrenic nerve activation is sensed using the level of electrical energy delivered, the energy level can be iteratively increased for subsequent trials of delivering electrical energy and monitoring for phrenic nerve activation until phrenic nerve activation is sensed. The electrical energy level at which phrenic nerve activation is detected can be the phrenic nerve activation threshold. 
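A minimal sketch of the step-up search just described follows; the sensing callback, starting energy, increment, and ceiling are hypothetical placeholders, not values taken from this disclosure.

    # Hypothetical sketch: iteratively raise the test energy until phrenic nerve
    # activation is sensed; the level at which it first appears is the threshold.
    def find_phrenic_threshold(deliver_and_sense, start_v=0.5, step_v=0.25, max_v=7.5):
        """deliver_and_sense(voltage) -> True if phrenic activation is sensed.
        Returns the first voltage producing activation, or None within max_v."""
        voltage = start_v
        while voltage <= max_v:
            if deliver_and_sense(voltage):
                return voltage  # phrenic nerve activation threshold
            voltage += step_v
        return None  # no activation observed up to the maximum tested energy

    # Simulated example: activation appears at or above 3.0 V.
    print(find_phrenic_threshold(lambda v: v >= 3.0))  # 3.0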
Alternatively, the level of electrical energy may be decreased or otherwise adjusted until phrenic nerve activation is not detected. Methods for evaluating phrenic nerve activation are disclosed in U.S. Pat. No. 6,772,008, Provisional Patent Application No. 61/065,743 filed Feb. 14, 2008, and Patent Publication No. 20060241711, each of which is herein incorporated by reference in its entirety. The process400ofFIG.4afurther includes comparing420the capture threshold and phrenic nerve activation threshold of one electrode combination to at least one other electrode combination. Comparing can be performed in various ways, including by a human, such as a doctor or programmer, or automatically by a processor executing instructions stored in memory. In some embodiments of the present invention, some aspects of comparing420can be done by a human while some aspects of comparing420can be done electronically. Comparing420can include comparing the capture thresholds of the electrode combinations to one another. Such a comparison can identify which electrode combinations are associated with the lowest capture thresholds. Comparing420can also include comparing the occurrence, amounts, and/or thresholds of phrenic nerve activation of the electrode combinations to one another. Such a comparison can identify which electrode combinations are associated with the highest and/or lowest occurrence, amount and/or threshold of phrenic nerve stimulation. Other parameters discussed herein can also be similarly compared in this and other embodiments of the present invention. Comparing420can be multidimensional, such that multiple metrics are compared for multiple electrode combinations. For example, comparing420may consider capture threshold and phrenic nerve activation for multiple electrode combinations to indicate which electrode combination has the lowest relative capture threshold and the least relative phrenic nerve activation. In various embodiments, comparing parameters can include graphically displaying data in the form of tables and/or plots for physician review. In some embodiments, the physician can make a selection of an electrode combination or rank combinations upon reviewing the data. In some embodiments, a physician can rule out one or more electrode combinations from subsequent automatic selection by a processor based on the review of the data. The process400ofFIG.4afurther includes selecting430an electrode combination based on the comparison of step420. Selecting430may be done entirely by a human, entirely by a system algorithmically, or partially by a human and partially by the system. Selecting430can be done according to criteria. For example, the results of the comparison can be reviewed and the electrode combination(s) matching a predetermined criterion can be selected. The criteria may be predefined by a human. Different sets of criteria may be created by a human, stored in memory, and then selected by a doctor or programmer for use, such as use in selecting430an electrode combination based on the comparison. By way of example, selecting430can include selecting according to the criteria that the selected electrode combination be the combination with the lowest capture threshold that was not associated with phrenic nerve activation. 
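Purely by way of illustration, the sketch below implements the example criterion just stated, selecting the combination with the lowest capture threshold among those showing no phrenic nerve activation; the data structure and field names are hypothetical assumptions.

    # Hypothetical sketch of selecting430: lowest capture threshold among
    # combinations not associated with phrenic nerve activation.
    def select_combination(results):
        """results: list of dicts with 'combo', 'capture_threshold_v', and
        'phrenic_activation' (bool). Returns the selected combo or None."""
        candidates = [r for r in results if not r["phrenic_activation"]]
        if not candidates:
            return None  # every tested combination activated the phrenic nerve
        return min(candidates, key=lambda r: r["capture_threshold_v"])["combo"]

    results = [
        {"combo": ("LV1", "RV"), "capture_threshold_v": 1.2, "phrenic_activation": True},
        {"combo": ("LV2", "RV"), "capture_threshold_v": 1.8, "phrenic_activation": False},
        {"combo": ("LV3", "RV"), "capture_threshold_v": 2.5, "phrenic_activation": False},
    ]
    print(select_combination(results))  # ('LV2', 'RV')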
Other criteria that can be used additionally or alternatively include responsiveness to CRT, low energy consumption, extra-cardiac activation, dP/dt, among others indicative of beneficial parameters consistent with a prescribed therapy or non-beneficial parameters inconsistent with the prescribed therapy. The electrode combination fitting such criteria can be identified for selection based on the comparison430. The process400ofFIG.4afurther includes delivering440therapy using the selected electrode combination. Delivering440therapy can include any therapy delivery methods disclosed herein or known in the art. The process400ofFIG.4afurther includes determining whether an electrode combination update is indicated450. An electrode combination update may be indicated in various ways, including detecting a condition necessitating an electrode combination update (such as loss of capture, change in posture, change in disease state, detection of non-therapeutic activation, and/or short or long term change in patient activity state, for example). An electrode combination update may be initiated according to a predetermined schedule, or an indication given by a human or system. In the particular embodiment ofFIG.4a, if it is determined that an electrode combination update is indicated450, then the system automatically updates460the electrode combination selection460. In various embodiments of the current invention, automatically updating460electrode combination selection can include some or all of the various methods of the process400or can be based on other methods. According to various embodiments of the present invention, therapy can then be delivered440using the updated electrode combination. The updated electrode combination can be different from the electrode combination previously used to deliver therapy, or the updated electrode combination can be the same electrode combination, despite the update. Although the embodiment ofFIG.4aexemplified aspects of the present invention using capture threshold as a parameter that supports cardiac function consistent with a prescribed therapy, numerous other parameters can alternatively, or additionally, be used to indicate cardiac function. For example, a parameter that supports cardiac function can include a degree of responsiveness to cardiac resynchronization therapy (CRT). As one of ordinary skill in the art would understand, when attempting CRT, it is preferable to select an electrode combination with a higher degree of responsiveness to CRT relative to other electrode combinations. Responsiveness to CRT, including methods to detect responsiveness, is disclosed in U.S. patent application Ser. No. 11/654,938, filed Jan. 18, 2007, which is hereby incorporated by reference in its entirety. Parameters that support cardiac function consistent with a prescribed therapy may be related to contractility, blood pressure, dP/dt, stroke volume, cardiac output, contraction duration, hemodynamics, ventricular synchronization, activation sequence, depolarization and/or repolarization wave characteristics, intervals, responsiveness to cardiac resynchronization, electrode combination activation timing, stimulation strength/duration relationship, and battery consumption. Various parameters that may be used for electrode combination selection are discussed in U.S. patent application Ser. No. 11/338,935, filed Jan. 25, 2006, and United States Publication No. 20080004667, both of which are hereby incorporated herein by reference in each respective entirety. 
Each of these incorporated references includes parameters that support cardiac function and parameters that do not support cardiac function, the parameters usable in the methods disclosed herein for selecting an electrode combination. Although the embodiment ofFIG.4aexemplified aspects of the present invention using phrenic nerve activation as a parameter that does not support cardiac function consistent with a prescribed therapy, numerous other parameters can alternatively, or additionally, be used. Parameters that do not support cardiac function consistent with a prescribed therapy can include, but are not limited to, extra-cardiac stimulation, non-cardiac muscle stimulation (ex. skeletal muscle stimulation), unintended nerve stimulation, anodal cardiac stimulation, and excessively high or low impedance. For example, a parameter that does not support cardiac function consistent with a prescribed therapy can include skeletal muscle activation, undesirable modes of cardiac activation, and/or undesirable nerve activation. Commonly owned U.S. Pat. No. 6,772,008, which is incorporated herein by reference, describes methods and systems that may be used in relation to detecting undesirable tissue activation. Skeletal muscle activation may be detected, for example, through the use of an accelerometer and/or other circuitry that senses accelerations indicating muscle movements that coincide with the output of the stimulation pulse. Other methods of measuring tissue activation may involve, for example, the use of an electromyogram sensor (EMG), microphone, and/or other sensors. In one implementation, activation of the laryngeal muscles may be automatically detected using a microphone to detect the patient's coughing response to undesirable activation of the laryngeal muscles or nerves due to electrical stimulation. Undesirable nerve or muscle activation may be detected by sensing a parameter that is directly or indirectly responsive to the activation. Undesirable nerve activation, such as activation of the vagus or phrenic nerves, for example, may be directly sensed using electroneurogram (ENG) electrodes and circuitry to measure and/or record nerve spikes and/or action potentials in a nerve. An ENG sensor may comprise a neural cuff and/or other type of neural electrodes located on or near the nerve of interest. For example, systems and methods for direct measurement of nerve activation signals are discussed in U.S. Pat. Nos. 4,573,481 and 5,658,318, which are incorporated herein by reference in their respective entireties. The ENG may comprise a helical neural electrode that wraps around the nerve and is electrically connected to circuitry configured to measure the nerve activity. The neural electrodes and circuitry operate to detect an electrical activation (action potential) of the nerve following application of the electrical stimulation pulse. Tissue activation not consistent with a prescribed therapy can also include anodal stimulation of cardiac tissue. For example, pacing may cause the cardiac tissue to be stimulated at the site of the anode electrode instead of the cathode electrode pacing site as expected. Cardiac signals sensed following the pacing pulse are analyzed to determine if a pacing pulse captured the cardiac tissue. Capture via anodal activation may result in erroneous detection of capture, loss of capture, unintended cardiac activation, and/or unpredictable wave propagation. 
Some electrode combinations may be more susceptible to anodal stimulation than other electrode combinations. As such, the occurrence of anodal stimulation is a non-beneficial parameter that does not support cardiac function and/or is not consistent with the patient's therapy. An exemplary list of beneficial and/or non-beneficial parameters that may be sensed via the parameter sensors includes impedance, contraction duration, ventricular synchronization, activation sequence, depolarization and/or repolarization wave characteristics, intervals, responsiveness to cardiac resynchronization, electrode combination activation timing, extra-cardiac stimulation, non-cardiac muscle stimulation (ex. skeletal muscle stimulation), nerve stimulation, anodal cardiac stimulation, contractility, blood pressure, dP/dt, stroke volume, cardiac output, hemodynamics, and stimulation strength/duration relationship, among others. One or more of these sensed parameters can be used in conjunction with the methods discussed herein to select an electrode combination. FIG.4billustrates a method401, the method401comprising implanting471a plurality of cardiac electrodes supported by one or more leads in a patient. The leads are then attached472to a patient external analyzer circuit. The patient external analyzer circuit could be a type of pacing system analyzer (e.g., evaluation unit249). Once attached, electrical stimulation is delivered473using the plurality of cardiac electrodes and the analyzer circuit. The method401can further include evaluating474, for each electrode combination of a plurality of electrode combinations of the plurality of implanted cardiac electrodes, one or more first parameters and one or more second parameters produced by the electrical stimulation delivered using the electrode combination, the first parameters supportive of cardiac function consistent with a prescribed therapy and the second parameters not supportive of cardiac function consistent with the prescribed therapy. The evaluation can include a comparison between respective electrode combinations of beneficial parameters (e.g., first parameters) and non-beneficial parameters (e.g., second parameters) associated with each combination. One or more electrode combinations of the plurality of cardiac electrodes can be selected475. The selection475can be based on the evaluation474. For example, the one or more electrode combinations selected could be selected as being associated with the one or more first parameters and less associated with the one or more second parameters for the one or more electrode combinations relative to other electrode combinations of the plurality of cardiac electrodes. Evaluation474and selection475can be performed in accordance with the various embodiments referenced herein. An implantable pacing circuit can be programmed476to deliver a cardiac pacing therapy that preferentially uses the selected one or more electrode combinations relative to other electrode combinations of the plurality of cardiac electrodes. The steps of evaluating474, selecting475, and programming476can be performed automatically by circuitry, such as the patient external analyzer circuit. 
Before, during, and/or after programming476, the one or more leads can be detached477from the analyzer circuit and then attached478to the implantable pacing circuit. The implantable pacing circuit can be implanted479. After implantation479, cardiac pacing therapy can be delivered480using the implantable pacing circuit preferentially using the selected one or more electrode combinations relative to other electrode combinations of the plurality of cardiac electrodes in whichever manner the implantable pacing circuit is programmed. In some embodiments, evaluating474the first parameters comprises evaluating a capture threshold for each of the plurality of electrode combinations, evaluating474the second parameters comprises evaluating extra-cardiac stimulation, and selecting475the one or more electrode combinations comprises selecting at least one electrode combination of the plurality of electrode combinations with the lowest capture threshold that does not cause extra-cardiac stimulation. The method401may include determining an electrode combination ranking, the ranking having higher ranked one or more electrode combinations that are associated with the one or more first parameters being supportive of cardiac function consistent with a prescribed therapy and are less associated with the one or more second parameters not supportive of cardiac function consistent with the prescribed therapy for the one or more electrode combinations relative to lower ranked electrode combinations of the plurality of cardiac electrodes. Higher ranked electrode combinations can be used first and/or more relative to other electrode combinations by a therapy delivery device having the capability of automatically switching pacing configurations. The method401may include receiving input instructions, wherein selecting the one or more electrode combinations of the plurality of cardiac electrodes is further based on the input instructions. The input instructions may be input by a doctor or other health professional, for example. The ability to input such instructions can enhance the flexibility of a pacing system, as discussed herein. The input instructions may pertain to various different commands and/or parameters. For example, the input instructions may indicate the one or more first parameters and the one or more second parameters from a plurality of different parameters upon which the selection475of the one or more electrode combinations is based. The input instructions may indicate one or more of a maximum pulse amplitude at which the implantable pacing circuit is programmed476to deliver, a minimum pulse amplitude at which the implantable pacing circuit is476programmed to deliver, a maximum pulse width at which the implantable pacing circuit is programmed476to deliver, a minimum pulse width at which the implantable pacing circuit is476programmed to deliver, and which electrode combinations of the plurality of electrodes will be used to deliver473electrical stimulation and be evaluated. The input instructions may indicate one or more electrode combinations for which the first parameter is to be directly measured based on the delivery473of the electrical stimulation and one or more electrode combinations for which the first parameter is to be estimated and not directly measured. In some embodiments, there are at least two stages for a physician to interact with an evaluation unit and input instructions. For example, one stage for input is before the delivery473of the electrical stimulation. 
Such input might concern parameters for testing, such as how many electrode combinations will be tested, what therapy the electrode combinations are being evaluated/selected for (e.g., bi-ventricular pacing), how the selection algorithm is to be run (e.g., with extra weight given to certain parameters for which a patient is particularly susceptible, such as phrenic stimulation in a patient with emphysema), what parameters are to be evaluated, and/or how many electrode combinations are to be selected, among other options disclosed herein. Another stage for input is after the selection475algorithm has been run. In this stage the physician may review the selection, order, and/or ranking of electrode combinations, and provide an approval or rejection. If approved, the selection/order/ranking can be used to program476the implantable pacing circuit. If rejected, testing (e.g., steps473-475) can be redone with different input parameters regarding how the steps are performed (e.g., a change made to any of the inputs discussed in the paragraph above). This stage may also provide an opportunity for a physician to modify the selection/order/ranking (e.g., selecting a different electrode or combination or rearranging the order) with which the implantable pacing circuit is to be programmed476. In some embodiments, a physician is given the option of whether a system of the present invention will automatically accept a selection/order/ranking of electrode combinations and program an implantable medical device with the selection/order/ranking or give the physician the opportunity to review, approve, and/or modify the selection/order/ranking before programming476. Auto-acceptance before programming can minimize the critical time during which a patient is undergoing an operative procedure, while requiring physician approval provides enhanced flexibility. In some embodiments, if the delivery473using, and evaluation474of, an electrode combination using a particular electrode provide poor results (e.g., a very high capture threshold and/or a low extra-cardiac stimulation threshold), then subsequent testing may automatically refrain from using one or both of the electrodes of that combination for further testing (e.g., steps473-474). In some embodiments, one of the electrodes of a poorly performing first combination may be tested (e.g., steps473-474) with a different electrode in a second combination, and if the second combination has improved performance relative to the first then it may be assumed that the other electrode of the first combination (unused in the second combination) is non-ideal and subsequent testing will not use that electrode. But if the second combination also has poor performance, then the electrode used in the first combination but not the second may be tested in a third combination. This manner of testing can minimize the time needed to select475an appropriate electrode combination during surgery and can minimize the number of tests that could be damaging (e.g., when the capture threshold is particularly high, causing the capture threshold test to deliver several high energy stimuli and/or causing damaging extra-cardiac stimulation). 
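The following sketch illustrates the kind of pruning strategy just described, in which an electrode implicated by a poorly performing combination is excluded from further testing; the scoring cutoff, function names, and simulated thresholds are hypothetical assumptions and not the disclosure's algorithm.

    # Hypothetical sketch of pruning electrodes implicated in poor test results
    # so that later capture-threshold tests skip them.
    POOR_CAPTURE_V = 4.0  # hypothetical cutoff for a "poor" capture threshold

    def prune_electrodes(test, first_pair, spare_electrode):
        """test(pair) -> measured capture threshold in volts (hypothetical).
        Returns the set of electrodes to exclude from subsequent testing."""
        a, b = first_pair
        if test(first_pair) < POOR_CAPTURE_V:
            return set()  # first combination is acceptable; nothing to prune
        # Retest electrode a with a different partner.
        if test((a, spare_electrode)) < POOR_CAPTURE_V:
            return {b}  # b was the likely culprit in the poor first combination
        # Second combination also poor: a is suspect; b deserves its own retest.
        return {a}

    # Simulated thresholds: any pair containing "LV4" tests poorly.
    fake = lambda pair: 6.0 if "LV4" in pair else 2.0
    print(prune_electrodes(fake, ("LV4", "LV1"), "LV2"))  # {'LV4'}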
The method401may include comparing respective first and second parameters associated with the electrode combinations between the electrode combinations, determining a ranking for at least some of the electrode combinations of the plurality of electrode combinations, the ranking based on the evaluations474of the first parameters and the second parameters, and switching delivery480of the cardiac pacing therapy from a first prioritized electrode combination of the ranking to a lower prioritized electrode combination of the ranking in response to a detected change in condition. The detected change in condition could be a change in impedance between the first prioritized electrode combination, for example, among the other changes discussed herein. The method401may include identifying a location for implantation of a housing for the implantable pacing circuit, the housing having a housing electrode, and placing a catheter having an electrode at the location, wherein delivering473electrical stimulation using the plurality of cardiac electrodes and the analyzer circuit further comprises delivering electrical stimulation between one or more of the plurality of cardiac electrodes and the catheter electrode, evaluating474further comprises evaluating first and second parameters for each electrode combination using one or more of the plurality of cardiac electrodes and the catheter electrode, and selecting475further comprises selecting one or more electrode combinations of the plurality of cardiac electrodes and the housing electrode based on the evaluation. The flowchart ofFIG.5illustrates how information can be handled and managed according to a process500for selecting one or more electrode combinations. The process500includes an implanted device receiving510user information for electrode combination evaluation. The information used for electrode combination evaluation may be determined by a human. The process500ofFIG.5further includes measuring or estimating520electrode combination parameters identified as beneficial or non-beneficial parameters of interest. Measuring or estimating can be performed according to any method disclosed herein or known in the art. By way of example, the received information may be the parameters of beneficial responsiveness to cardiac resynchronization and non-beneficial arrhythmia induction, among others. The responsiveness to cardiac resynchronization parameter and the arrhythmia induction parameter may then be measured or estimated520for a plurality of electrode combinations. The process500ofFIG.5further includes transmitting530electrode combination parameter information from the pacemaker to a programmer. The process500ofFIG.5further includes displaying540the electrode combination information on the programmer. The programmer can include a LCD screen or other means disclosed herein or known in the art for displaying information. Some or all of the electrode combination information may be displayed. The electrode combination information can be displayed as organized according to a rank, one or more groups, one or more categories, or other information organization scheme. For example, the plurality of electrode combinations could be ranked, the electrode combination associated with the highest relative responsiveness to cardiac resynchronization therapy and the lowest relative occurrence of arrhythmia induction being ranked above electrode combinations with lower relative responsiveness to cardiac resynchronization therapy and higher occurrence of arrhythmia induction. 
In this way, the electrode combinations can be ranked so as to highlight those electrode combinations associated with the highest relative levels of beneficial parameters and the lowest relative levels of non-beneficial parameters, according to a prescribed therapy. The programmer and/or the implantable device may include a processor and execute instructions stored in memory to algorithmically recommend one or more electrode combinations based on the transmitted electrode combination information. The particular recommended electrode combination or electrode combinations can be displayed by the programmer along with other electrode combinations and associated electrode combination parameter information, or the recommended electrode combination or electrode combinations may be displayed by the programmer with electrode combinations that were not recommended. The programmer may display one or more recommended electrode combinations and non-recommended electrode combinations, and visually highlight the one or more recommended electrode combinations. The programmer may display one or more recommended electrode combinations amongst other electrode combinations, but order the one or more recommended electrode combinations to indicate which electrode combination or combinations are recommended. In addition to recommending an electrode combination and displaying the recommended electrode combination, the programmer may also give reasons why the particular electrode combination or combinations were recommended. Although the particular process500ofFIG.5states that the programmer displays the electrode combination information, other implementations are possible. For example, the electrode combination information may be displayed on a screen or printed from a device remote from the programmer. Inputting550the electrode combination selection may be facilitated by a device displaying the electrode combination information, such as by a user selecting or confirming a displayed recommended electrode combination. Inputting550may be done by any methods disclosed herein or known in the art. In some embodiments of the invention, several electrode combination selections can be input by the user to the programmer. The process500ofFIG.5further includes the programmer560uploading an electrode combination selection to a pacemaker. The pacemaker of step560could be the implanted device of step510. Uploading can be facilitated by the same means used to facilitate the implanted device receiving the user criteria, and/or transmitting the electrode combination parameter information. The therapy device600illustrated inFIG.6employs circuitry capable of implementing the electrode combination selection techniques described herein. The therapy device600includes CRM circuitry enclosed within an implantable housing601. The CRM circuitry is electrically coupled to an intracardiac lead system610. Although one intracardiac lead system610is illustrated inFIG.6, various other types of lead/electrode systems may additionally or alternatively be deployed. For example, the lead/electrode system may comprise an epicardial lead/electrode system including electrodes outside the heart and/or cardiac vasculature, such as a heart sock, an epicardial patch, and/or a subcutaneous system having electrodes implanted below the skin surface but outside the ribcage. Portions of the intracardiac lead system610are inserted into the patient's heart. 
The lead system610includes cardiac pace/sense electrodes651-656positioned in, on, or about one or more heart chambers for sensing electrical signals from the patient's heart and/or delivering pacing pulses to the heart. The intracardiac sense/pace electrodes651-656, such as those illustrated inFIG.6, may be used to sense and/or pace one or more chambers of the heart, including the left ventricle, the right ventricle, the left atrium and/or the right atrium. The CRM circuitry controls the delivery of electrical stimulation pulses delivered via the electrodes651-656. The electrical stimulation pulses may be used to ensure that the heart beats at a hemodynamically sufficient rate, may be used to improve the synchrony of the heart beats, may be used to increase the strength of the heart beats, and/or may be used for other therapeutic purposes to support cardiac function consistent with a prescribed therapy. The lead system610includes defibrillation electrodes641,642for delivering defibrillation/cardioversion pulses to the heart. The left ventricular lead605incorporates multiple electrodes654a-654dand655positioned at various locations within the coronary venous system proximate the left ventricle. Stimulating the ventricle at multiple locations in the left ventricle or at a single selected location may provide for increased cardiac output in patients suffering from congestive heart failure (CHF), for example, and/or may provide for other benefits. Electrical stimulation pulses may be delivered via the selected electrodes according to a timing sequence and output configuration that enhances cardiac function. AlthoughFIG.6illustrates multiple left ventricle electrodes, in other configurations, multiple electrodes may alternatively or additionally be provided in one or more of the right atrium, left atrium, and right ventricle. Portions of the housing601of the implantable device600may optionally serve as one or more can681or indifferent682electrodes. The housing601is illustrated as incorporating a header689that may be configured to facilitate removable attachment between one or more leads and the housing601. The housing601of the therapy device600may include one or more can electrodes681. The header689of the therapy device600may include one or more indifferent electrodes682. The can681and/or indifferent682electrodes may be used to deliver pacing and/or defibrillation stimulation to the heart and/or for sensing electrical cardiac signals of the heart. Communications circuitry is disposed within the housing601for facilitating communication between the CRM circuitry and a patient-external device, such as an external programmer or advanced patient management (APM) system. The therapy device600may also include sensors and appropriate circuitry for sensing a patient's metabolic need and adjusting the pacing pulses delivered to the heart and/or updating the electrode combination selection to accommodate the patient's metabolic need. In some implementations, an APM system may be used to perform some of the processes discussed here, including evaluating, estimating, comparing, ordering, selecting, and updating, among others. Methods, structures, and/or techniques described herein may incorporate various APM related methodologies, including features described in one or more of the following references: U.S. Pat. Nos. 
6,221,011; 6,270,457; 6,277,072; 6,280,380; 6,312,378; 6,336,903; 6,358,203; 6,368,284; 6,398,728; and 6,440,066, which are hereby incorporated herein by reference in each of their respective entireties. In certain embodiments, the therapy device600may include circuitry for detecting and treating cardiac tachyarrhythmia via defibrillation therapy and/or anti-tachyarrhythmia pacing (ATP). Configurations providing defibrillation capability may make use of defibrillation coils641,642for delivering high energy pulses to the heart to terminate or mitigate tachyarrhythmia. CRM devices using multiple electrodes, such as illustrated herein, are capable of delivering pacing pulses to multiple sites of the atria and/or ventricles during a cardiac cycle. Certain patients may benefit from activation of parts of a heart chamber, such as a ventricle, at different times in order to distribute the pumping load and/or depolarization sequence to different areas of the ventricle. A multi-electrode pacemaker has the capability of switching the output of pacing pulses between selected electrode combinations within a heart chamber during different cardiac cycles. FIG.7illustrates an enlarged view of the area delineated by the dashed line circle inFIG.6.FIG.7illustrates various pacing configurations754a,754b,754c,754d,754cd, and756that may be used to deliver pacing pulses. Each of the pacing configurations754a,754b,754c,754d,754cd, and756includes a common cathode electrode655. Pacing configuration754ais defined between cathode electrode655and anode electrode654a; pacing configuration754bis defined between cathode electrode655and anode electrode654b; pacing configuration754cis defined between cathode electrode655and anode electrode654c; pacing configuration754dis defined between cathode electrode655and anode electrode654d; pacing configuration756is defined between cathode electrode655and anode electrode656. In some configurations, the pacing configuration cathode, or the pacing configuration anode, or both, may comprise multiple electrodes. For example, pacing configuration754cdincludes cathode electrode655and anode electrodes654cand654d. Each of the pacing configurations discussed above correspond to an electrode combination, and each pacing configuration and electrode combination likewise correspond to a pacing site and/or configuration. Delivering an identical electrical therapy using each electrode combination can elicit a different response from the patient. For example, therapy delivered at one electrode combination may be more likely to capture a chamber than another site. Also, therapy delivered using one electrode combination may be more likely to stimulate the diaphragm than another site. Therefore, it is important to identify the electrode combination through which optimum therapy can be delivered. In some cases, the optimum electrode combination for therapy is one that causes the desired response, using the smallest amount of power (such as battery storage), that does not cause undesirable stimulation. For example, an optimal electrode combination may be an electrode combination through which a delivered therapy captures the intended chamber requiring the smallest amount of voltage and current that does not stimulate the diaphragm or skeletal muscles, or other extra-cardiac tissue. The flowchart ofFIG.8illustrates a process800for estimating parameters, specifically, both beneficial (e.g., capture) and non-beneficial (e.g., undesirable activation) parameters. 
The process800includes measuring810a capture threshold of an initial electrode combination. The procedure for measuring810a capture threshold for the initial electrode combination can be done according to any capture threshold measuring methods disclosed herein or known in the art. The process800ofFIG.8further includes measuring820the impedance of the initial electrode combination. The impedance of the initial electrode combination may be measured with the capture threshold measurement of the initial electrode combination. Any method for measuring impedance for each electrode combination may be used. One illustrative example of techniques and circuitry for determining the impedance of an electrode combination is described in commonly owned U.S. Pat. No. 6,076,015 which is incorporated herein by reference in its entirety. In accordance with this approach, measurement of impedance involves an electrical stimulation source, such as an exciter. The exciter delivers an electrical excitation signal, such as a strobed sequence of current pulses or other measurement stimuli, to the heart between the electrode combination. In response to the excitation signal provided by an exciter, a response signal, e.g., voltage response value, is sensed by impedance detector circuitry. From the measured voltage response value and the known current value, the impedance of the electrode combination may be calculated. The process800ofFIG.8further includes measuring830the impedance of an alternate electrode combination. The measuring step830could be repeated for a plurality of different alternate electrode combinations. The process800ofFIG.8further includes measuring840an undesirable activation threshold of the initial electrode combination. The procedure for measuring840the undesirable activation threshold of the initial electrode combination may be similar to the procedure for measuring810the capture threshold of the initial electrode combination, and may be done concurrently with the measuring810of the capture threshold of the initial electrode combination. Undesirable activation threshold measuring may be performed by iteratively increasing, decreasing, or in some way changing a voltage, current, duration, and/or some other therapy parameter between a series of test pulses that incrementally increase in energy level. One or more sensors can monitor for undesirable activation immediately after each test pulse is delivered. Using these methods, the point at which a parameter change causes undesirable activation can be identified as an undesirable activation threshold. By way of example and not by way of limitation, the undesirable activation threshold for an electrode combination may be measured by delivering first test pulse using the initial electrode combination. During and/or after each test pulse is delivered, sensors can monitor for undesirable activation. For example, an accelerometer may monitor for contraction of the diaphragm indicating that the test pulse stimulated the phrenic nerve and/or diaphragm muscle. If no phrenic nerve and/or diaphragm muscle activation is detected after delivery of a test pulse, then the test pulse is increased a predetermined amount and another test pulse is delivered. This scanning process of delivering, monitoring, and incrementing is repeated until phrenic nerve and/or diaphragm muscle activation is detected. 
One or more of the test pulse energy parameters at which the first undesirable activation is detected, such as voltage, can be considered to be the undesirable activation threshold. The process800ofFIG.8further includes estimating850a capture threshold of the alternate electrode combination. Estimating850the capture threshold of the alternate electrode combination can be performed by using the capture threshold and the impedance of the initial electrode combination and the impedance of the alternate electrode combination. Estimation of the capture threshold of the alternate electrode combination, in accordance with some embodiments described herein, is based on the assumption that for a given pulse width, the capture threshold voltage for the initial electrode combination and the capture threshold voltage for the alternate electrode combination require an equal amount of current, energy or charge. The relationship between the capture threshold voltage and current for each electrode combination can be defined by Ohm's law as follows: Vth = Ith × Z  [1], where Vth is the capture threshold voltage of the electrode combination, Ith is the capture threshold current of the electrode combination, and Z is the impedance of the electrode combination. For the initial electrode combination, the relationship between the capture threshold voltage and current may be expressed as: Vth-in = Ith-in × Zin  [2], where Vth-in is the capture threshold voltage of the initial electrode combination, Ith-in is the capture threshold current of the initial electrode combination, and Zin is the impedance of the initial electrode combination. For the alternate electrode combination, the relationship between the capture threshold voltage and current may be expressed as: Vth-ex = Ith-ex × Zex  [3], where Vth-ex is the capture threshold voltage of the alternate electrode combination, Ith-ex is the capture threshold current of the alternate electrode combination, and Zex is the impedance of the alternate electrode combination. As previously stated, in some embodiments, the capture threshold current of two electrode combinations having a common electrode is assumed to be about equal, or Ith-in = Ith-ex. The relationship between the alternate and initial capture threshold voltages may then be expressed as: Vth-ex = (Vth-in / Zin) × Zex  [4]. By the processes outlined above, Vth-in, Zin, and Zex are measured parameters, and the capture threshold voltage of the alternate electrode combination may be estimated based on these measured parameters. The accuracy of an estimation calculation of a capture threshold for a particular electrode combination may be increased if the measured electrode combination has the same polarity as the electrode combination for which the capture threshold is being estimated. Methods for parameter estimation, including capture threshold estimation, are disclosed in United States Publication No. 20080046019, herein incorporated by reference in its entirety. The process800ofFIG.8further includes estimating860an undesirable activation threshold of the alternate electrode combination. Estimating860the undesirable activation threshold of the alternate electrode combination can be performed by using the undesirable activation threshold and the impedance of the initial electrode combination and the impedance of the alternate electrode combination. Estimating860the undesirable activation threshold of the alternate electrode combination can be performed using methods similar to those for estimating a capture threshold, as discussed and referenced herein. 
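A minimal numerical sketch of the estimation in equation [4] follows; it simply scales the measured threshold voltage by the impedance ratio under the equal-current assumption, and the example measurement values are hypothetical.

    # Hypothetical sketch of equation [4]: estimate the capture threshold voltage
    # of an alternate electrode combination from measured quantities, assuming the
    # capture threshold current is the same for both combinations.
    def estimate_alternate_threshold(vth_initial_v, z_initial_ohms, z_alternate_ohms):
        """Vth-ex = (Vth-in / Zin) * Zex."""
        ith = vth_initial_v / z_initial_ohms  # capture threshold current (A)
        return ith * z_alternate_ohms         # estimated Vth-ex (V)

    # Example: 1.5 V threshold at 500 ohms, alternate combination at 700 ohms.
    print(estimate_alternate_threshold(1.5, 500.0, 700.0))  # approximately 2.1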
Estimating a threshold, such as estimating a capture threshold and/or an undesirable activation threshold, instead of measuring the same, can provide several advantages. For example, in some circumstances, measuring and estimating of some thresholds for a plurality of electrode combinations can be done faster than measuring the threshold for each electrode combination of the plurality of electrode combinations, as one or more test pulses do not need to be delivered for each electrode combination. Additionally, a test pulse can be uncomfortable for a patient to experience, and therefore minimizing the number of test pulses can be preferable. Appropriate selection of the energy parameters and an electrode combination that produce the desired activation that supports cardiac function and avoid the undesirable activation, consistent with a prescribed therapy, can involve the use of strength-duration relationships measured or otherwise provided. The selection of an electrode combination may involve evaluating the cardiac response across ranges of one or more of pulse width, pulse amplitude, frequency, duty cycle, pulse geometry, and/or other energy parameters. Capture is produced by pacing pulses having sufficient energy to produce a propagating wavefront of electrical depolarization that results in a contraction of the heart tissue. The energy of the pacing pulse is a product of two energy parameters: the amplitude of the pacing pulse and the duration of the pulse. Thus, the capture threshold voltage over a range of pulse widths may be expressed in a strength-duration plot910as illustrated inFIG.9. Undesirable activation by a pacing pulse is also dependent on the pulse energy. The strength-duration plot920for undesirable activation may have a different characteristic from the capture strength-duration plot910, reflecting a different relationship between pacing pulse voltage and pacing pulse width. A CRM device, such as a pacemaker, may have the capability to adjust the pacing pulse energy by modifying either or both the pulse width and the pulse amplitude to produce capture. Identical changes in pacing pulse energy may cause different changes when applied to identical therapies using different electrode combinations. Determining a strength-duration plot910for a plurality of electrode combinations can aid in selecting an electrode combination, as the strength-duration plots can be a basis for comparison of beneficial and non-beneficial pacing characteristics and parameters. FIG.9provides graphs illustrating a strength-duration plot910associated with capture and a strength-duration plot920associated with an undesirable activation. A pacing pulse having a pulse width of W1 requires a pulse amplitude of Vc1 to produce capture. A pacing pulse having pulse width W1 and pulse amplitude Vc1 exceeds the voltage threshold, Vu1, for an undesirable activation. If the pulse width is increased to W2, the voltage required for capture, Vc2, is less than the voltage required for undesirable activation, Vu2. Therefore, pacing pulses can be delivered at the pacing energy associated with W2 and Vc2 to provide capture of the heart without causing the undesirable activation. The shaded area950between the plots910,920indicates the energy parameter values that may be used to produce capture and avoid undesirable activation. If multiple-point strength-duration plots are known for capture and undesirable activation, the energy parameters for a particular electrode combination may be determined based on these two plots. 
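By way of illustration only, the sketch below uses hypothetical tabulated strength-duration points and hypothetical safety margins to identify pulse widths at which a usable voltage window exists between the capture curve and the undesirable-activation curve; none of the numeric values are taken from this disclosure.

    # Hypothetical sketch: find pulse widths at which pacing can capture the heart
    # (with a safety margin above the capture curve) without reaching the
    # undesirable-activation curve (with a safety margin below it).
    capture_curve = {0.2: 3.5, 0.4: 2.2, 0.6: 1.6, 0.8: 1.3, 1.0: 1.1}    # ms -> V
    undesired_curve = {0.2: 3.0, 0.4: 2.6, 0.6: 2.4, 0.8: 2.3, 1.0: 2.2}  # ms -> V

    def safe_window(capture, undesired, capture_margin=0.5, undesired_margin=0.3):
        """Return {pulse_width: (min_safe_v, max_safe_v)} for widths where a
        margin-adjusted window exists between the two curves."""
        window = {}
        for width in sorted(capture):
            low = capture[width] + capture_margin       # must exceed capture plus margin
            high = undesired[width] - undesired_margin  # must stay below activation minus margin
            if low < high:
                window[width] = (low, high)
        return window

    # Widths of 0.8 ms and longer leave usable headroom in this hypothetical data.
    print(safe_window(capture_curve, undesired_curve))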
For example, returning toFIG.9, the area950to the right of the intersection951of the strength-duration plots910,920defines the set of energy parameter values that produce capture while avoiding undesirable stimulation. Energy parameter values that fall within this region950, or within a modified region960that includes appropriate safety margins for pacing961and undesirable activation962, may be selected. According to some embodiments of the present invention, various parameters and/or characteristics, such as ranges, windows, and/or areas, of the plots ofFIG.9may be used in selecting an electrode combination. For example, equivalent strength-duration plots910and strength-duration plot920associated with an undesirable activation may be generated for each of a plurality of electrode combinations. Then the respective areas960and/or950may be compared between the electrode combinations, the comparison used to determine an order for the electrode combinations. Because the parameters represented by area960represent the available ranges of voltage and pulse width within an acceptable safety margin, electrode combinations with relatively large area960may be favorably ranked in an electrode combination order. A comparison can also be made between various electrode combinations of the voltage ranges, at a specific pulse width, that captures the heart without causing undesirable stimulation, with priority in the order being given to electrode combinations with the largest ranges. Strength-duration plots, such as plots910and920, can provide other parameters for evaluating and comparing to order electrode combinations and select an electrode combination. For example, criteria for selecting an electrode combination may specify that the selected combination is the combination with the lowest capture threshold that does not exceed a certain pulse width. Methods and systems for determining and using strength-duration relationships are described in United States Publication No. 20080071318, which is incorporated herein by reference in its entirety. The flowchart ofFIG.10illustrates a process1000for determining capture thresholds for a plurality of electrode combinations. The process1000includes initiating1010a step down threshold test, and setting an initial pacing energy. The process1000further includes delivering1020a pacing pulse at pacing energy to an electrode combination. The electrode combination may be an initial electrode combination. The pacing energy may be the initial pacing energy, particularly in the case where step1020has not been previously performed. After delivery1020of the pacing pulse, the process monitors to determine whether loss of capture is detected1030. If loss of capture is detected, then the process1000proceeds to determining1040other beneficial parameters, and storing the beneficial parameter information. The other beneficial parameters determined could be any of the beneficial parameters discussed herein or known in the art that support cardiac function consistent with a prescribed therapy. Examples of such beneficial parameters include electrode combination responsiveness to CRT, low battery consumption, and cardiac output, among other parameters. The process determines1060non-beneficial parameters, and stores the non-beneficial parameter information. The non-beneficial parameters determined could be any of the non-beneficial parameters discussed herein or known in the art. 
Examples of such non-beneficial parameters include extra-cardiac stimulation and anodal stimulation, among other parameters. After determining1060non-beneficial parameters, the process1000proceeds to decrease1070the electrode combination energy. After the electrode combination energy is decreased1070, a pacing pulse is delivered1020using the electrode combination using the energy level to which the energy level was decreased. In this way, steps1020,1030,1040,1060, and1070can be repeated, decreasing1070the pacing energy for the electrode combination until loss of capture is detected1030. As such, steps1010,1020,1030,1040,1060, and1070can scan for a capture threshold, the capture threshold being stored1050in memory for the electrode combination once it has been identified by a detected loss of capture1030. After detecting loss of capture1030and storing1050the capture threshold for the electrode combination, the process1000evaluates whether there are more electrode combinations to test1090. If there are more electrode combinations to test, then the process1000switches1080to the next electrode combination and repeats steps1020,1030,1040,1060, and1070to determine the capture threshold for the next electrode combination. When there are no more electrode combinations to test1090, the test ends1095. As such, process1000can be used to determine the capture threshold, beneficial parameters, and non-beneficial parameters for one or more of a plurality of electrode combinations. This information can then be used in conjunction with other methods disclosed herein to select an electrode combination, among other things. Although the process1000ofFIG.10used a step down capture threshold test, in other implementations, the capture threshold test may involve a step-up capture threshold test, a binary search test, or may involve other capture threshold testing methods as are known in the art. Similar methods to those discussed herein can be used to determine other parameter thresholds. The capture threshold of an electrode combination may change over time due to various physiological effects. Testing the capture threshold for a particular electrode combination may be implemented periodically or on command to ensure that the pacing energy delivered to the particular electrode combination remains sufficient to produce capture. The flowchart ofFIG.11illustrates a process1100for automatically updating a therapy electrode combination after an initial selection. Beneficial parameters and non-beneficial parameters are measured or estimated1110for a plurality of electrode combinations. Step1110can be scheduled to occur at implant, or could be initiated after implant. As in other embodiments discussed herein, the beneficial parameters can be parameters that support cardiac function consistent with a prescribed therapy and the non-beneficial parameters can be parameters that do not support cardiac function consistent with a prescribed therapy. After the beneficial and non-beneficial parameters are evaluated1110, the beneficial and non-beneficial parameters are compared1120. Based on the comparison, electrode combinations are selected1130. Therapy is then delivered1140using the selected electrode combinations. After therapy is delivered1140using the selected electrode combinations, the process1100evaluates whether a periodic update is required1150. A periodic update could be mandated by a programmed update schedule, or may be performed upon command. 
If no periodic update is required, then therapy continues to be delivered1140using the selected electrode combinations. However, if a periodic update is required, then the process automatically re-measures or re-estimates1160beneficial and non-beneficial parameters for the plurality of electrode combinations. Automatically re-measuring or re-estimating1160could be performed by a method similar or identical to the method used to measure or estimate beneficial parameters1110at implant. After re-measuring or re-estimating the beneficial and non-beneficial parameters, the re-measured or re-estimated parameters are compared1120, such that electrode combinations may then be selected1130and used to deliver1140a therapy. The flowchart ofFIG.12illustrates a process1200for ranking electrode combinations and changing the electrode combination being used for therapy delivery using the ranking. The process1200begins with measuring or estimating1210beneficial parameters and non-beneficial parameters for a plurality of electrode combinations. As in other embodiments discussed herein, the beneficial parameters can be parameters that support cardiac function consistent with a prescribed therapy and the non-beneficial parameters can be parameters that do not support cardiac function consistent with a prescribed therapy. After the beneficial and non-beneficial parameters are measured or estimated1210, the beneficial and non-beneficial parameters are ranked1220. Ranking can include establishing a hierarchical relationship between a plurality of electrode combinations based on parameters. In such embodiments, the highest ranked electrode combination may be the electrode combination with the most favorable beneficial parameter and non-beneficial parameter values relative to other electrode combinations, which are likewise ordered in a rank. Based on the ranking, electrode combinations are selected1230. Therapy is then delivered1240using the selected electrode combinations. After therapy is delivered1240using the selected electrode combinations, the process1200senses1250for one or more conditions indicative of a change in the patient's status. In some embodiments of the invention, a sensed change in the patient status could include a sensed change in activity level, posture, respiration, electrode position, body fluid chemistry, blood or airway oxygen level, blood pressure, hydration, hemodynamics, or electrode combination impedance, among other events. If no status change is detected1260, then therapy continues to be delivered1240using the selected electrode combinations. However, if a status change is detected1260, then the process selects1270the next ranked electrode combination or sites for therapy delivery and delivers1240therapy via the selected site or sites. According to the particular process1200ofFIG.12, no re-measuring or re-estimating of parameters is needed, as the process uses the ranking determined in step1220. Although the embodiment ofFIG.12uses a ranking method to order the electrode combinations, other ordering methods are contemplated within the scope of the present invention. Ordering may include grouping, attributing, categorizing, or other processes that are based on parameter evaluations. Ordering can include grouping a plurality of electrode combinations according to one or more of the parameters that support cardiac function and one or more of the parameters that do not support cardiac function, consistent with a prescribed therapy.
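A compact sketch of the ranking-and-switching behavior of process1200is shown below. It is illustrative only: the scoring weights, the status-change handling, and the parameter names are assumptions, not values taken from the disclosure.

```python
# Illustrative sketch of FIG. 12: rank electrode combinations once (1210-1220),
# then step down the ranked list whenever a patient-status change is sensed
# (1250-1270), without re-measuring parameters.

def rank_combinations(evaluations, w_beneficial=1.0, w_non_beneficial=2.0):
    """evaluations: {combo: {'beneficial': float, 'non_beneficial': float}}.
    Higher beneficial scores and lower non-beneficial scores rank first."""
    def score(item):
        name, ev = item
        return w_beneficial * ev["beneficial"] - w_non_beneficial * ev["non_beneficial"]
    return [name for name, _ in sorted(evaluations.items(), key=score, reverse=True)]

class TherapySelector:
    def __init__(self, ranked_combinations):
        self.ranking = list(ranked_combinations)
        self.index = 0                      # currently selected combination (1230)

    def current(self):
        return self.ranking[self.index]

    def on_status_change(self):
        """Select the next-ranked combination when a status change is detected."""
        if self.index + 1 < len(self.ranking):
            self.index += 1
        return self.current()

# Example with made-up parameter evaluations.
evals = {"LV1-RVcoil": {"beneficial": 0.8, "non_beneficial": 0.1},
         "LV2-can":    {"beneficial": 0.9, "non_beneficial": 0.6},
         "LV3-RVring": {"beneficial": 0.7, "non_beneficial": 0.0}}
selector = TherapySelector(rank_combinations(evals))
print(selector.current())          # highest-ranked combination
print(selector.on_status_change()) # switch to the next-ranked combination
```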
For example, the electrode combinations of the plurality of electrode combinations can be grouped in various categories, each category associated with a different type of detected undesirable stimulation (e.g., phrenic nerve stimulation, anodal stimulation, excessive impedance) and/or parameter that does support cardiac function (e.g., low capture threshold, low impedance). In some applications, it is desirable to select pacing electrodes based on a number of interrelated parameters. For example, in cardiac resynchronization therapy (CRT), which involves left ventricular pacing, it is desirable to deliver pacing pulses that capture the heart tissue to produce a left ventricular contraction without unwanted stimulation to other body structures. However, the pacing therapy may be ineffective or less effective if pacing is delivered to a site that is a non-responder site to CRT. Thus, selection of a responder site for therapy delivery should also be taken into account. In some embodiments, the electrode selection may consider several interrelated parameters, ordering, ranking, grouping and/or recommending the electrode combinations to achieve specific therapeutic goals. In some embodiments, the ordering, ranking, grouping and/or recommending may be performed using a multivariable optimization procedure. Electrode selection using some level of algorithmic automaticity is particularly useful when a large number of electrode combinations are possible in conjunction with the evaluation of several parameters. Ordering can be based on the evaluations of any number of different parameters that support cardiac function consistent with a prescribed therapy and any number of parameters that do not support cardiac function consistent with a prescribed therapy. For example, ordering can be based on a comparison of the respective evaluations of two different parameters that each support cardiac function consistent with a prescribed therapy and one or more parameters that do not support cardiac function consistent with a prescribed therapy, each evaluation conducted for each electrode combination of a plurality of electrode combinations. In this example, the two different parameters that support cardiac function consistent with a prescribed therapy could be left ventricular capture threshold and improved hemodynamics, while the parameter that does not support cardiac function consistent with a prescribed therapy could be phrenic nerve activation. Evaluating, ordering, and other comparisons of the present invention based on multiple parameters can include one, two, three, four, five, or more different parameters that support cardiac function consistent with a prescribed therapy and one, two, three, four, five, or more different parameters that do not support cardiac function consistent with a prescribed therapy. In some embodiments of the invention, not all possible electrode combinations will be evaluated. For example, a very high capture threshold associated with a first electrode combination may indicate that another electrode combination using the cathode or the anode of the first electrode combination may also have a very high capture threshold. In such cases, evaluations of parameters for electrode combinations using those electrodes and/or electrodes proximate one of those electrodes will not be conducted.
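To make the grouping and pruning ideas concrete, the sketch below groups hypothetical electrode combinations by the type of undesirable stimulation detected and skips evaluation of candidate combinations that share an electrode with a combination already found to have a very high capture threshold. All names, data structures, and threshold values are assumptions chosen for illustration.

```python
# Illustrative sketch: (1) group combinations by detected undesirable stimulation,
# (2) prune candidates sharing an electrode with a known high-threshold combination.
from collections import defaultdict

HIGH_THRESHOLD_V = 4.0   # assumed cutoff for a "very high" capture threshold

def group_by_undesirable(evaluations):
    """evaluations: {('LV1','RVcoil'): {'undesirable': 'phrenic', ...}, ...}"""
    groups = defaultdict(list)
    for combo, ev in evaluations.items():
        groups[ev.get("undesirable", "none")].append(combo)
    return dict(groups)

def prune_candidates(candidates, evaluated):
    """Skip candidates that reuse an electrode from a high-threshold combination."""
    bad_electrodes = {e for combo, ev in evaluated.items()
                      if ev["capture_threshold_v"] > HIGH_THRESHOLD_V
                      for e in combo}
    return [c for c in candidates if not (set(c) & bad_electrodes)]

# Example with made-up data.
evaluated = {("LV1", "RVcoil"): {"capture_threshold_v": 5.5, "undesirable": "phrenic"},
             ("LV2", "can"):    {"capture_threshold_v": 1.2, "undesirable": "none"}}
candidates = [("LV1", "can"), ("LV3", "RVcoil"), ("LV4", "can")]
print(group_by_undesirable(evaluated))
print(prune_candidates(candidates, evaluated))  # drops combos using LV1 or RVcoil
```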
Forgoing evaluation of electrode combinations that are likely to perform poorly, based on the performance of similar electrode combinations, can save evaluation time and energy and can avoid unnecessary stimulation while testing patient response. The decision to forgo evaluating certain electrode combinations can be based on any of the other parameters discussed herein. The components, functionality, and structural configurations depicted herein are intended to provide an understanding of various features and combinations of features that may be incorporated in an implantable pacemaker/defibrillator. It is understood that a wide variety of cardiac monitoring and/or stimulation device configurations are contemplated, ranging from relatively sophisticated to relatively simple designs. As such, particular cardiac device configurations may include particular features as described herein, while other such device configurations may exclude particular features described herein. Various modifications and additions can be made to the preferred embodiments discussed hereinabove without departing from the scope of the present invention. Accordingly, the scope of the present invention should not be limited by the particular embodiments described above, but should be defined only by the claims set forth below and equivalents thereof.
11857796
DETAILED DESCRIPTION In various implementations, systems and methods are disclosed for applying one or more electrical impulses to targeted excitable tissue, such as nerves, for treating chronic pain, inflammation, arthritis, sleep apnea, seizures, incontinence, pain associated with cancer, problems of movement initiation and control, involuntary movements, vascular insufficiency, heart arrhythmias, obesity, diabetes, craniofacial pain, such as migraines or cluster headaches, and other disorders. In certain embodiments, a wireless stimulation device may be used to send electrical energy to targeted nerve tissue by using remote radio frequency (RF) energy with neither cables nor inductive coupling to power the passive implanted wireless stimulation device. The targeted nerves can include, but are not limited to, the spinal cord and surrounding areas, including the dorsal horn, dorsal root ganglion, the exiting nerve roots, nerve ganglions, the dorsal column fibers and the peripheral nerve bundles leaving the dorsal column and brain, such as the vagus, occipital, trigeminal, hypoglossal, sacral, and coccygeal nerves, and the like. A wireless stimulation system can include an implantable, wireless stimulation device with one or more electrodes and an enclosure that houses one or more conductive antennas (for example, dipole or patch antennas), and internal circuitry for frequency waveform and electrical energy rectification. The system may further comprise an external controller and antenna for sending radio frequency or microwave energy from an external source to the implantable device with neither cables nor inductive coupling to provide power. In various embodiments, the implantable device is powered wirelessly (and therefore does not require a wired connection) and contains the circuitry necessary to receive the pulse instructions from a source external to the body. For example, various embodiments employ internal dipole (or other) antenna configuration(s) to receive RF power through electrical radiative coupling. This allows such devices to produce electrical currents capable of stimulating nerve bundles without a physical connection to an implantable pulse generator (IPG) or use of an inductive coil. Further descriptions of exemplary wireless systems for providing neural stimulation to a patient can be found in commonly-assigned, co-pending published PCT applications PCT/US2012/23029, filed Jan. 27, 2012, PCT/US2012/32200, filed Apr. 11, 2012, PCT/US2012/48903, filed Jan. 28, 2012, PCT/US2012/50633, filed Aug. 12, 2012, PCT/US2012/55746, filed Sep. 15, 2012, and PCT/US2013/073326, filed Dec. 5, 2013, the complete disclosures of which have been previously incorporated by reference. FIG.1depicts a high-level diagram of an example of a wireless stimulation system. The wireless stimulation system may include four major components, namely, a programmer module102, an RF pulse generator module106, a transmit (TX) antenna110(for example, a patch antenna, slot antenna, or a dipole antenna), and an implantable wireless stimulation device114. The programmer module102may be a computer device, such as a smart phone, running a software application that supports a wireless connection104, such as Bluetooth®. The application can enable the user to view the system status and diagnostics, change various parameters, increase/decrease the desired stimulus amplitude of the electrode pulses, and adjust feedback sensitivity of the RF pulse generator module106, among other functions.
The RF pulse generator module106may include communication electronics that support the wireless connection104, the stimulation circuitry, and the battery to power the generator electronics. In some implementations, the RF pulse generator module106includes the TX antenna embedded into its packaging form factor while, in other implementations, the TX antenna is connected to the RF pulse generator module106through a wired connection108or a wireless connection (not shown). The TX antenna110may be coupled directly to tissue to create an electric field that powers the implanted neural stimulator module114. The TX antenna110communicates with the implanted neural stimulator module114through an RF interface. For instance, the TX antenna110radiates an RF transmission signal that is modulated and encoded by the RF pulse generator module106. The implanted wireless stimulation device of module114contains one or more antennas, such as dipole antenna(s), to receive and transmit through RF interface112. In particular, the coupling mechanism between antenna110and the one or more antennas on the implanted wireless stimulation device of module114utilizes electrical radiative coupling and not inductive coupling. In other words, the coupling is through an electric field rather than a magnetic field. Through this electrical radiative coupling, the TX antenna110can provide an input signal to the implanted stimulation module114. This input signal contains energy and may contain information encoding stimulus waveforms to be applied at the electrodes of the implanted stimulation module114. In some implementations, the power level of this input signal directly determines an applied amplitude (for example, power, current, or voltage) of the one or more electrical pulses created using the electrical energy contained in the input signal. Within the implanted wireless stimulation device114are components for demodulating the RF transmission signal, and electrodes to deliver the stimulation to surrounding neuronal tissue. The RF pulse generator module106can be implanted subcutaneously, or it can be worn external to the body. When external to the body, the RF generator module106can be incorporated into a belt or harness design to allow for electric radiative coupling through the skin and underlying tissue to transfer power and/or control parameters to the implanted wireless stimulation device module114. In either event, receiver circuit(s) internal to the wireless stimulation device114(or connector device1400shown inFIG.14A) can capture the energy radiated by the TX antenna110and convert this energy to an electrical waveform. The receiver circuit(s) may further modify the waveform to create an electrical pulse suitable for the stimulation of neural tissue. In some implementations, the RF pulse generator module106can remotely control the stimulus parameters (that is, the parameters of the electrical pulses applied to the neural tissue) and monitor feedback from the wireless stimulation device114based on RF signals received from the implanted wireless stimulation device module114. A feedback detection algorithm implemented by the RF pulse generator module106can monitor data sent wirelessly from the implanted wireless stimulation device module114, including information about the energy that the implanted wireless stimulation device114is receiving from the RF pulse generator and information about the stimulus waveform being delivered to the electrode pads.
In order to provide an effective therapy for a given medical condition, the system can be tuned to provide the optimal amount of excitation or inhibition to the nerve fibers by electrical stimulation. A closed loop feedback control method can be used in which the output signals from the implanted wireless stimulation device114are monitored and used to determine the appropriate level of neural stimulation current for maintaining effective neuronal activation, or, in some cases, the patient can manually adjust the output signals in an open loop control method. FIG.2depicts a detailed diagram of an example of the wireless stimulation system. As depicted, the programming module102may comprise user input system202and communication subsystem208. The user input system202may allow various parameter settings to be adjusted (in some cases, in an open loop fashion) by the user in the form of instruction sets. The communication subsystem208may transmit these instruction sets (and other information) via the wireless connection104, such as Bluetooth or Wi-Fi, to the RF pulse generator module106, as well as receive data from module106. For instance, the programmer module102, which can be utilized for multiple users, such as a patient's control unit or clinician's programmer unit, can be used to send stimulation parameters to the RF pulse generator module106. The stimulation parameters that can be controlled may include pulse amplitude, pulse frequency, and pulse width in the ranges shown in Table 1. In this context the term pulse refers to the phase of the waveform that directly produces stimulation of the tissue; the parameters of the charge-balancing phase (described below) can similarly be controlled. The patient and/or the clinician can also optionally control overall duration and pattern of treatment.

TABLE 1. Stimulation Parameters
Pulse Amplitude: 0 to 20 mA
Pulse Frequency: 0 to 10,000 Hz
Pulse Width: 0 to 2 ms

The RF pulse generator module106may be initially programmed to meet the specific parameter settings for each individual patient during the initial implantation procedure. Because medical conditions or the body itself can change over time, the ability to re-adjust the parameter settings may be beneficial to ensure ongoing efficacy of the neural modulation therapy. The programmer module102may be functionally a smart device and associated application. The smart device hardware may include a CPU206and be used as a vehicle to handle touchscreen input on a graphical user interface (GUI)204, for processing and storing data. The RF pulse generator module106may be connected via wired connection108to an external TX antenna110. Alternatively, both the antenna and the RF pulse generator are located subcutaneously (not shown). The signals sent by RF pulse generator module106to the implanted wireless stimulation device114may include both power and parameter-setting attributes with regard to stimulus waveform, amplitude, pulse width, and frequency. The RF pulse generator module106can also function as a wireless receiving unit that receives feedback signals from the implanted wireless stimulation device114. To that end, the RF pulse generator module106may contain microelectronics or other circuitry to handle the generation of the signals transmitted to the device114as well as handle feedback signals, such as those from the device114. For example, the RF pulse generator module106may comprise controller subsystem214, high-frequency oscillator218, RF amplifier216, an RF switch, and a feedback subsystem212.
The controller subsystem214may include a CPU230to handle data processing, a memory subsystem228such as a local memory, communication subsystem234to communicate with programmer module102(including receiving stimulation parameters from the programmer module), pulse generator circuitry236, and digital/analog (D/A) converters232. The controller subsystem214may be used by the patient and/or the clinician to control the stimulation parameter settings (for example, by controlling the parameters of the signal sent from RF pulse generator module106to the device114). These parameter settings can affect, for example, the power, current level, or shape of the one or more electrical pulses. The programming of the stimulation parameters can be performed using the programming module102, as described above, to set the repetition rate, pulse width, amplitude, and waveform that will be transmitted by RF energy to the receive (RX) antenna238, typically a dipole antenna (although other types may be used), in the implanted wireless stimulation device114. The clinician may have the option of locking and/or hiding certain settings within the programmer interface, thus limiting the patient's ability to view or adjust certain parameters because adjustment of certain parameters may require detailed medical knowledge of neurophysiology, neuroanatomy, protocols for neural modulation, and safety limits of electrical stimulation. The controller subsystem214may store received parameter settings in the local memory subsystem228, until the parameter settings are modified by new input data received from the programming module102. The CPU230may use the parameters stored in the local memory to control the pulse generator circuitry236to generate a stimulus waveform that is modulated by a high frequency oscillator218in the range from 300 MHz to 8 GHz (preferably between about 700 MHz and 5.8 GHz and more preferably between about 800 MHz and 1.3 GHz). The resulting RF signal may then be amplified by RF amplifier216and then sent through an RF switch223to the TX antenna110to reach through depths of tissue to the RX antenna238. In some implementations, the RF signal sent by TX antenna110may simply be a power transmission signal used by the wireless stimulation device module114to generate electric pulses. In other implementations, a telemetry signal may also be transmitted to the wireless stimulation device module114to send instructions about the various operations of the wireless stimulation device module114. The telemetry signal may be sent by the modulation of the carrier signal (through the skin if external, or through other body tissues if the pulse generator module106is implanted subcutaneously). The telemetry signal is used to modulate the carrier signal (a high frequency signal) that is coupled onto the implanted antenna(s)238and does not interfere with the input received on the same device to power the wireless stimulation device. In one embodiment the telemetry signal and powering signal are combined into one signal, where the RF telemetry signal is used to modulate the RF powering signal, and thus the wireless stimulation device is powered directly by the received telemetry signal; separate subsystems in the wireless stimulation device harness the power contained in the signal and interpret the data content of the signal.
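As a small illustration of how programmer-supplied settings might be checked before being applied, the sketch below validates a requested stimulation program against the ranges of Table 1 and the carrier-frequency range given above. The numeric ranges come from the text; the class, function names, and example values are assumptions for illustration.

```python
# Minimal sketch: validate requested stimulation settings against the ranges
# stated in Table 1 (pulse amplitude, frequency, width) and the carrier band.
from dataclasses import dataclass

@dataclass
class StimulationSettings:
    pulse_amplitude_ma: float   # 0 to 20 mA
    pulse_frequency_hz: float   # 0 to 10,000 Hz
    pulse_width_ms: float       # 0 to 2 ms
    carrier_hz: float           # 300 MHz to 8 GHz

def validate(settings: StimulationSettings) -> list:
    """Return a list of human-readable violations (empty if valid)."""
    errors = []
    if not (0 <= settings.pulse_amplitude_ma <= 20):
        errors.append("pulse amplitude outside 0-20 mA")
    if not (0 <= settings.pulse_frequency_hz <= 10_000):
        errors.append("pulse frequency outside 0-10,000 Hz")
    if not (0 <= settings.pulse_width_ms <= 2):
        errors.append("pulse width outside 0-2 ms")
    if not (300e6 <= settings.carrier_hz <= 8e9):
        errors.append("carrier frequency outside 300 MHz-8 GHz")
    return errors

# Example: a program near the middle of the preferred carrier band (~915 MHz).
print(validate(StimulationSettings(5.0, 100.0, 0.2, 915e6)))   # -> []
print(validate(StimulationSettings(25.0, 100.0, 0.2, 915e6)))  # -> amplitude error
```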
The RF switch223may be a multipurpose device such as a dual directional coupler, which passes the relatively high amplitude, extremely short duration RF pulse to the TX antenna110with minimal insertion loss while simultaneously providing two low-level outputs to feedback subsystem212; one output delivers a forward power signal to the feedback subsystem212, where the forward power signal is an attenuated version of the RF pulse sent to the TX antenna110, and the other output delivers a reverse power signal to a different port of the feedback subsystem212, where reverse power is an attenuated version of the reflected RF energy from the TX Antenna110. During the on-cycle time (when an RF signal is being transmitted to wireless stimulation device114), the RF switch223is set to send the forward power signal to feedback subsystem. During the off-cycle time (when an RF signal is not being transmitted to the wireless stimulation device module114), the RF switch223can change to a receiving mode in which the reflected RF energy and/or RF signals from the wireless stimulation device module114are received to be analyzed in the feedback subsystem212. The feedback subsystem212of the RF pulse generator module106may include reception circuitry to receive and extract telemetry or other feedback signals from the wireless stimulation device114and/or reflected RF energy from the signal sent by TX antenna110. The feedback subsystem may include an amplifier226, a filter224, a demodulator222, and an A/D converter220. The feedback subsystem212receives the forward power signal and converts this high-frequency AC signal to a DC level that can be sampled and sent to the controller subsystem214. In this way the characteristics of the generated RF pulse can be compared to a reference signal within the controller subsystem214. If a disparity (error) exists in any parameter, the controller subsystem214can adjust the output to the RF pulse generator106. The nature of the adjustment can be, for example, proportional to the computed error. The controller subsystem214can incorporate additional inputs and limits on its adjustment scheme such as the signal amplitude of the reverse power and any predetermined maximum or minimum values for various pulse parameters. The reverse power signal can be used to detect fault conditions in the RF-power delivery system. In an ideal condition, when TX antenna110has perfectly matched impedance to the tissue that it contacts, the electromagnetic waves generated from the RF pulse generator106pass unimpeded from the TX antenna110into the body tissue. However, in real-world applications a large degree of variability may exist in the body types of users, types of clothing worn, and positioning of the antenna110relative to the body surface. Since the impedance of the antenna110depends on the relative permittivity of the underlying tissue and any intervening materials, and also depends on the overall separation distance of the antenna from the skin, in any given application there can be an impedance mismatch at the interface of the TX antenna110with the body surface. When such a mismatch occurs, the electromagnetic waves sent from the RF pulse generator106are partially reflected at this interface, and this reflected energy propagates backward through the antenna feed. 
The dual directional coupler RF switch223may prevent the reflected RF energy from propagating back into the RF amplifier216, and may attenuate this reflected RF signal and send the attenuated signal as the reverse power signal to the feedback subsystem212. The feedback subsystem212can convert this high-frequency AC signal to a DC level that can be sampled and sent to the controller subsystem214. The controller subsystem214can then calculate the ratio of the amplitude of the reverse power signal to the amplitude of the forward power signal. The ratio of the amplitude of the reverse power signal to the amplitude of the forward power signal may indicate the severity of the impedance mismatch. In order to sense impedance mismatch conditions, the controller subsystem214can measure the reflected-power ratio in real time, and according to preset thresholds for this measurement, the controller subsystem214can modify the level of RF power generated by the RF pulse generator106. For example, for a moderate degree of reflected power the course of action can be for the controller subsystem214to increase the amplitude of RF power sent to the TX antenna110, as would be needed to compensate for slightly non-optimum but acceptable TX antenna coupling to the body. For higher ratios of reflected power, the course of action can be to prevent operation of the RF pulse generator106and set a fault code to indicate that the TX antenna110has little or no coupling with the body. This type of reflected-power fault condition can also be generated by a poor or broken connection to the TX antenna. In either case, it may be desirable to stop RF transmission when the reflected-power ratio is above a defined threshold, because internally reflected power can result in unwanted heating of internal components, and this fault condition means the system cannot deliver sufficient power to the implanted wireless stimulation device and thus cannot deliver therapy to the user. The controller242of the wireless stimulation device114may transmit informational signals, such as a telemetry signal, through the antenna238to communicate with the RF pulse generator module106during its receive cycle. For example, the telemetry signal from the wireless stimulation device114may be coupled to the modulated signal on the dipole antenna(s)238, during the on and off state of the transistor circuit to enable or disable a waveform that produces the corresponding RF bursts necessary to transmit to the external (or remotely implanted) pulse generator module106. The antenna(s)238may be connected to electrodes254in contact with tissue to provide a return path for the transmitted signal. An A/D converter (not shown) can be used to transfer stored data to a serialized pattern that can be transmitted on the pulse-modulated signal from the internal antenna(s)238of the wireless stimulation device114. A telemetry signal from the implanted wireless stimulation device module114may include stimulus parameters such as the power or the amplitude of the current that is delivered to the tissue from the electrodes. The feedback signal can be transmitted to the RF pulse generator module106to indicate the strength of the stimulus at the nerve bundle by means of coupling the signal to the implanted RX antenna238, which radiates the telemetry signal to the external (or remotely implanted) RF pulse generator module106. The feedback signal can include either or both an analog and a digital telemetry pulse-modulated carrier signal.
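The reflected-power handling just described maps naturally onto a small decision routine. The sketch below is a hypothetical illustration: the threshold values, the gain factor, and the returned action labels are assumptions, not values taken from the disclosure.

```python
# Illustrative sketch of reflected-power supervision: compare the reverse/forward
# power ratio against preset thresholds and decide how to adjust the transmitter.

MODERATE_REFLECTION = 0.2   # assumed threshold for acceptable-but-lossy coupling
FAULT_REFLECTION = 0.5      # assumed threshold for little or no antenna coupling

def supervise_tx(forward_power_w, reverse_power_w, current_amplitude):
    """Return (new_amplitude, status) based on the reflected-power ratio."""
    if forward_power_w <= 0:
        return 0.0, "fault: no forward power"
    ratio = reverse_power_w / forward_power_w
    if ratio >= FAULT_REFLECTION:
        # Stop transmission and flag poor/no coupling (or a broken antenna connection).
        return 0.0, "fault: TX antenna poorly coupled"
    if ratio >= MODERATE_REFLECTION:
        # Compensate for slightly non-optimal coupling by raising output amplitude.
        return current_amplitude * 1.1, "compensating"
    return current_amplitude, "ok"

print(supervise_tx(1.0, 0.05, 2.0))  # coupling acceptable, amplitude unchanged
print(supervise_tx(1.0, 0.30, 2.0))  # moderate reflection, amplitude raised
print(supervise_tx(1.0, 0.70, 2.0))  # fault, transmission stopped
```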
Data such as stimulation pulse parameters and measured characteristics of stimulator performance can be stored in an internal memory device within the implanted stimulation device114, and sent on the telemetry signal. The frequency of the carrier signal may be in the range of 300 MHz to 8 GHz (preferably between about 700 MHz and 5.8 GHz and more preferably between about 800 MHz and 1.3 GHz). In the feedback subsystem212, the telemetry signal can be demodulated using demodulator222and digitized by being processed through an analog to digital (A/D) converter220. The digital telemetry signal may then be routed to a CPU230with embedded code, with the option to reprogram, to translate the signal into a corresponding current measurement in the tissue based on the amplitude of the received signal. The CPU230of the controller subsystem214can compare the reported stimulus parameters to those held in local memory228to verify the wireless stimulation device114delivered the specified stimuli to tissue. For example, if the wireless stimulation device reports a lower current than was specified, the power level from the RF pulse generator module106can be increased so that the implanted wireless stimulation device114will have more available power for stimulation. The implanted wireless stimulation device114can generate telemetry data in real time, for example, at a rate of 8 Kbits per second. All feedback data received from the implanted module114can be logged against time and sampled to be stored for retrieval to a remote monitoring system accessible by the health care professional for trending and statistical correlations. The sequence of remotely programmable RF signals received by the internal antenna(s)238may be conditioned into waveforms that are controlled within the implantable wireless stimulation device114by the control subsystem242and routed to the appropriate electrodes254that are placed in proximity to the tissue to be stimulated. For instance, the RF signal transmitted from the RF pulse generator module106may be received by RX antenna238and processed by circuitry, such as waveform conditioning circuitry240, within the implanted wireless stimulation device module114to be converted into electrical pulses applied to the electrodes254through electrode interface252. In some implementations, the implanted wireless stimulation device114contains between two and sixteen electrodes254. The waveform conditioning circuitry240may include a rectifier244, which rectifies the signal received by the RX antenna238. The rectified signal may be fed to the controller242for receiving encoded instructions from the RF pulse generator module106. The rectified signal may also be fed to a charge balance component246that is configured to create one or more electrical pulses such that the one or more electrical pulses result in a substantially zero net charge at the one or more electrodes (that is, the pulses are charge balanced). The charge-balanced pulses are passed through the current limiter248to the electrode interface252, which applies the pulses to the electrodes254as appropriate. The current limiter248ensures the current level of the pulses applied to the electrodes254is not above a threshold current level. In some implementations, an amplitude (for example, current level, voltage level, or power level) of the received RF pulse directly determines the amplitude of the stimulus.
In this case, it may be particularly beneficial to include current limiter248to prevent excessive current or charge from being delivered through the electrodes, although current limiter248may be used in other implementations where this is not the case. Generally, for a given electrode having several square millimeters of surface area, it is the charge per phase that should be limited for safety (where the charge delivered by a stimulus phase is the integral of the current). But, in some cases, the limit can instead be placed on the current, where the maximum current multiplied by the maximum possible pulse duration is less than or equal to the maximum safe charge. More generally, the limiter248acts as a charge limiter that limits a characteristic (for example, current or duration) of the electrical pulses so that the charge per phase remains below a threshold level (typically, a safe-charge limit). In the event the implanted wireless stimulation device114receives a “strong” pulse of RF power sufficient to generate a stimulus that would exceed the predetermined safe-charge limit, the current limiter248can automatically limit or “clip” the stimulus phase to maintain the total charge of the phase within the safety limit. The current limiter248may be a passive current limiting component that cuts the signal to the electrodes254once the safe current limit (the threshold current level) is reached. Alternatively, or additionally, the current limiter248may communicate with the electrode interface252to turn off all electrodes254to prevent tissue damaging current levels. A clipping event may trigger a current limiter feedback control mode. The action of clipping may cause the controller to send a threshold power data signal to the pulse generator106. The feedback subsystem212detects the threshold power signal and demodulates the signal into data that is communicated to the controller subsystem214. The controller subsystem214algorithms may act on this current-limiting condition by specifically reducing the RF power generated by the RF pulse generator, or cutting the power completely. In this way, the pulse generator106can reduce the RF power delivered to the body if the implanted wireless stimulation device114reports it is receiving excess RF power. The controller250of the stimulator205may communicate with the electrode interface252to control various aspects of the electrode setup and pulses applied to the electrodes254. The electrode interface252may act as a multiplexer and control the polarity and switching of each of the electrodes254. For instance, in some implementations, the wireless stimulator114has multiple electrodes254in contact with tissue, and for a given stimulus the RF pulse generator module106can arbitrarily assign one or more electrodes to 1) act as a stimulating electrode, 2) act as a return electrode, or 3) be inactive, by communication of an assignment sent wirelessly with the parameter instructions, which the controller250uses to set the electrode interface252as appropriate. It may be physiologically advantageous to assign, for example, one or two electrodes as stimulating electrodes and to assign all remaining electrodes as return electrodes. Also, in some implementations, for a given stimulus pulse, the controller250may control the electrode interface252to divide the current arbitrarily (or according to instructions from pulse generator module106) among the designated stimulating electrodes.
This control over electrode assignment and current control can be advantageous because in practice the electrodes254may be spatially distributed along various neural structures, and through strategic selection of the stimulating electrode location and the proportion of current specified for each location, the aggregate current distribution in tissue can be modified to selectively activate specific neural targets. This strategy of current steering can improve the therapeutic effect for the patient. In another implementation, the time course of stimuli may be arbitrarily manipulated. A given stimulus waveform may be initiated at a time T_start and terminated at a time T_final, and this time course may be synchronized across all stimulating and return electrodes; further, the frequency of repetition of this stimulus cycle may be synchronous for all the electrodes. However, controller250, on its own or in response to instructions from pulse generator106, can control electrode interface252to designate one or more subsets of electrodes to deliver stimulus waveforms with non-synchronous start and stop times, and the frequency of repetition of each stimulus cycle can be arbitrarily and independently specified. For example, a stimulator having eight electrodes may be configured to have a subset of five electrodes, called set A, and a subset of three electrodes, called set B. Set A might be configured to use two of its electrodes as stimulating electrodes, with the remainder being return electrodes. Set B might be configured to have just one stimulating electrode. The controller250could then specify that set A deliver a stimulus phase with 3 mA current for a duration of 200 us followed by a 400 us charge-balancing phase. This stimulus cycle could be specified to repeat at a rate of 60 cycles per second. Then, for set B, the controller250could specify a stimulus phase with 1 mA current for a duration of 500 us followed by an 800 us charge-balancing phase. The repetition rate for the set-B stimulus cycle can be set independently of set A; for example, it could be specified at 25 cycles per second. Or, if the controller250were configured to match the repetition rate for set B to that of set A, the controller250can specify the relative start times of the stimulus cycles to be coincident in time or to be arbitrarily offset from one another by some delay interval. In some implementations, the controller250can arbitrarily shape the stimulus waveform amplitude, and may do so in response to instructions from pulse generator106. The stimulus phase may be delivered by a constant-current source or a constant-voltage source, and this type of control may generate characteristic waveforms that are static, e.g., a constant-current source generates a characteristic rectangular pulse in which the current waveform has a very steep rise, a constant amplitude for the duration of the stimulus, and then a very steep return to baseline. Alternatively, or additionally, the controller250can increase or decrease the level of current at any time during the stimulus phase and/or during the charge-balancing phase. Thus, in some implementations, the controller250can deliver arbitrarily shaped stimulus waveforms such as a triangular pulse, sinusoidal pulse, or Gaussian pulse, for example. Similarly, the charge-balancing phase can be arbitrarily amplitude-shaped, and similarly a leading anodic pulse (prior to the stimulus phase) may also be amplitude-shaped.
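The set A / set B example above can be written down as a small data structure. The sketch below is illustrative only; the classes and the scheduling routine are assumptions, and the numbers are simply those quoted in the example.

```python
# Illustrative sketch of independently scheduled stimulus programs for two
# electrode subsets (set A and set B from the example above).
from dataclasses import dataclass

@dataclass
class StimulusProgram:
    stimulating: tuple      # electrode indices acting as stimulating electrodes
    returns: tuple          # electrode indices acting as return electrodes
    amplitude_ma: float
    stimulus_us: int        # stimulus-phase duration
    balance_us: int         # charge-balancing-phase duration
    rate_hz: float          # repetition rate of the stimulus cycle
    offset_us: int = 0      # optional start-time offset relative to other sets

set_a = StimulusProgram(stimulating=(0, 1), returns=(2, 3, 4),
                        amplitude_ma=3.0, stimulus_us=200, balance_us=400, rate_hz=60)
set_b = StimulusProgram(stimulating=(5,), returns=(6, 7),
                        amplitude_ma=1.0, stimulus_us=500, balance_us=800, rate_hz=25)

def cycle_start_times_us(program, horizon_us):
    """Start times of each stimulus cycle within a scheduling horizon."""
    period_us = 1e6 / program.rate_hz
    t, starts = program.offset_us, []
    while t < horizon_us:
        starts.append(round(t))
        t += period_us
    return starts

# Set A repeats every ~16.7 ms, set B every 40 ms, independently of one another.
print(cycle_start_times_us(set_a, 100_000)[:4])
print(cycle_start_times_us(set_b, 100_000)[:4])
```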
As described above, the wireless stimulation device114may include a charge-balancing component246. Generally, for constant current stimulation pulses, pulses should be charge balanced by having the amount of cathodic current equal the amount of anodic current, which is typically called biphasic stimulation. Charge density is the amount of current multiplied by the duration for which it is applied, per unit of electrode surface area, and is typically expressed in the units uC/cm2. In order to avoid irreversible electrochemical reactions such as pH change, electrode dissolution, and tissue destruction, no net charge should appear at the electrode-electrolyte interface, and it is generally acceptable to have a charge density less than 30 uC/cm2. Biphasic stimulating current pulses ensure that no net charge appears at the electrode after each stimulation cycle and the electrochemical processes are balanced to prevent net DC currents. The wireless stimulation device114may be designed to ensure that the resulting stimulus waveform has a net zero charge. Charge balanced stimuli are thought to have minimal damaging effects on tissue by reducing or eliminating electrochemical reaction products created at the electrode-tissue interface. A stimulus pulse may have a negative voltage or current, called the cathodic phase of the waveform. Stimulating electrodes may have both cathodic and anodic phases at different times during the stimulus cycle. An electrode that delivers a negative current with sufficient amplitude to stimulate adjacent neural tissue is called a “stimulating electrode.” During the stimulus phase the stimulating electrode acts as a current sink. One or more additional electrodes act as a current source and these electrodes are called “return electrodes.” Return electrodes are placed elsewhere in the tissue at some distance from the stimulating electrodes. When a typical negative stimulus phase is delivered to tissue at the stimulating electrode, the return electrode has a positive stimulus phase. During the subsequent charge-balancing phase, the polarities of each electrode are reversed. In some implementations, the charge balance component246uses a blocking capacitor(s) placed electrically in series with the stimulating electrodes and body tissue, between the point of stimulus generation within the stimulator circuitry and the point of stimulus delivery to tissue. In this manner, a resistor-capacitor (RC) network may be formed. In a multi-electrode stimulator, one charge-balance capacitor(s) may be used for each electrode or a centralized capacitor(s) may be used within the stimulator circuitry prior to the point of electrode selection. The RC network can block direct current (DC); however, it can also prevent low-frequency alternating current (AC) from passing to the tissue. The frequency below which the series RC network essentially blocks signals is commonly referred to as the cutoff frequency, and in one embodiment the design of the stimulator system may ensure the cutoff frequency is not above the fundamental frequency of the stimulus waveform. In this embodiment as disclosed herein, the wireless stimulator may have a charge-balance capacitor with a value chosen according to the measured series resistance of the electrodes and the tissue environment in which the stimulator is implanted. By selecting a specific capacitance value, the cutoff frequency of the RC network in this embodiment is at or below the fundamental frequency of the stimulus pulse.
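The capacitor-selection rule just described (cutoff frequency of the series RC network at or below the stimulus fundamental) follows from f_c = 1/(2*pi*R*C), rearranged for C. The sketch below assumes a measured series resistance and a stimulus fundamental frequency; both numbers are illustrative, not values from the disclosure.

```python
# Minimal sketch: choose a charge-balance capacitance so that the RC cutoff
# frequency sits at or below the fundamental frequency of the stimulus waveform.
import math

def min_balance_capacitance(series_resistance_ohm, fundamental_hz):
    """Smallest C (in farads) with cutoff f_c = 1/(2*pi*R*C) <= fundamental_hz."""
    return 1.0 / (2.0 * math.pi * series_resistance_ohm * fundamental_hz)

# Example with assumed values: ~1 kOhm electrode/tissue series resistance and a
# 60 Hz stimulus repetition rate taken as the stimulus fundamental.
R = 1_000.0      # ohms (assumed measured series resistance)
f0 = 60.0        # Hz  (assumed stimulus fundamental)
C = min_balance_capacitance(R, f0)
print(f"C >= {C * 1e6:.1f} uF")   # about 2.7 uF for these assumed values
```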
In other implementations, the cutoff frequency may be chosen to be at or above the fundamental frequency of the stimulus, and in this scenario the stimulus waveform created prior to the charge-balance capacitor, called the drive waveform, may be designed to be non-stationary, where the envelope of the drive waveform is varied during the duration of the drive pulse. For example, in one embodiment, the amplitude of the drive waveform is set at an initial amplitude Vi, and the amplitude is increased during the duration of the pulse until it reaches a final value k*Vi. By changing the amplitude of the drive waveform over time, the shape of the stimulus waveform passed through the charge-balance capacitor is also modified. The shape of the stimulus waveform may be modified in this fashion to create a physiologically advantageous stimulus. In some implementations, the wireless stimulation device module114may create a drive-waveform envelope that follows the envelope of the RF pulse received by the receiving dipole antenna(s)238. In this case, the RF pulse generator module106can directly control the envelope of the drive waveform within the wireless stimulation device114, and thus no energy storage may be required inside the stimulator itself. In this implementation, the stimulator circuitry may modify the envelope of the drive waveform or may pass it directly to the charge-balance capacitor and/or electrode-selection stage. In some implementations, the implanted wireless stimulation device114may deliver a single-phase drive waveform to the charge balance capacitor or it may deliver multiphase drive waveforms. In the case of a single-phase drive waveform, for example, a negative-going rectangular pulse, this pulse comprises the physiological stimulus phase, and the charge-balance capacitor is polarized (charged) during this phase. After the drive pulse is completed, the charge balancing function is performed solely by the passive discharge of the charge-balance capacitor, where it dissipates its charge through the tissue in an opposite polarity relative to the preceding stimulus. In one implementation, a resistor within the stimulator facilitates the discharge of the charge-balance capacitor. In some implementations using a passive discharge phase, the capacitor may be allowed to discharge virtually completely prior to the onset of the subsequent stimulus pulse. In the case of multiphase drive waveforms, the wireless stimulator may perform internal switching to pass negative-going or positive-going pulses (phases) to the charge-balance capacitor. These pulses may be delivered in any sequence and with varying amplitudes and waveform shapes to achieve a desired physiological effect. For example, the stimulus phase may be followed by an actively driven charge-balancing phase, and/or the stimulus phase may be preceded by an opposite phase. Preceding the stimulus with an opposite-polarity phase, for example, can have the advantage of reducing the amplitude of the stimulus phase required to excite tissue. In some implementations, the amplitude and timing of stimulus and charge-balancing phases are controlled by the amplitude and timing of RF pulses from the RF pulse generator module106, and in others this control may be administered internally by circuitry onboard the wireless stimulation device114, such as controller250. In the case of onboard control, the amplitude and timing may be specified or modified by data commands delivered from the pulse generator module106.
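Tying together the charge-per-phase limit discussed earlier and the roughly 30 uC/cm2 charge-density guideline quoted above, the sketch below checks a proposed constant-current stimulus phase against both limits and clips the phase duration if needed. The electrode area and the per-phase charge limit are assumed example values, and the routine is an illustration rather than the disclosed limiter circuit.

```python
# Illustrative sketch of a charge limiter: for a roughly constant-current
# stimulus phase, charge per phase = current x duration. Clip the duration so
# the charge stays under a safe-charge limit and the charge density stays under
# ~30 uC/cm^2 (the guideline quoted above).

MAX_CHARGE_DENSITY_UC_PER_CM2 = 30.0

def clip_phase(current_ma, duration_us, electrode_area_cm2, max_charge_uc):
    """Return a (possibly shortened) phase duration in microseconds that respects
    both the per-phase charge limit and the charge-density guideline."""
    charge_uc = current_ma * duration_us * 1e-3          # mA * us -> uC
    density_limit_uc = MAX_CHARGE_DENSITY_UC_PER_CM2 * electrode_area_cm2
    allowed_uc = min(max_charge_uc, density_limit_uc)
    if charge_uc <= allowed_uc:
        return duration_us
    # Clip the stimulus phase (as the current limiter would) to stay within limits.
    return allowed_uc / (current_ma * 1e-3)

# Example with assumed values: 3 mA for 800 us on a 0.06 cm^2 electrode.
print(clip_phase(current_ma=3.0, duration_us=800, electrode_area_cm2=0.06,
                 max_charge_uc=2.0))   # -> 600.0 us, clipped from 800 us
```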
FIG.3is a flowchart showing an example of an operation of the wireless stimulation system. In block302, the wireless stimulation device114is implanted in proximity to nerve bundles and is coupled to the electric field produced by the TX antenna110. That is, the pulse generator module106and the TX antenna110are positioned in such a way (for example, in proximity to the patient) that the TX antenna110is electrically radiatively coupled with the implanted RX antenna238of the wireless stimulation device114. In certain implementations, both the antenna110and the RF pulse generator106are located subcutaneously. In other implementations, the antenna110and the RF pulse generator106are located external to the patient's body. In this case, the TX antenna110may be coupled directly to the patient's skin. Energy from the RF pulse generator is radiated to the implanted wireless stimulation device114from the antenna110through tissue, as shown in block304. The energy radiated may be controlled by the Patient/Clinician Parameter inputs in block301. In some instances, the parameter settings can be adjusted in an open loop fashion by the patient or clinician, who would adjust the parameter inputs to the system in block301. The implanted wireless stimulation device114uses the received energy to generate electrical pulses to be applied to the neural tissue through the electrodes254. For instance, the wireless stimulation device114may contain circuitry that rectifies the received RF energy and conditions the waveform to charge balance the energy delivered to the electrodes to stimulate the targeted nerves or tissues, as shown in block306. The implanted wireless stimulation device114communicates with the pulse generator106by using antenna238to send a telemetry signal, as shown in block308. The telemetry signal may contain information about parameters of the electrical pulses applied to the electrodes, such as the impedance of the electrodes, whether the safe current limit has been reached, or the amplitude of the current that is presented to the tissue from the electrodes. In block310, the RF pulse generator106detects the received telemetry signal and amplifies, filters, and demodulates it using amplifier226, filter224, and demodulator222, respectively. The A/D converter220then digitizes the resulting analog signal, as shown in block312. The digital telemetry signal is routed to CPU230, which determines whether the parameters of the signal sent to the wireless stimulation device114need to be adjusted based on the digital telemetry signal. For instance, in block314, the CPU230compares the information of the digital signal to a look-up table, which may indicate an appropriate change in stimulation parameters. The indicated change may be, for example, a change in the current level of the pulses applied to the electrodes. As a result, the CPU may change the output power of the signal sent to the wireless stimulation device114so as to adjust the current applied by the electrodes254, as shown in block316. Thus, for instance, the CPU230may adjust parameters of the signal sent to the wireless stimulation device114every cycle to match the desired current amplitude setting programmed by the patient, as shown in block318. The status of the stimulator system may be sampled in real time at a rate of 8 Kbits per second of telemetry data.
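Blocks310-318describe a simple closed loop: compare the current reported by the implant with the programmed target and nudge the transmitted power accordingly. The sketch below is a hypothetical illustration of that loop; the proportional gain, the power limits, and the example numbers are assumptions rather than values from the disclosure.

```python
# Illustrative sketch of the closed-loop adjustment in blocks 310-318: compare
# the telemetered stimulus current against the programmed target and adjust the
# RF output power proportionally, within fixed limits.

def adjust_tx_power(tx_power_w, reported_current_ma, target_current_ma,
                    gain_w_per_ma=0.05, min_w=0.0, max_w=1.0):
    """Proportional correction of transmit power based on the current error."""
    error_ma = target_current_ma - reported_current_ma
    new_power = tx_power_w + gain_w_per_ma * error_ma
    return max(min_w, min(max_w, new_power))

# One cycle of the loop with made-up numbers: the implant reports 4.2 mA while
# the patient-programmed target is 5.0 mA, so power is nudged upward.
tx_power = 0.50
tx_power = adjust_tx_power(tx_power, reported_current_ma=4.2, target_current_ma=5.0)
print(round(tx_power, 3))   # 0.54
```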
All feedback data received from the wireless stimulation device114can be maintained against time and sampled per minute to be stored for download or upload to a remote monitoring system accessible by the health care professional for trending and statistical correlations in block318. If operated in an open loop fashion, the stimulator system operation may be reduced to just the functional elements shown in blocks302,304,306, and308, and the patient uses their judgment to adjust parameter settings rather than relying on the closed-loop feedback from the implanted device. FIG.4is a circuit diagram showing an example of a wireless neural stimulator, such as wireless stimulation device114. This example contains paired electrodes, comprising cathode electrode(s)408and anode electrode(s)410, as shown. When energized, the charged electrodes create a volume conduction field of current density within the tissue. In this implementation, the wireless energy is received through dipole antenna(s)238. At least four diodes are connected together to form a full wave bridge rectifier402attached to the dipole antenna(s)238. Each diode, up to 100 micrometers in length, uses a junction potential to prevent the flow of negative electrical current, from cathode to anode, from passing through the device when said current does not exceed the reverse threshold. For neural stimulation via wireless power, transmitted through tissue, the natural inefficiency of the lossy material may result in a low threshold voltage. In this implementation, a zero biased diode rectifier results in a low output impedance for the device. A resistor404and a smoothing capacitor406are placed across the output nodes of the bridge rectifier to discharge the electrodes to the ground of the bridge anode. The rectification bridge402includes two branches of diode pairs connecting an anode-to-anode and then cathode to cathode. The electrodes408and410are connected to the output of the charge balancing circuit246. FIG.5is a circuit diagram of another example of a wireless stimulation device114. The example shown inFIG.5includes multiple-electrode control and may employ full closed loop control. The wireless stimulation device includes an electrode array254in which the polarity of the electrodes can be assigned as cathodic or anodic, and for which the electrodes can be alternatively not powered with any energy. When energized, the charged electrodes create a volume conduction field of current density within the tissue. In this implementation, the wireless energy is received by the device through the dipole antenna(s)238. The electrode array254is controlled through an on-board controller circuit242that sends the appropriate bit information to the electrode interface252in order to set the polarity of each electrode in the array, as well as power to each individual electrode. The lack of power to a specific electrode would set that electrode in a functional OFF position. In another implementation (not shown), the amount of current sent to each electrode is also controlled through the controller242. The current, polarity, and power state parameter data, shown as the controller output, is sent back to the antenna(s)238for telemetry transmission back to the pulse generator module106. The controller242also includes the functionality of current monitoring and sets a bit register counter so that the status of total current drawn can be sent back to the pulse generator module106.
At least four diodes can be connected together to form a full wave bridge rectifier302attached to the dipole antenna(s)238. Each diode, up to 100 micrometers in length, uses a junction potential to prevent the flow of negative electrical current, from cathode to anode, from passing through the device when said current does not exceed the reverse threshold. For neural stimulation via wireless power, transmitted through tissue, the natural inefficiency of the lossy material may result in a low threshold voltage. In this implementation, a zero biased diode rectifier results in a low output impedance for the device. A resistor404and a smoothing capacitor406are placed across the output nodes of the bridge rectifier to discharge the electrodes to the ground of the bridge anode. The rectification bridge402may include two branches of diode pairs connecting an anode-to-anode and then cathode to cathode. The electrode polarity outputs, both cathode408and anode410are connected to the outputs formed by the bridge connection. Charge balancing circuitry246and current limiting circuitry248are placed in series with the outputs. FIG.6is a block diagram showing an example of control functions605and feedback functions630of an implantable wireless stimulation device600, such as the ones described above or further below. An example implementation may be a wireless stimulation device module114, as discussed above in association withFIG.2. Control functions605include functions610for polarity switching of the electrodes and functions620for power-on reset. Polarity switching functions610may employ, for example, a polarity routing switch network to assign polarities to electrodes254. The assignment of polarity to an electrode may, for instance, be one of: a cathode (negative polarity), an anode (positive polarity), or a neutral (off) polarity. The polarity assignment information for each of the electrodes254may be contained in the input signal received by implantable wireless stimulation device600through Rx antenna238from RF pulse generator module106. Because a programmer module102may control RF pulse generator module106, the polarity of electrodes254may be controlled remotely by a programmer through programmer module102, as shown inFIG.2. Power-on reset functions620may reset the polarity assignment of each electrode immediately on each power-on event. As will be described in further detail below, this reset operation may cause RF pulse generator module106to transmit the polarity assignment information to the implantable wireless stimulation device600. Once the polarity assignment information is received by the implantable wireless stimulation device600, the polarity assignment information may be stored in a register file, or other short-term memory component. Thereafter the polarity assignment information may be used to configure the polarity assignment of each electrode. If the polarity assignment information transmitted in response to the reset encodes the same polarity state as before the power-on event, then the polarity state of each electrode can be maintained before and after each power-on event. Feedback functions630include functions640for monitoring delivered power to electrodes254and functions650for making impedance diagnosis of electrodes254. For example, delivered power functions640may provide data encoding the amount of power being delivered from electrodes254to the excitable tissue and tissue impedance diagnostic functions650may provide data encoding the diagnostic information of tissue impedance. 
The tissue impedance is the electrical impedance of the tissue as seen between the negative and positive electrodes when a stimulation current is being delivered between them. Feedback functions630may additionally include tissue depth estimate functions660to provide data indicating the overall tissue depth that the input radio frequency (RF) signal from the pulse generator module, such as, for example, RF pulse generator module106, has penetrated before reaching the implanted antenna, such as, for example, RX antenna238, within the wireless implantable neural stimulator600, such as, for example, implanted wireless stimulation device114. For instance, the tissue depth estimate may be provided by comparing the power of the received input signal to the power of the RF pulse transmitted by the RF pulse generator106. The ratio of the power of the received input signal to the power of the RF pulse transmitted by the RF pulse generator106may indicate an attenuation caused by wave propagation through the tissue. For example, the second harmonic described below may be received by the RF pulse generator106and used with the power of the input signal sent by the RF pulse generator to determine the tissue depth. The attenuation may be used to infer the overall depth of implantable wireless stimulation device600underneath the skin. The data from blocks640,650, and660may be transmitted, for example, through Tx antenna110to an implantable RF pulse generator106, as illustrated inFIGS.1and2. As discussed above in association withFIGS.1,2,4, and5, an implantable wireless stimulation device600may utilize rectification circuitry to convert the input signal (e.g., having a carrier frequency within a range from about 300 MHz to about 8 GHz) to direct current (DC) power to drive the electrodes254. Some implementations may provide the capability to regulate the DC power remotely. Some implementations may further provide different amounts of power to different electrodes, as discussed in further detail below. FIG.7is a schematic showing an example of an implantable wireless stimulation device700with components to implement control and feedback functions as discussed above in association withFIG.6. An RX antenna705receives the input signal. The RX antenna705may be embedded as a dipole, microstrip, folded dipole, or other antenna configuration other than a coiled configuration, as described above. The input signal has a carrier frequency in the GHz range and contains electrical energy for powering the wireless implantable neural stimulator700and for providing stimulation pulses to electrodes254. Once received by the antenna705, the input signal is routed to power management circuitry710. Power management circuitry710is configured to rectify the input signal and convert it to a DC power source. For example, the power management circuitry710may include a diode rectification bridge such as the diode rectification bridge402illustrated inFIG.4. The DC power source provides power to stimulation circuitry711and logic power circuitry713. The rectification may utilize one or more full wave diode bridge rectifiers within the power management circuitry710. In one implementation, a resistor can be placed across the output nodes of the bridge rectifier to discharge the electrodes to the ground of the bridge anode, as illustrated by the shunt resistor404inFIG.7. Turning momentarily toFIG.8, a schematic of an example of a polarity routing switch network800is shown.
As discussed above, the cathodic (−) energy and the anodic energy are received at input1(block722) and input2(block723), respectively. Polarity routing switch network800has one of its outputs coupled to an electrode of electrodes254which can include as few as two electrodes, or as many as sixteen electrodes. Eight electrodes are shown in this implementation as an example. Polarity routing switch network800is configured to either individually connect each output to one of input1or input2, or disconnect the output from either of the inputs. This selects the polarity for each individual electrode of electrodes254as one of: neutral (off), cathode (negative), or anode (positive). Each output is coupled to a corresponding three-state switch830for setting the connection state of the output. Each three-state switch is controlled by one or more of the bits from the selection input850. In some implementations, selection input850may allocate more than one bits to each three-state switch. For example, two bits may encode the three-state information. Thus, the state of each output of polarity routing switch device800can be controlled by information encoding the bits stored in the register732, which may be set by polarity assignment information received from the remote RF pulse generator module106, as described further below. Returning toFIG.7, power and impedance sensing circuitry may be used to determine the power delivered to the tissue and the impedance of the tissue. For example, a sensing resistor718may be placed in serial connection with the anodic branch714. Current sensing circuit719senses the current across the resistor718and voltage sensing circuit720senses the voltage across the resistor. The measured current and voltage may correspond to the actual current and voltage applied by the electrodes to the tissue. As described below, the measured current and voltage may be provided as feedback information to RF pulse generator module106. The power delivered to the tissue may be determined by integrating the product of the measured current and voltage over the duration of the waveform being delivered to electrodes254. Similarly, the impedance of the tissue may be determined based on the measured voltage being applied to the electrodes and the current being applied to the tissue. Alternative circuitry (not shown) may also be used in lieu of the sensing resistor718, depending on implementation of the feature and whether both impedance and power feedback are measured individually, or combined. The measurements from the current sensing circuitry719and the voltage sensing circuitry720may be routed to a voltage controlled oscillator (VCO)733or equivalent circuitry capable of converting from an analog signal source to a carrier signal for modulation. VCO733can generate a digital signal with a carrier frequency. The carrier frequency may vary based on analog measurements such as, for example, a voltage, a differential of a voltage and a power, etc. VCO733may also use amplitude modulation or phase shift keying to modulate the feedback information at the carrier frequency. The VCO or the equivalent circuit may be generally referred to as an analog controlled carrier modulator. The modulator may transmit information encoding the sensed current or voltage back to RF pulse generator106. Antenna725may transmit the modulated signal, for example, in the GHz frequency range, back to the RF pulse generator module106. In some embodiments, antennas705and725may be the same physical antenna. 
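The delivered-power and impedance computations described above (integrating the product of the sensed current and voltage over the waveform, and dividing voltage by current) can be written compactly. The Python sketch below uses a hypothetical sampled rectangular pulse; the sampling details and values are assumptions.

    import numpy as np

    def delivered_power_and_impedance(v_samples, i_samples, dt):
        # Energy is the integral of v(t)*i(t) over the waveform; average power divides by duration.
        energy_j = float(np.sum(v_samples * i_samples) * dt)
        avg_power_w = energy_j / (len(v_samples) * dt)
        # Tissue impedance follows from the measured voltage and current (Ohm's law).
        nonzero = i_samples != 0
        impedance_ohm = float(np.mean(v_samples[nonzero] / i_samples[nonzero]))
        return avg_power_w, impedance_ohm

    # Hypothetical 200-microsecond, 2 mA rectangular pulse into roughly 1 kOhm of tissue.
    dt = 1e-6
    i = np.full(200, 2e-3)
    v = i * 1000.0
    print(delivered_power_and_impedance(v, i, dt))   # ~ (0.004 W, 1000 ohms)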
In other embodiments, antennas705and725may be separate physical antennas. In the embodiments of separate antennas, antenna725may operate at a resonance frequency that is higher than the resonance frequency of antenna705to send stimulation feedback to RF pulse generator module106. In some embodiments, antenna725may also operate at the higher resonance frequency to receive data encoding the polarity assignment information from RF pulse generator module106. Antenna725may include a telemetry antenna725which may route received data, such as polarity assignment information, to the stimulation feedback circuit730. The encoded polarity assignment information may be on a band in the GHz range. The received data may be demodulated by demodulation circuitry731and then stored in the register file732. The register file732may be a volatile memory. Register file732may be an 8-channel memory bank that can store, for example, several bits of data for each channel to be assigned a polarity. Some embodiments may have no register file, while some embodiments may have a register file up to 64 bits in size. The information encoded by these bits may be sent as the polarity selection signal to polarity routing switch network721, as indicated by arrow734. The bits may encode the polarity assignment for each output of the polarity routing switch network as one of: + (positive), − (negative), or 0 (neutral). Each output connects to one electrode and the channel setting determines whether the electrode will be set as an anode (positive), cathode (negative), or off (neutral). Returning to power management circuitry710, in some embodiments, approximately 90% of the energy received is routed to the stimulation circuitry711and less than 10% of the energy received is routed to the logic power circuitry713. Logic power circuitry713may power the control components for polarity and telemetry. In some implementations, the power circuitry713, however, does not provide the actual power to the electrodes for stimulating the tissues. In certain embodiments, the energy leaving the logic power circuitry713is sent to a capacitor circuit716to store a certain amount of readily available energy. The voltage of the stored charge in the capacitor circuit716may be denoted as Vdc. Subsequently, this stored energy is used to power a power-on reset circuit716configured to send a reset signal on a power-on event. If the wireless implantable neural stimulator700loses power for a certain period of time, for example, in the range from about 1 millisecond to over 10 milliseconds, the contents in the register file732and polarity setting on polarity routing switch network721may be zeroed. The implantable wireless stimulation device700may lose power, for example, when it becomes less aligned with RF pulse generator module106. Using this stored energy, power-on reset circuit740may provide a reset signal as indicated by arrow717. This reset signal may cause stimulation feedback circuit730to notify RF pulse generator module106of the loss of power. For example, stimulation feedback circuit730may transmit a telemetry feedback signal to RF pulse generator module106as a status notification of the power outage. This telemetry feedback signal may be transmitted in response to the reset signal and immediately after power is back on wireless stimulation device700. RF pulse generator module106may then transmit one or more telemetry packets to implantable wireless stimulation device. 
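One plausible way to hold the per-electrode polarity assignments in a small register, consistent with the two-bits-per-channel encoding mentioned above, is sketched below in Python. The specific bit codes and packing order are assumptions; the disclosure does not define them.

    # Two assumed bit codes per channel: 0b00 = off, 0b01 = anode (+), 0b10 = cathode (-).
    CODES = {"off": 0b00, "anode": 0b01, "cathode": 0b10}
    NAMES = {v: k for k, v in CODES.items()}

    def pack_polarities(polarities):
        """Pack an 8-entry polarity list into a 16-bit register value."""
        word = 0
        for channel, state in enumerate(polarities):
            word |= CODES[state] << (2 * channel)
        return word

    def unpack_polarities(word, n_channels=8):
        return [NAMES[(word >> (2 * ch)) & 0b11] for ch in range(n_channels)]

    settings = ["cathode", "cathode", "off", "off", "off", "off", "anode", "anode"]
    reg = pack_polarities(settings)
    assert unpack_polarities(reg) == settings
    print(f"register value: 0x{reg:04X}")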
The telemetry packets contain polarity assignment information, which may be saved to register file732and may be sent to polarity routing switch network721. Thus, polarity assignment information in register file732may be recovered from telemetry packets transmitted by RF pulse generator module106and the polarity assignment for each output of polarity routing switch network721may be updated accordingly based on the polarity assignment information. The telemetry antenna725may transmit the telemetry feedback signal back to RF pulse generator module106at a frequency higher than the characteristic frequency of an RX antenna705. In one implementation, the telemetry antenna725can have a heightened resonance frequency that is the second harmonic of the characteristic frequency of RX antenna705. For example, the second harmonic may be utilized to transmit power feedback information regarding an estimate of the amount of power being received by the electrodes. The feedback information may then be used by the RF pulse generator in determining any adjustment of the power level to be transmitted by the RF pulse generator106. In a similar manner, the second harmonic energy can be used to detect the tissue depth. The second harmonic transmission can be detected by an external antenna, for example, on RF pulse generator module106that is tuned to the second harmonic. As a general matter, power management circuitry710may contain rectifying circuits that are non-linear device capable of generating harmonic energies from input signal. Harvesting such harmonic energy for transmitting telemetry feedback signal could improve the efficiency of implantable wireless stimulation device700. FIG.9Ais a diagram of an example implementation of a microwave field stimulator (MFS)902as part of a stimulation system utilizing an implantable wireless stimulation device922. In this example, the MFS902is external to a patient's body and may be placed within in close proximity, for example, within 3 feet, to an implantable wireless stimulation device922. The RF pulse generator module106may be one example implementation of MFS902. MFS902may be generally known as a controller module. The implantable wireless stimulation device922is a passive device. The implantable wireless stimulation device922does not have its own independent power source, rather it receives power for its operation from transmission signals emitted from a TX antenna powered by the MFS902, as discussed above. In certain embodiments, the MFS902may communicate with a programmer912. The programmer912may be a mobile computing device, such as, for example, a laptop, a smart phone, a tablet, etc. The communication may be wired, using for example, a USB or firewire cable. The communication may also be wireless, utilizing for example, a bluetooth protocol implemented by a transmitting blue tooth module904, which communicates with the host bluetooth module914within the programmer912. The MFS902may additionally communicate with wireless stimulation device922by transmitting a transmission signal through a Tx antenna907coupled to an amplifier906. The transmission signal may propagate through skin and underlying tissues to arrive at the Rx antenna923of the wireless stimulation device922. In some implementations, the wireless stimulation device922may transmit a telemetry feedback signal back to microwave field stimulator902. The microwave field stimulator902may include a microcontroller908configured to manage the communication with a programmer912and generate an output signal. 
The output signal may be used by the modulator909to modulate a RF carrier signal. The frequency of the carrier signal may be in the microwave range, for example, from about 300 MHz to about 8 GHz, preferably from about 800 MHz to 1.3 GHz. The modulated RF carrier signal may be amplified by an amplifier906to provide the transmission signal for transmission to the wireless stimulation device922through a TX antenna907. FIG.9Bis a diagram of another example of an implementation of a microwave field stimulator902as part of a stimulation system utilizing a wireless stimulation device922. In this example, the microwave field stimulator902may be embedded in the body of the patient, for example, subcutaneously. The embedded microwave field stimulator902may receive power from a detached, remote wireless battery charger932. The power from the wireless battery charger932to the embedded microwave field stimulator902may be transmitted at a frequency in the MHz or GHz range. The microwave field stimulator902shall be embedded subcutaneously at a very shallow depth (e.g., less than 1 cm), and alternative coupling methods may be used to transfer energy from wireless battery charger932to the embedded MFS902in the most efficient manner as is well known in the art. In some embodiments, the microwave field stimulator902may be adapted for placement at the epidural layer of a spinal column, near or on the dura of the spinal column, in tissue in close proximity to the spinal column, in tissue located near a dorsal horn, in dorsal root ganglia, in one or more of the dorsal roots, in dorsal column fibers, or in peripheral nerve bundles leaving the dorsal column of the spine. In this embodiment, the microwave field stimulator902shall transmit power and parameter signals to a passive Tx antenna also embedded subcutaneously, which shall be coupled to the RX antenna within the wireless stimulation device922. The power required in this embodiment is substantially lower since the TX antenna and the RX antenna are already in body tissue and there is no requirement to transmit the signal through the skin. FIG.10is a detailed diagram of an example microwave field stimulator902. A microwave field stimulator902may include a microcontroller908, a telemetry feedback module1002, and a power management module1004. The microwave field stimulator902has a two-way communication schema with a programmer912, as well as with a communication or telemetry antenna1006. The microwave field stimulator902sends output power and data signals through a TX antenna1008. The microcontroller908may include a storage device1014, a bluetooth interface1013, a USB interface1012, a power interface1011, an analog-to-digital converter (ADC)1016, and a digital to analog converter (DAC)1015. Implementations of a storage device1014may include non-volatile memory, such as, for example, static electrically erasable programmable read-only memory (SEEPROM) or NAND flash memory. A storage device1014may store waveform parameter information for the microcontroller908to synthesize the output signal used by modulator909. The stimulation waveform may include multiple pulses. The waveform parameter information may include the shape, duration, amplitude of each pulse, as well as pulse repetition frequency. A storage device1014may additionally store polarity assignment information for each electrode of the wireless stimulation device922. 
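As a simple illustration of the modulation step described above, the Python sketch below amplitude-modulates an assumed 915 MHz carrier (within the stated 300 MHz to 8 GHz range) with a short rectangular envelope standing in for the microcontroller's output signal; all numbers are illustrative only.

    import numpy as np

    # A 2-microsecond envelope keeps the sketch small; real stimulation pulses would be longer.
    f_carrier = 915e6
    fs = 8 * f_carrier
    t = np.arange(0, 2e-6, 1 / fs)

    envelope = np.ones_like(t)                       # output signal from the microcontroller
    carrier = np.sin(2 * np.pi * f_carrier * t)      # RF carrier
    transmission_signal = envelope * carrier         # amplitude-modulated signal fed to the amplifier

    print(len(transmission_signal), "samples at", fs / 1e9, "GS/s")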
The Bluetooth interface1013and USB interface1012interact with the Bluetooth module1004and the USB module, respectively, to communicate with the programmer912. The communication antenna1006and a TX antenna1008may, for example, be configured in a variety of sizes and form factors, including, but not limited to, a patch antenna, a slot antenna, or a dipole antenna. The TX antenna1008may be adapted to transmit a transmission signal, in addition to power, to the implantable, passive neural stimulator922. As discussed above, an output signal generated by the microcontroller908may be used by the modulator909to provide the instructions for creation of a modulated RF carrier signal. The RF carrier signal may be amplified by amplifier906to generate the transmission signal. A directional coupler1009may be utilized to provide two-way coupling so that both the forward power of the transmission signal transmitted by the TX antenna1008and the reverse power of the reflected transmission may be picked up by power detector1022of telemetry feedback module1002. In some implementations, a separate communication antenna1006may function as the receive antenna for receiving the telemetry feedback signal from the wireless stimulation device922. In some configurations, the communication antenna may operate at a higher frequency band than the TX antenna1008. For example, the communication antenna1006may have a characteristic frequency that is a second harmonic of the characteristic frequency of TX antenna1008, as discussed above. In some embodiments, the microwave field stimulator902may additionally include a telemetry feedback module1002. In some implementations, the telemetry feedback module1002may be coupled directly to communication antenna1006to receive telemetry feedback signals. The power detector1022may provide a reading of both the forward power of the transmission signal and a reverse power of a portion of the transmission signal that is reflected during transmission. The telemetry signal, forward power reading, and reverse power reading may be amplified by low noise amplifier (LNA)1024for further processing. For example, the telemetry feedback module1002may be configured to process the telemetry feedback signal by demodulating the telemetry feedback signal to extract the encoded information. Such encoded information may include, for example, a status of the wireless stimulation device922and one or more electrical parameters associated with a particular channel (electrode) of the wireless stimulation device922. Based on the decoded information, the telemetry feedback module1002may be used to calculate a desired operational characteristic for the wireless stimulation device922. Some embodiments of the MFS902may further include a power management module1004. A power management module1004may manage various power sources for the MFS902. Example power sources include, but are not limited to, lithium-ion or lithium polymer batteries. The power management module1004may provide several operational modes to save battery power. Example operation modes may include, but are not limited to, a regular mode, a low power mode, a sleep mode, a deep sleep/hibernate mode, and an off mode. The regular mode provides regulation of the transmission of transmission signals and stimulus to the wireless stimulation device922. In regular mode, the telemetry feedback signal is received and processed to monitor the stimuli as normal. 
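The waveform parameter information listed above (pulse shape, duration, amplitude and repetition frequency), together with the per-electrode polarity assignments, can be pictured as a small configuration record. The Python sketch below is hypothetical; the field names and defaults are illustrative and do not describe the actual format stored in storage device1014.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class WaveformParameters:
        """Hypothetical record mirroring the parameters listed above."""
        shape: str = "rectangular"
        pulse_width_us: float = 200.0
        amplitude_ma: float = 2.0
        repetition_hz: float = 60.0

    @dataclass
    class DeviceConfiguration:
        waveform: WaveformParameters = field(default_factory=WaveformParameters)
        # One of "+", "-", or "0" (off) per electrode, as in the polarity scheme above.
        polarities: List[str] = field(default_factory=lambda: ["0"] * 8)

    cfg = DeviceConfiguration()
    cfg.polarities[0], cfg.polarities[7] = "-", "+"
    print(cfg)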
Low-power mode also provides regulation of the transmission of transmission signals and stimulus to the electrodes of the wireless stimulation device. However, under this mode, the telemetry feedback signal may be ignored. More specifically, the telemetry feedback signal encoding the stimulus power may be ignored, thereby saving MFS902overall power consumption. Under sleep mode, the transceiver and amplifier906are turned off, while the microcontroller is kept on with the last saved state in its memory. Under the deep sleep/hibernate mode, the transceiver and amplifier906are turned off, while the microcontroller is in power down mode, but power regulators are on. Under the off mode, all transceiver, microcontroller and regulators are turned off achieving zero quiescent power. FIG.11is a flowchart showing an example process in which the microwave field stimulator902transmits polarity setting information to the wireless stimulation device922. Polarity assignment information is stored in a non-volatile memory1102within the microcontroller908of the MFS902. The polarity assignment information may be representative-specific and may be chosen to meet the specific need of a particular patient. Based on the polarity assignment information chosen for a particular patient, the microcontroller908executes a specific routine for assigning polarity to each electrode of the electrode array. The particular patient has a wireless stimulation device as described above. In some implementations, the polarity assignment procedure includes sending a signal to the wireless stimulation device with an initial power-on portion followed by a configuration portion that encodes the polarity assignments. The power-on portion may, for example, simply include the RF carrier signal. The initial power-on portion has a duration that is sufficient to power-on the wireless stimulation device and allow the device to reset into a configuration mode. Once in the configuration mode, the device reads the encoded information in the configuration portion and sets the polarity of the electrodes as indicated by the encoded information. Thus, in some implementations, the microcontroller908turns on the modulator909so that the unmodulated RF carrier is sent to the wireless stimulation device1104. After a set duration, the microcontroller908automatically initiates transmitting information encoding the polarity assignment. In this scenario, the microcontroller908transmits the polarity settings in the absence of handshake signals from the wireless stimulation device. Because the microwave field stimulator902is operating in close proximity to wireless stimulation device922, signal degradation may not be severe enough to warrant the use of handshake signals to improve quality of communication. To transmit the polarity information, the microcontroller908reads the polarity assignment information from the non-volatile memory and generates a digital signal encoding the polarity information1106. The digital signal encoding the polarity information may be converted to an analog signal, for example, by a digital-to-analog (DAC) converter1112. The analog signal encoding the waveform may modulate a carrier signal at modulator909to generate a configuration portion of the transmission signal (1114). This configuration portion of the transmission signal may be amplified by the power amplifier906to generate the signal to be transmitted by antenna907(1116). 
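The operational modes and the power-on/configuration/stimulation sequencing described above can be summarized in a short sketch. In the Python below, the mode descriptions are paraphrases, and the five-millisecond power-on duration and segment encodings are assumptions.

    from enum import Enum

    class MfsMode(Enum):
        # Paraphrases of the operational modes described above.
        REGULAR = "transmit and process telemetry"
        LOW_POWER = "transmit; ignore stimulus-power telemetry"
        SLEEP = "transceiver and amplifier off; microcontroller keeps last state"
        DEEP_SLEEP = "transceiver and amplifier off; microcontroller powered down, regulators on"
        OFF = "everything off; zero quiescent power"

    def build_transmission(polarity_assignments, stimulation_waveform, power_on_ms=5):
        # Assemble the transmission in the order described: an initial power-on portion
        # (unmodulated carrier), then a configuration portion encoding the polarity
        # assignments, then the stimulation portion.
        return [
            ("power_on", {"content": "unmodulated carrier", "duration_ms": power_on_ms}),
            ("configuration", {"content": polarity_assignments}),
            ("stimulation", {"content": stimulation_waveform}),
        ]

    print([segment for segment, _ in build_transmission(["-", "+", "0"], "pulse train")])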
Thereafter, the configuration portion of the transmission signal is transmitted to the wireless stimulation device922(1118). Once the configuration portion is transmitted to the wireless stimulation device, the microcontroller908initiates the stimulation portion of the transmission signal. Similar to the configuration portion, the microcontroller908generates a digital signal that encodes the stimulation waveform. The digital signal is converted to an analog signal using the DAC. The analog signal is then used to modulate a carrier signal at modulator909to generate a stimulation portion of the transmission signal. In other implementations, the microcontroller908initiates the polarity assignment protocol after the microcontroller908has recognized a power-on reset signal transmitted by the neural stimulator. The power-on reset signal may be extracted from a feedback signal received by microcontroller908from the wireless stimulation device922. The feedback signal may also be known as a handshake signal in that it alerts the microwave field stimulator902of the ready status of the wireless stimulation device922. In an example, the feedback signal may be demodulated and sampled to digital domain before the power-on reset signal is extracted in the digital domain. FIG.12is a flow chart showing an example of the process in which the microwave field stimulator902receives and processes the telemetry feedback signal to make adjustments to subsequent transmissions. In some implementations, the microcontroller908polls the telemetry feedback module1002(1212). The polling is to determine whether a telemetry feedback signal has been received (1214). The telemetry feedback signal may include information based on which the MFS902may ascertain the power consumption being utilized by the electrodes of the wireless stimulation device922. This information may also be used to determine the operational characteristics of the combination system of the MFS902and the wireless stimulation device922, as will be discussed in further detail in association withFIG.13. The information may also be logged by the microwave field stimulator902so that the response of the patient may be correlated with past treatments received over time. The correlation may reveal the patient's individual response to the treatments the patient has received up to date. If the microcontroller908determines that telemetry feedback module1002has not yet received telemetry feedback signal, microcontroller908may continue polling (1212). If the microcontroller908determines that telemetry feedback module1002has received telemetry feedback signal, the microcontroller908may extract the information contained in the telemetry feedback signal to perform calculations (1216). The extraction may be performed by demodulating the telemetry feedback signal and sampling the demodulated signal in the digital domain. The calculations may reveal operational characteristics of the wireless stimulation device922, including, for example, voltage or current levels associated with a particular electrode, power consumption of a particular electrode, and/or impedance of the tissue being stimulated through the electrodes. Thereafter, in certain embodiments, the microcontroller908may store information extracted from the telemetry signals as well as the calculation results (1218). The stored data may be provided to a user through the programmer upon request (1220). The user may be the patient, the doctor, or representatives from the manufacturer. 
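The polling flow described above (poll the telemetry module, demodulate, compute electrical parameters, store the results) can be sketched as a short loop. In the Python below, FakeTelemetryModule is a stand-in that returns made-up readings; the real demodulation and storage are omitted.

    import random
    import time

    class FakeTelemetryModule:
        """Stand-in for telemetry feedback module 1002; real demodulation is omitted."""
        def read(self):
            if random.random() < 0.5:
                return None                       # nothing received yet
            v = 2.0 + random.uniform(-0.1, 0.1)   # sensed electrode voltage (V)
            i = 2e-3                              # sensed current (A)
            return v, i

    def poll_telemetry(module, n_polls=20, interval_s=0.01):
        log = []
        for _ in range(n_polls):
            packet = module.read()
            if packet is None:
                time.sleep(interval_s)            # keep polling
                continue
            v, i = packet
            log.append({"voltage_v": v, "current_a": i,
                        "power_w": v * i, "impedance_ohm": v / i})
        return log

    print(len(poll_telemetry(FakeTelemetryModule())), "telemetry records processed")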
The data may be stored in a non-volatile memory, such as, for example, NAND flash memory or EEPROM. In other embodiments, a power management schema may be triggered1222by the microcontroller (908). Under the power management schema, the microcontroller908may determine whether to adjust a parameter of subsequent transmissions (1224). The parameter may be amplitude or the stimulation waveform shape. In one implementation, the amplitude level may be adjusted based on a lookup table showing a relationship between the amplitude level and a corresponding power applied to the tissue through the electrodes. In one implementation, the waveform shape may be pre-distorted to compensate for a frequency response of the microwave field stimulator902and the wireless stimulation device922. The parameter may also be the carrier frequency of the transmission signal. For example, the carrier frequency of the transmission signal may be modified to provide fine-tuning that improves transmission efficiency. If an adjustment is made, the subsequently transmitted transmission signals are adjusted accordingly. If no adjustment is made, the microcontroller908may proceed back to polling the telemetry feedback module1002for telemetry feedback signal (1212). In other implementations, instead of polling the telemetry feedback module1002, the microcontroller908may wait for an interrupt request from telemetry feedback module1002. The interrupt may be a software interrupt, for example, through an exception handler of the application program. The interrupt may also be a hardware interrupt, for example, a hardware event and handled by an exception handler of the underlying operating system. FIG.13is a schematic of an example implementation of the power, signal and control flow for the wireless stimulation device922. A DC source1302obtains energy from the transmission signal received at the wireless stimulation device922during the initial power-on portion of the transmission signal while the RF power is ramping up. In one implementation, a rectifier may rectify the received power-on portion to generate the DC source1302and a capacitor1304may store a charge from the rectified signal during the initial portion. When the stored charge reaches a certain voltage (for example, one sufficient or close to sufficient to power operations of the wireless stimulation device922), the power-on reset circuit1306may be triggered to send a power-on reset signal to reset components of the neural stimulator. The power-on set signal may be sent to circuit1308to reset, for example, digital registers, digital switches, digital logic, or other digital components, such as transmit and receive logic1310. The digital components may also be associated with a control module1312. For example, a control module1312may include electrode control252, register file732, etc. The power-on reset may reset the digital logic so that the circuit1308begins operating from a known, initial state. In some implementations, the power-on reset signal may subsequently cause the FPGA circuit1308to transmit a power-on reset telemetry signal back to MFS902to indicate that the implantable wireless stimulation device922is ready to receive the configuration portion of the transmission signal that contains the polarity assignment information. For example, the control module1312may signal the RX/TX module1310to send the power-on reset telemetry signal to the telemetry antenna1332for transmission to MFS902. 
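The lookup-table adjustment described above, relating an amplitude setting to the power delivered to tissue, can be illustrated as follows; the table values are hypothetical calibration points, not data from the disclosure.

    # Hypothetical calibration table: transmit amplitude setting -> measured tissue power (mW).
    AMPLITUDE_TO_POWER_MW = {1: 0.5, 2: 1.1, 3: 2.3, 4: 4.8, 5: 9.5}

    def choose_amplitude(target_power_mw):
        """Pick the amplitude setting whose tabulated power is closest to the target."""
        return min(AMPLITUDE_TO_POWER_MW,
                   key=lambda level: abs(AMPLITUDE_TO_POWER_MW[level] - target_power_mw))

    print(choose_amplitude(2.0))   # -> 3 with the assumed table above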
In other implementations, the power-on reset telemetry signal may not be provided. As discussed above, due to the proximity between MFS902and implantable, passive neural stimulator922, signal degradation due to propagation loss may not be severe enough to warrant implementations of handshake signals from the implantable, passive stimulator922in response to the transmission signal. In addition, the operational efficiency of implantable, passive neural stimulator922may be another factor that weighs against implementing handshake signals. Once the FPGA circuit1308has been reset to an initial state, the FPGA circuit1308transitions to a configuration mode configured to read polarity assignments encoded on the received transmission signal during the configuration portion. In some implementations, the configuration portion of the transmission signal may arrive at the wireless stimulation device through the RX antenna1334. The transmission signal received may provide an AC source1314. The AC source1314may be at the carrier frequency of the transmission signal, for example, from about 300 MHz to about 8 GHz. Thereafter, the control module1312may read the polarity assignment information and set the polarity for each electrode through the analog mux control1316according to the polarity assignment information in the configuration portion of the received transmission signal. The electrode interface252may be one example of analog mux control1316, which may provide a channel to a respective electrode of the implantable wireless stimulation device922. Once the polarity for each electrode is set through the analog mux control1316, the implantable wireless stimulation device922is ready to receive the stimulation waveforms. Some implementations may not employ a handshake signal to indicate the wireless stimulation device922is ready to receive the stimulation waveforms. Rather, the transmission signal may automatically transition from the configuration portion to the stimulation portion. In other implementations, the implantable wireless stimulation device922may provide a handshake signal to inform the MFS902that implantable wireless stimulation device922is ready to receive the stimulation portion of the transmission signal. The handshake signal, if implemented, may be provided by RX/TX module1310and transmitted by telemetry antenna1332. In some implementations, the stimulation portion of the transmission signal may also arrive at implantable wireless stimulation device through the RX antenna1334. The transmission signal received may provide an AC source1314. The AC source1314may be at the carrier frequency of the transmission signal, for example, from about 300 MHz to about 8 GHz. The stimulation portion may be rectified and conditioned in accordance with discussions above to provide an extracted stimulation waveform. The extracted stimulation waveform may be applied to each electrode of the implantable wireless stimulation device922. In some embodiments, the application of the stimulation waveform may be concurrent, i.e., applied to the electrodes all at once. As discussed above, the polarity of each electrode has already been set, and the stimulation waveform is applied to the electrodes in accordance with the polarity settings for the corresponding channel. In some implementations, each channel of analog mux control1316is connected to a corresponding electrode and may have a reference resistor placed serially. 
For example,FIG.13shows reference resistors1322,1324,1326, and1328in a serial connection with a matching channel. Analog mux control1316may additionally include a calibration resistor1320placed in a separate and grounded channel. The calibration resistor1320is in parallel with a given electrode on a particular channel. The reference resistors1322,1324,1326, and1328as well as the calibration resistor1320may also be known as sensing resistors718. These resistors may sense an electrical parameter in a given channel, as discussed below. In some configurations, an analog controlled carrier modulator may receive a differential voltage that is used to determine the carrier frequency that should be generated. The generated carrier frequency may be proportional to the differential voltage. An example analog controlled carrier modulator is VCO733. In one configuration, the carrier frequency may indicate an absolute voltage, measured in terms of the relative difference from a pre-determined and known voltage. For example, the differential voltage may be the difference between a voltage across a reference resistor connected to a channel under measurement and a standard voltage. The differential voltage may be the difference between a voltage across calibration resistor1320and the standard voltage. One example standard voltage may be the ground. In another configuration, the carrier frequency may reveal an impedance characteristic of a given channel. For example, the differential voltage may be the difference between the voltage at the electrode connected to the channel under measurement and a voltage across the reference resistor in series. Because of the serial connection, a comparison of the voltage across the reference resistor and the voltage at the electrode would indicate the impedance of the underlying tissue being stimulated relative to the impedance of the reference resistor. As the reference resistor's impedance is known, the impedance of the underlying tissue being stimulated may be inferred based on the resulting carrier frequency. For example, the differential voltage may be the difference between a voltage at the calibration resistor and a voltage across the reference resistor. Because the calibration resistor is placed in parallel to a given channel, the voltage at the calibration is substantially the same as the voltage at the given channel. Because the reference resistor is in a serial connection with the given channel, the voltage at the reference resistor is a part of the voltage across the given channel. Thus, the difference between the voltage at the calibration resistor and the voltage across the reference resistor correspond to the voltage drop at the electrode. Hence, the voltage at the electrode may be inferred based on the voltage difference. In yet another configuration, the carrier frequency may provide a reading of a current. For example, if the voltage over reference resistor1322has been measured, as discussed above, the current going through reference resistor and the corresponding channel may be inferred by dividing the measured voltage by the impedance of reference resistor1322. Many variations may exist in accordance with the specifically disclosed examples above. The examples and their variations may sense one or more electrical parameters concurrently and may use the concurrently sensed electrical parameters to drive an analog controlled modulator device. The resulting carrier frequency varies with the differential of the concurrent measurements. 
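The channel-sensing relationships described above (current from the voltage across the series reference resistor, tissue impedance from voltage and current, and a carrier frequency proportional to a differential voltage) can be expressed in a few lines. In the Python sketch below, the reference resistance, center frequency and volts-to-hertz gain are assumptions; the disclosure gives no numeric values.

    def sense_channel(v_electrode, v_ref_resistor, r_ref_ohm, f_center_hz=2.45e9, hz_per_volt=1e6):
        """Turn a sensed differential voltage into a carrier frequency and infer
        the tissue impedance and current, following the relationships above."""
        i_channel = v_ref_resistor / r_ref_ohm            # current through the series reference resistor
        z_tissue = v_electrode / i_channel                # impedance of the stimulated tissue
        diff_v = v_electrode - v_ref_resistor
        f_carrier = f_center_hz + hz_per_volt * diff_v    # analog-controlled carrier modulator (e.g., VCO)
        return i_channel, z_tissue, f_carrier

    print(sense_channel(v_electrode=2.0, v_ref_resistor=0.02, r_ref_ohm=10.0))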
The telemetry feedback signal may include a signal at the resulting carrier frequency. The MFS902may determine the carrier frequency variation by demodulating at a fixed frequency and measure phase shift accumulation caused by the carrier frequency variation. Generally, a few cycles of RF waves at the resulting carrier frequency may be sufficient to resolve the underlying carrier frequency variation. The determined variation may indicate an operation characteristic of the implantable wireless stimulation device922. The operation characteristics may include an impedance, a power, a voltage, a current, etc. The operation characteristics may be associated with an individual channel. Therefore, the sensing and carrier frequency modulation may be channel specific and applied to one channel at a given time. Consequently, the telemetry feedback signal may be time shared by the various channels of the implantable wireless stimulation device922. In one configuration, the analog MUX1318may be used by the controller module1312to select a particular channel in a time-sharing scheme. The sensed information for the particular channel, for example, in the form of a carrier frequency modulation, may be routed to RX/TX module1310. Thereafter, RX/TX module1310transmits, through the telemetry antenna1332, to the MFS902, the telemetry feedback encoding the sensed information for the particular channel. FIG.14Ais a diagram of an example of a system for stimulating an excitable tissue using multiple electrode arrays. The system includes an external controller1402and an implantable wireless stimulation device1400. External controller1402may include a user interface and one or more antennas. In one configuration, the one or more antennas may transmit one or more input signals to the implantable device1400with neither cable connections nor inductive coupling. For instance, the input signals may be transmitted via electrical radiative coupling to antenna(s) on the implantable device1400. The input signals may contain electrical energy to power the implantable device1400. The input signals may also contain polarity assignment information for the electrodes in electrode arrays1406A and1406B on the implantable device1400. Common portion1404may be a central stem that houses antenna(s) for receiving the input signal as well as the circuits for harvesting the electrical energy contained in the input signal received. The circuits may also generate, using the harvested electrical energy, excitation waveforms to deliver to electrode arrays1406A and1406B. As illustrated, the implantable device1400may include two branches of electrode arrays1406A and1406B connected to the common portion1404, with each array1406A and1406B including eight (8) electrodes. In this example, the excitation waveforms from common portion1404provide the current that drives each electrode on both branches. FIG.14Bis a diagram of an example of the implantable device1400implemented as a Y-joint receiver with two connectors integrally attached to electrode array. The implantable device1400includes a central stem1404and two branch stems1414A and1414B. Central stem1404includes a tip1418that may include a suturing feature for anchoring the central stem1404to tissue. Central stem1404houses antenna traces1410A and1410B as well as circuit1408. In some examples, the antenna(s) on the implantable device can be positioned towards tip1418. 
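Resolving the carrier frequency variation by demodulating at a fixed frequency and accumulating phase, as described above, amounts to estimating a residual frequency offset from successive phase steps. The Python sketch below uses synthetic baseband samples and an assumed sample rate; as noted above, a few cycles suffice.

    import numpy as np

    def carrier_offset_from_phase(iq_samples, fs_hz):
        """Estimate the telemetry carrier's offset from the fixed demodulation
        frequency using the phase accumulated between successive samples."""
        phase_steps = np.angle(iq_samples[1:] * np.conj(iq_samples[:-1]))
        return fs_hz * np.mean(phase_steps) / (2 * np.pi)   # residual offset in Hz

    # Synthesize 64 baseband samples with a 250 kHz offset at a 10 MS/s sample rate.
    fs = 10e6
    t = np.arange(64) / fs
    iq = np.exp(2j * np.pi * 250e3 * t)
    print(f"recovered offset ~ {carrier_offset_from_phase(iq, fs)/1e3:.1f} kHz")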
Antenna traces1410A and1410B may each be radiatively coupled to an antenna for receiving input signals from external controller1402and/or for sending a telemetry signal to external controller1402. The input signal may contain electrical energy, excitation waveform parameter information, and polarity assignment information. The input signal may be received on a carrier signal having a frequency between about 800 KHz and 5.8 GHz. The electrical energy may power the entire implantable device1400. Circuit1408may include waveform conditioning circuitry to extract the electrical energy from the input signal to power the implantable device. The excitation waveform may include multiple excitation pulses. The waveform parameter information may include the shape, duration, amplitude of each pulse, as well as pulse repetition frequency. The waveform conditioning circuitry may additionally create electrical pulses as stimulus pulses based on the electrical energy and according to the excitation waveform parameter information. The stimulus pulses created may be at a frequency of about 5 to 20,000 Hz. The polarity assignment information refers to the polarity assigned to each electrode on a particular electrode array. The polarity assignment may be used to program the interfaces to set the corresponding electrodes on a particular electrode array. In the example Y-joint implantable device1400, branch stems1414A and1414B respectively houses the electrode arrays1406A and1406B. Branch stems1414A and1414B may converge at fork1411. Branch stems1414A and1414B may respectively include cables1412A and1412B, each respectively connecting circuit1408to the electrode arrays1406A and1406B. The cables may also be referred to as wires. In one example, cables1412A and1412B may be laser welded metal or alloy. For instance, cables1412A and1412B may include MP35N nickel cobalt alloy. The electrode arrays1406A and1406B each include eight electrodes. The electrode array1406A includes electrodes1406A-0to1406A-7. The electrode array1406B includes electrodes1406B-0to1406B-7. In one instance, each electrode may be wrapped circumferentially on the exterior wall of a branch stem. Branch stems1414A and1414B may extend respectively to tips1416A and1416B. Tips1416A and1416B may each include suturing features (not shown) for anchoring the respective electrode arrays to surrounding tissue. FIG.14Cis a block diagram of illustrating an example of the circuitry of the implantable device1400. An RX antenna705receives the input signal transmitted from external controller1402. The input signal may be received at RX antenna705via electrical radiative coupling. The RX antenna705may be embedded as a dipole, microstrip, folded dipole or other antenna configuration other than a coil configuration. The input signal contains electrical energy for powering the wireless implantable neural stimulator1400and for providing stimulation pulses to electrodes1408A0-7and1408B0-7. Antenna725may include a telemetry antenna to route received data, such as polarity assignment information, to the device interfaces1420A and1420B such that the polarity of electrodes on the implantable device can be programmed accordingly. In one example, the input signal received at antenna705is processed at RF interface1428A of controller1421. Electrical energy contained in the input signal may be extracted to power the implantable neural stimulator device1400. Stimulation pulses may be created based on the excitation waveform parameter information contained in the input signal. 
The created stimulus pulses can be routed to the device interfaces1420A and1420B to drive the respective eight electrodes connected thereto. Description box1422A shows the schematic for electrode array1406A. As illustrated, a capacitor bank1424A (with eight capacitors) is available for the electrode array of eight electrodes, namely1408A-0to1408A-7. A capacitor may provide power to the electrode connected thereto. Similarly, description box1422B shows the schematic for electrode array1406B, with capacitor bank1424B serving the electrode array of eight electrodes, namely1408B-0to1408B-7. In another example, polarity assignment information encoded in the input signal may be received at antenna725and processed at RF interface1428B of controller1421. The polarity assignment information may be decoded and used to program the device interfaces1420A and1420B so that the polarities of electrodes1408A-0to1408A-7and1408B-0to1408B-7can be set according to the polarity assignment information. Capacitor1422between Vcc switch1426and ground1422may store electrical energy for a power-on reset circuit. In case of a power-on event, the electric charges stored in capacitor1422may be used to reset the polarity assignment of each electrode and to reset register information on controller1421. FIG.15shows an example of an electrode assignment for the implantable device1400. In this example, the Y-joint implantable device includes two electrode arrays, namely, electrode array1406A and1406B. Each electrode array can include up to eight electrodes. However, more or fewer electrodes can be used for each array, or the form factor of the array may vary. As such, the array can be composed of a cylindrical catheter-type body with cylindrical electrodes spaced a distance N apart, or may have a connector to a paddle or other flat, unidirectional device that contains N electrode pads arranged in various patterns to yield the desired effective treatment option for the stimulation of the tissue. The electrodes on the electrode arrays1406A and1406B are indexed according to the top mapping inFIG.15. In this mapping, the two electrode arrays are represented by an eight by two matrix where each row of the matrix represents one of the eight electrodes on one of the two "Y" electrode arrays. In this example, the rightmost electrode is mapped as electrode #0 while the leftmost electrode is mapped as electrode #7. The mapping in this example is linear. The polarity assignment for a particular electrode can be cathodic (−), anodic (+), or off. Specifically, each electrode can take on a polarity of either a source or sink, known as an anode or a cathode, or otherwise denoted as positive or negative. Further, each electrode of each array can additionally be set to an on or off state; in the off state the circuit is functionally open and the electrode is left in a neutral electrical state. In this example, electrodes #7 and #6 of the electrode array1406A are assigned as cathodic while electrodes #7 and #6 of the electrode array1406B are assigned as anodic. Electrodes #3 to #5 of the electrode arrays1406A and1406B are assigned as off. Electrodes #1 and #2 of the electrode array1406A are assigned as anodic while electrodes #1 and #2 of the electrode array1406B are assigned as cathodic. The significance of programming polarity for each electrode on a particular electrode array will be explained in detail below. FIG.16shows an example of longitudinal currents formed between electrodes of an electrode array of the Y-joint receiver. 
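The example assignment of FIG.15can be written out as the eight by two matrix described above. In the Python sketch below, "-" denotes cathodic, "+" anodic and "0" off; electrode #0 is not assigned in the example above and is assumed off here, and the serialization order is an assumption.

    # '-' = cathode, '+' = anode, '0' = off; electrode #0 is assumed off (not specified above).
    polarity_matrix = {
        #  index: (array 1406A, array 1406B)
        7: ("-", "+"),
        6: ("-", "+"),
        5: ("0", "0"),
        4: ("0", "0"),
        3: ("0", "0"),
        2: ("+", "-"),
        1: ("+", "-"),
        0: ("0", "0"),
    }

    def flatten_for_transfer(matrix):
        """Serialize the 8x2 matrix row by row for transfer to the central stem."""
        return [matrix[i][col] for i in range(8) for col in (0, 1)]

    print(flatten_for_transfer(polarity_matrix))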
A longitudinal current is a current that flows substantially parallel to the longitudinal axis of an electrode array. The current flows from (e.g., serving as a source) or exits at (e.g., serving as a sink) an electrode on the electrode array. In this illustration, electrodes #3 to #5 of the electrode arrays1406A are assigned as cathodic while electrode #4 of the electrode array1406is assigned as anodic. As illustrated, longitudinal currents1602A and1602B flow from electrode #5 to electrode #4. Current1602A is located above electrode array1406while current1602B is located below electrode array1406. When originating from electrode #5, the combined currents1602A and1602B may be measured at 1 mA. Longitudinal currents1604A and1604B may flow from electrode #2 to electrode #4. Current1604A is located above electrode array1406while current1604B is located below electrode array1406. When originating from electrode #3, the combined currents1602A and1602B may be measured at 1 mA. When currents1602A,1602B,1604A, and1604B converge at electrode #4, the combined currents may be measured at 2 mA. Currents1602A,1602B,1604A, and1604B can provide therapeutic relief to excitable tissue, such as neural tissue, on their paths. This type of electrode configuration can be used to activate neural tissue lateral of the midline. In some implementations, spatial distribution pattern of currents can be further enriched by the introduction of multiple electrode arrays.FIG.17Ashows an example of lateral currents formed between electrodes of two electrode arrays of the Y-joint receiver. A lateral current is a current that flows in a direction substantially transverse to a longitudinal axis of the electrode array. The current either originates from or exits at an electrode on the electrode array. In this illustration, electrode #3 of electrode array1406A is assigned as cathodic while electrode #3 of electrode array1406B is assigned as anodic. Currents1702A and1704B flow from electrode #3 of electrode array1406A to electrode #3 of electrode array1406B. Traversing the midline, currents1702A and1704B are located on mirrored paths. Originating at electrode #3 of electrode array1406A and ending at electrode #3 on electrode array1406B, the combined currents1702A and1702B are measured at 1 mA. Currents1702A and1704B can provide therapeutic relief to excitable tissue, such as neural tissue, on their paths. This electrode configuration can stimulate neural tissue closer to the midline with a horizontally oriented electrical field across the epidural space. FIG.17Bshows an example of a combination of lateral current field and longitudinal current field formed between electrodes of two electrode arrays of a Y-joint receiver. In this illustration, electrode #3 of electrode array1406A is assigned as anodic while electrode #5 of electrode array1406A and electrode #4 of electrode array1406B are assigned as cathodic. Longitudinal currents1704A and1704B flow from electrode #5 of electrode array1406A to electrode #3 of electrode array1406A. Meanwhile, lateral currents1706A and1706B flow from electrode #4 of electrode array1406B to electrode #3 of electrode array1406A. Longitudinal current1704A is located above electrode array1406A while longitudinal current1704B is located below electrode array1406A. When originating from electrode #5, the combined currents1702A and1702B may be measured at 1 mA. Traversing the midline, currents1706A and1706B are located on mirrored paths. 
When originating from electrode #4, the combined currents1706A and1706B may be measured at 1 mA. When currents1704A,1704B,1706A, and1706B converge at electrode #3 of electrode array1406, the combined currents may be measured at 2 mA. Currents1704A,1704B,1706A, and1706B can provide therapeutic relief to excitable tissue, such as neural tissue, on their paths. As disclosed herein, configurations of the electrodes' polarity can be set from the external controller1402to determine a particular electrodes combination to activate the tissue at a desired zone. In one example, the user interface on external controller1402to set the polarity and the power state for each electrode of the array can be in the form of a matrix interface. In this example, the matrix at the interface can be filled in by the operator for each of the N electrodes through a touch-screen. Once the matrix values are set, the operator initiates a data transfer to the central stem of the Y-joint implantable device. Circuits1408on board central stem1404may receive the data as the 8×2 matrix and store the data in self-contained memory. When electrical energy in the input signal has been harvested to power the Y-joint implant, the polarity to all those electrodes can be set according to this data in self-contained memory. In another example, the user interface on external controller1402may enable an operator to alter/modify the polarity setting for a particular electrode on a given array individually. In particular, the polarity setting of one electrode may be updated from the user interface on external controller1402without transmitting information concerning the polarity setting of other electrodes. In these examples, external controller1402is configured to transmit the input signal at least 12 cm, under an outer skin surface of the patient through tissue to the target site. FIG.17Cshows an example of stimulation zones formed by current fields between electrodes of two electrode arrays of the Y-joint receiver. Each stimulation zone is formed by virtue of electric field within the zone reaching an activation potential to cause neural activity. The electric field generated in-situ depends on electrical current as well as the impedance of the underlying tissue. The electrical current may include contributions from both longitudinal currents and lateral currents. A stimulation zone may also be known as a focal zone. Stimulation zones may be formed near an electrode, such as stimulation zones1708A,1708B and1708F. Stimulation zones may also be formed away from the electrodes, for example, near mid-line, as illustrated by stimulation zones1708C,1708D, and1708E. A longitudinal current may also be formed between two electrode arrays, as illustrated byFIG.17D. In this example, electrode arrays1406A and1406B are placed such that the distal ends are facing each other. Such placement may be achieved when the two electrode arrays form a loop, or when one electrode array is bent to tilt towards the other. In this demonstrative example, longitudinal currents1710G and1710H flow from electrode #2 on electrode array1406A to electrode #7 on electrode array1406B. Electrode #2 on electrode array1406A is assigned as cathodic while electrode #7 on electrode array1406B is assigned as anodic. Longitudinal current1710G flows on top of the electrode arrays while longitudinal current1710H flows underneath the electrode arrays. 
Originating at electrode #2 on electrode array1406A, the combined strength of longitudinal currents1710G and1710H may be measured at 1 mA. Exiting at electrode #7 on electrode array1406B, the combined strength of longitudinal currents1710G and1710H may be measured at 1 mA. For various stimulation therapies, two or more devices may need to be placed in the epidural space where the electrodes from each electrode array can generate an electric field from one contact on one electrode array to another contact on the other electrode array. Two general scenarios may be noteworthy for placing the electrode arrays of the disclosed Y-joint implantable device. In one scenario, the electrode arrays may be placed at the same spinal level, and they are separated laterally by a few millimeters. The electrode arrays are ideally offset from the physiological midline by the same distance.FIGS.17A to17Ccorrespond to this scenario. In contrast, in another scenario such as high-frequency sub-threshold stimulation, the two electrode arrays may be placed head to tail and aligned with the anatomical midline, where the combination of the two electrode arrays mimics a single long electrode array with twice the number of contacts.FIG.17Dcorresponds to this latter scenario. The implantation procedure for the Y-joint receiver disclosed herein may include the use of stylets or cannulas, as discussed below.FIG.18Ashows an example of an implantable device with a Y-joint receiver in which the stylet lumen for each electrode array exits at the central stem of the Y-joint receiver. As illustrated, stylet1802A is being placed into stylet lumen1804A at the central stem1404of implantable device1800. Stylet lumen1804A runs through central stem1404which also houses circuit1408, as discussed above. Stylet lumen1804A extends into branch stem1414A and becomes stylet lumen1806A. Stylet lumen1806A runs through the branch stem1414A and exits at tip1416A. The branch stem1414A includes electrode array1406A with eight electrodes, namely1406A-0to1406A-7, as illustrated. Likewise, stylet1802B is being placed into stylet lumen1804B at the central stem1404of implantable device1800. Stylet lumen1804B runs through central stem1404and extends into branch stem1414B to become stylet lumen1806B. Stylet lumen1806B runs through the branch stem1414B and exits at tip1416B. This branch stem1414B includes an electrode array1406B with eight electrodes, namely1406B-0to1406B-7, as illustrated. FIG.18Bshows an example of an implantable device with a Y-joint receiver in which stylet lumens for each electrode array exit at the respective stem and before the central stem of the Y-joint receiver. In this example, the stylet lumens1806A and1806B exit the respective branch stems1414A and1414B before they reach central stem1404. As illustrated, stylet1802A is being placed into stylet lumen1806A which runs through branch stem1414A and exits at tip1416A. Similarly, stylet1802B is being placed into stylet lumen1806B which runs through branch stem1414B and exits at tip1416B. In the above examples, the inserted stylets may serve as guide wires to render the branch stems of the implantable device suitably rigid during implantation, such as, for example, through a needle device or an introducer device. Once an example implantable device1800has been placed in position, the stylets can be withdrawn from the stylet lumens. Thereafter, the implanted device1800may be anchored to surrounding tissues, for example, by utilizing suturing features on tips1416A,1416B, and1418. 
In addition to the use of stylets, cannulas may be used during implantation of the implantable device disclosed herein.FIG.19Ashows an example of a large mouth cannula to fit both the electrode arrays of an implantable device. As illustrated, the branch stems housing the electrode arrays1406A and1406B are inserted into large mouth cannula1900through opening1904on the proximal side. The electrode arrays1406A and1406B may be pushed through channel1902and then exit large mouth cannula1900through opening1906at the distal end. Once the electrodes on the electrode arrays1406A and1406B have been placed in proximity of an excitable tissue, such as a neural tissue, implantable device1400may be anchored to the surrounding tissue. In some instances, suturing features at tips1416A,1416B and1418may be utilized during the anchoring procedure. In these instances, prior to suturing, large mouth cannula1900may be withdrawn from the central stem of implantable device1400. FIG.19Bshows examples of two cannulas for the electrode arrays of an implantable device with a Y-joint receiver. As illustrated, branch stem1414A, which houses the electrode array1406A, is inserted into peel-away cannula1900A through opening1906A on the proximal side. As disclosed herein, branch stem1414A also houses cable1412A that connects the electrode array1406A to a circuit1408on central stem1404. The electrode array1406A may be pushed through channel1902A and then exit peel-away cannula1900A through opening1906A at the distal end. Once the electrodes on the electrode array1406A have been placed in proximity of an excitable tissue, such as a neural tissue, branch stem1414A may be anchored to the surrounding tissue. In some instances, suturing features at tip1416A may be utilized during the anchoring procedure. In these instances, prior to suturing, peel-away cannula1900A may be withdrawn from the branch stem1414A of implantable device1400. In one instance, peel-away cannula1900A may be torn apart and stripped off branch stem1414A. Likewise, branch stem1414B houses the electrode array1406B, which may be inserted into peel-away cannula1900B through opening1906B on the proximal side. Branch stem1414B also houses cable1412B that connects the electrode array1406B to circuit1408on central stem1404. The electrode array1406B may be pushed through channel1902B to exit peel-away cannula1900B via opening1906B at the distal end. Once the electrodes on the electrode array1406B have been placed in proximity of an excitable tissue, such as a neural tissue, branch stem1414B may be anchored to the surrounding tissue. In some instances, suturing features at tip1416B may be utilized during the anchoring procedure. In these instances, prior to suturing, peel-away cannula1900B may be withdrawn from the branch stem1414B of implantable device1400. In one instance, peel-away cannula1900B may be torn apart and stripped off branch stem1414B. While using the example peel-away cannulas for an implantation procedure, the peel-away action may be subsequent to both branch stems being placed into proximity of the excitable tissue. In a similar vein, anchoring may take place once the peel-away cannulas have been stripped off both branch stems. In these instances, suturing features on tips1416A,1416B, and1418can be utilized for anchoring implantable device1400to surrounding tissues. A number of implementations have been described. 
Nevertheless, it will be understood that various modifications may be made. Accordingly, other implementations are within the scope of the following claims.
111,607
11857797
DETAILED DESCRIPTION I. Introduction The following disclosure describes a wireless power coil for a neuromodulation device that is to be implanted in a minimally invasive manner, for example, through a trocar or cannula. The basic principle of an inductively coupled power transfer system includes a transmitter coil and a receiver coil. Both coils form a system of magnetically coupled inductors. An alternating current in the transmitter coil generates a magnetic field which induces a voltage in the receiver coil. By attaching a load to the receiver coil, the voltage can be used to power an electronic device or charge a battery. The magnetic field generated by the transmitter coil radiates (approximately equally) in all directions, hence the flux drops rapidly with distance (obeying an inverse square law). Consequently, the receiver coil must be placed as close as possible to the transmitter coil (less than 10 mm) to intercept the most flux. This requirement of a close proximity between the transmitter coil and the receiver coil is not always practical for neuromodulation therapy, especially in instances in which the neurostimulator is implanted deeper than the subcutaneous layer (e.g., within the brain or thoracic cavity). Alternatively, wireless charging systems have been developed that transfer power between a transmitter coil and a receiver coil that are operating at identical resonant frequencies (determined by the coils' distributed capacitance, resistance and inductance). The basic premise is that the energy "tunnels" from one coil to the other instead of radiating in all directions from the primary coil; and thus resonant wireless charging is not governed by the inverse square law. This technique is still "inductive" in that the oscillating magnetic field generated by the transmitter coil induces a current in the receiver coil and takes advantage of the strong coupling that occurs between resonant coils even when separated by tens of centimeters. Resonant wireless charging addresses the main drawbacks of inductive wireless charging, which are the requirement to closely couple the coils and the demand for precise alignment from the user. However, resonant wireless charging is not without its own drawbacks. A primary drawback is a relatively low efficiency due to flux leakage (even at close range a well-designed system might demonstrate an efficiency of 30% at 2 cm, dropping to 15% at 75 cm coil separation), greater circuit complexity and, because of the (typically) high operating frequencies, potential electromagnetic interference (EMI) challenges. The efficiency of the power transfer in resonant wireless charging depends on the energy coupling rate between the coils and the characteristic parameters for each coil (i.e., inductor). The amount of inductive coupling between coils is measured by their mutual inductance. The strength of the coupling may be expressed as a coupling factor, which is determined by the area of the coils including the distance between the coils, the ratio of width of the receiver coil/width of the transmitter coil, the shape of the coils and the angle between the coils. The characteristic parameters for each coil include the resonance frequency and the intrinsic loss rate of the coils. A quality factor measures how well the system stores energy and is expressed as the ratio of the resonance frequency matching between the coils and the intrinsic loss rate of the coils.
A higher quality factor indicates a lower rate of energy loss relative to the stored energy of the coils; the oscillations die out more slowly. Resonance allows the wireless power transfer system to operate at greater distances compared to a non-resonant one. However, frequency mismatch may be observed, which has the effect of limiting the maximum power stored and thus transferred. One factor that may influence the coupling factor and the quality factor of the coils is the external environment near the coils. In particular, the close proximity of an environmental factor such as metal or tissue has been found to greatly influence the efficiency of the wireless power transfer system. Most conventional wireless power transfer systems involve transferring power between a transmitting coil and a receiving coil in free space without nearby environmental factors. Consequently, the best possible efficiency of most conventional wireless power transmission systems depends on the coupling factor between the coils and the quality factors. However, for a low profile implanted device meant for subcutaneous and deeper applications and implanted in a minimally invasive manner, for example, through a trocar or cannula, the various components of the neurostimulator are packed into a tight volume of space. In a low profile implanted device, this means that the receiving coil will likely be placed next to a number of environmental factors including the metal enclosure, which has been found to influence the coupling (e.g., reduce the energy available to the receiving coil due to energy absorption and change of field shape) and the quality factor of the coils (e.g., create a frequency mismatch). To address these limitations and problems, it has been discovered that to improve efficiency of the wireless power transfer in a system with environmental factors it is important to maintain sufficient spacing between the coils and the environmental factors. Given a fixed area or volume for the delivery mechanism (e.g., trocar or cannula) of the implantable device and wireless power transfer coil, a tradeoff must therefore be found between maximizing the coil area to maintain sufficient coupling and keeping enough spacing to avoid the influence of the environmental factors. One illustrative embodiment of the present disclosure is directed to a medical device that comprises a lossy housing surrounding a power supply; and a receiving coil configured to exchange power wirelessly via a wireless power transfer signal and deliver the power to the power supply. The receiving coil is spaced a predetermined distance from the lossy housing. The predetermined distance is determined based on: (i) a size constraint of a delivery mechanism for the medical device, (ii) a size of the lossy housing, (iii) an area of the receiving coil, and (iv) a coupling factor between the receiving coil and a transmitting coil of greater than 0.5. In other embodiments, a medical device is provided comprising: a housing; a power supply within the housing and connected to an electronics module; and a receiving coil configured to exchange power wirelessly via a wireless power transfer signal and deliver the power to the power supply. The receiving coil is a helical structure comprising a first turn, a last turn, and one or more turns disposed between the first turn and the last turn. A width of the first turn is less than a width of the last turn.
The one or more turns may have a sequential increase in width from the first turn to the last turn such that a shape of the receiving coil is a pyramid. In other embodiments, a wireless power transfer system is provided comprising a transmitting conductive structure configured to exchange power wirelessly via a wireless power transfer signal; and a receiving conductive structure integrated into a lossy environment comprising a lossy component. The receiving conductive structure is configured to exchange power wirelessly with the transmitting conductive structure via the wireless power transfer signal. The receiving conductive structure is spaced a predetermined distance from the lossy component. The predetermined distance is determined based on: (i) a size constraint of a delivery mechanism for the lossy environment, (ii) a size of the lossy component, (iii) an area of the receiving conductive structure, and (iv) a coupling factor between the receiving conductive structure and a transmitting conductive structure of greater than 0.5. In other embodiments, a medical device is provided comprising: a housing; a power supply within the housing and connected to an electronics module; and a receiving coil configured to exchange power wirelessly via a wireless power transfer signal and deliver the power to the power supply. The receiving coil is a two-dimensional or planar structure comprising one or more conductive traces formed on a substrate. The two-dimensional or planar structure is rolled up into a three-dimensional structure. In other embodiments, a neuromodulation system is provided comprising a transmitting conductive structure configured to exchange power wirelessly via a wireless power transfer signal; an implantable neurostimulator including: a lossy housing; a connector attached to a hole in the lossy housing; one or more feedthroughs that pass through the connector; an electronics module within the lossy housing and connected to the one or more feedthroughs; a power supply within the lossy housing and connected to the electronics module; and a receiving conductive structure disposed outside of the housing and connected to the power supply. The receiving conductive structure is configured to exchange power wirelessly with the transmitting conductive structure via the wireless power transfer signal and deliver the power to the power supply. The receiving conductive structure is spaced a predetermined distance from the lossy housing, and the predetermined distance is determined based on: (i) a size constraint of a delivery mechanism for the neuromodulation system, (ii) a size of the lossy housing, (iii) an area of the receiving conductive structure, and (iv) a coupling factor between the receiving conductive structure and a transmitting conductive structure of greater than 0.5. The neuromodulation system further comprises a lead assembly including: a lead body including a conductor material; a lead connector that connects the conductor material to the one or more feedthroughs; and one or more electrodes connected to the conductor material. Advantageously, these approaches provide a neuromodulation system, which has a very low thickness profile that is capable of being implanted in a minimally invasive manner, an efficient wireless power transfer, and greater design flexibility.
More specifically, these approaches enable spacing between the wireless power receiving coil and environmental factors presented by the neuromodulation system while also maximizing the area of the wireless power receiving coil in order to maximize the wireless power transfer into the implanted neurostimulator. II. Neuromodulation Devices and Systems with Wireless Power Transfer FIG.1shows a neuromodulation system100in accordance with some aspects of the present invention. In various embodiments, the neuromodulation system100includes an implantable neurostimulator105, a lead assembly110, and a transmitting conductive structure112(e.g., a transmitting coil). The implantable neurostimulator105may include a housing115, a connector120, a power source125, a receiving conductive structure130(e.g., a wireless power coil or a receiving coil), an antenna135, and an electronics module140(e.g., a computing system). The housing115may be comprised of materials that are biocompatible such as bioceramics or bioglasses for radio frequency transparency, or metals such as titanium or alloys thereof. In accordance with various aspects, the size and shape of the housing115are selected such that the neurostimulator105can be implanted within a patient. In the example shown inFIG.1, the connector120is attached to a hole in a surface of the housing115such that the housing115is hermetically sealed. The connector120may include one or more feedthroughs (i.e., electrically conductive elements, pins, wires, tabs, pads, etc.) mounted within a header and extending through the surface of the header from an interior to an exterior of the header. The power source125(e.g., a battery) may be within the housing115and connected (e.g., electrically connected) to the electronics module140to power and operate the components of the electronics module140. In some embodiments, the power source125and the electronics module140are surrounded by the housing115. The wireless power coil130may be outside the housing115and configured to receive electrical energy from the transmitting conductive structure112. In some embodiments, the wireless power coil130is attached to an outside surface of the housing115by a spacer142. The wireless power coil130is connected (e.g., electrically connected) to the power source125to provide the electrical energy to recharge or supply power to the power source125. The antenna135may be outside the housing115and connected (e.g., electrically connected) to the electronics module140for wireless communication with external devices via, for example, radiofrequency (RF) telemetry. In some embodiments, the electronics module140may be connected (e.g., electrically connected) to interior ends of the connector120such that the electronics module140is able to apply a signal or electrical current to conductive traces of the lead assembly110connected to exterior ends of the connector120. The electronics module140may include discrete and/or integrated electronic circuit components that implement analog and/or digital circuits capable of producing the functions attributed to the neuromodulation devices or systems such as applying or delivering neural stimulation to a patient.
In various embodiments, the electronics module140may include software and/or electronic circuit components such as a pulse generator145that generates a signal to deliver a voltage, current, optical, or ultrasonic stimulation to a nerve or artery/nerve plexus via electrodes, a controller150that determines or senses electrical activity and physiological responses via the electrodes and sensors, controls stimulation parameters of the pulse generator145(e.g., controls stimulation parameters based on feedback from the physiological responses), and/or causes delivery of the stimulation via the pulse generator145and electrodes, and a memory155with program instructions executable by the pulse generator145and the controller150to perform one or more processes for applying or delivering neural stimulation. In various embodiments, the lead assembly110is a monolithic structure that includes a cable or lead body160. In some embodiments, the lead assembly110further includes one or more electrode assemblies165having one or more electrodes170, and optionally one or more sensors. In some embodiments, the lead assembly110further includes a lead connector175. In certain embodiments, the lead connector175is a bonding material that bonds conductor material of the lead body160to the electronics module140of the implantable neurostimulator105via the connector120. The bonding material may be a conductive epoxy or a metallic solder or weld such as platinum. In other embodiments, the lead connector175is conductive wire, conductive traces, or bond pads (e.g., a wire, trace, or bond pads formed of a conductive material such as copper, silver, or gold) formed on a substrate and bonds a conductor of the lead body160to the electronics module140of the implantable neurostimulator105. In alternative embodiments, the implantable neurostimulator105and the lead body160are designed to connect with one another via a mechanical connector175such as a pin and sleeve connector, snap and lock connector, flexible printed circuit connectors, or other means known to those of ordinary skill in the art. The conductor material of the lead body160may be one or more conductive traces180formed on a supporting structure185. The one or more conductive traces180allow for electrical coupling of the electronics module140to the electrodes170and/or sensors of the electrode assemblies165. The supporting structure185may be formed with a dielectric material such as a polymer having suitable dielectric, flexibility and biocompatibility characteristics. Polyurethane, polycarbonate, silicone, polyethylene, fluoropolymer and/or other medical polymers, copolymers and combinations or blends may be used. The conductive material for the traces180may be any suitable conductor such as stainless steel, silver, copper or other conductive materials, which may have separate coatings or sheathing for anticorrosive, insulative and/or protective reasons. The electrode assemblies165may include the electrodes170and/or sensors fabricated using various shapes and patterns to create certain types of electrode assemblies (e.g., book electrodes, split cuff electrodes, spiral cuff electrodes, epidural electrodes, helical electrodes, probe electrodes, linear electrodes, neural probe, paddle electrodes, intraneural electrodes, etc.). In various embodiments, the electrode assemblies165include a base material that provides support for microelectronic structures including the electrodes170, a wiring layer, optional contacts, etc.
In some embodiments, the base material is the supporting structure185. The wiring layer may be embedded within or located on a surface of the supporting structure185. The wiring layer may be used to electrically connect the electrodes170with the one or more conductive traces180directly or indirectly via a lead conductor. The term "directly", as used herein, may be defined as being without something in between. The term "indirectly", as used herein, may be defined as having something in between. In some embodiments, the electrodes170may make electrical contact with the wiring layer by using the contacts. III. Wireless Power Transfer System FIG.2shows a wireless power transfer system200comprising a transmitting device205and a receiving device210spaced apart from one another by a distance (D). In some embodiments, the transmitting device205is connected to a power supply215such as a main power line. The transmitting device205is configured to convert input power (DC or AC electric current) from the power supply215into a wireless power transfer signal220. For example, the input power is converted into the wireless power transfer signal220by a first coupling device225. In some embodiments, the wireless power transfer signal220is a time varying electromagnetic field. The receiving device210is configured to receive the wireless power transfer signal220, convert the wireless power transfer signal220into an output power (AC or DC electric current), and deliver the output power to a load230(e.g., the power source125described with respect toFIG.1). For example, the wireless power transfer signal220is converted into the output power by a second coupling device235. Accordingly, the second coupling device235is configured to exchange power wirelessly with the first coupling device225via the wireless power transfer signal220. In some embodiments, the first coupling device225includes an optional oscillator240and a transmitting conductive structure245(e.g., a transmitting conductive structure112described with respect toFIG.1). In some embodiments, the transmitting conductive structure245is a transfer coil of wire configured to exchange power wirelessly via the wireless power transfer signal220. The oscillator240may be used to generate a high frequency AC current, which drives the transmitting conductive structure245to generate the wireless power transfer signal220such as the time varying or oscillating electromagnetic field. In some embodiments, the second coupling device235includes an optional rectifier250and a receiving conductive structure255(e.g., a receiving conductive structure130described with respect toFIG.1). In some embodiments, the receiving conductive structure255is a receiving coil of wire configured to exchange power wirelessly with the transmitting conductive structure245via the wireless power transfer signal220. The rectifier250may be used to convert the AC current induced at the receiving conductive structure255into DC current, which is delivered to the load230. In some embodiments, the transmitting conductive structure245and the receiving conductive structure255have a quality factor of greater than 50. In other embodiments, the transmitting conductive structure245and the receiving conductive structure255have a quality factor of greater than 100.
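For illustration only (the formulas below are standard circuit-theory definitions rather than relations recited in this disclosure, and the numeric values are hypothetical), the coupling factor and the quality-factor criteria discussed above can be sketched as follows:

import math

def coupling_factor(mutual_h, l_tx_h, l_rx_h):
    """Dimensionless coupling factor k = M / sqrt(L_tx * L_rx), with 0 <= k <= 1."""
    return mutual_h / math.sqrt(l_tx_h * l_rx_h)

def quality_factor(freq_hz, inductance_h, esr_ohm):
    """Q = 2*pi*f*L / R for a coil modeled as an inductance with series resistance."""
    return 2 * math.pi * freq_hz * inductance_h / esr_ohm

k = coupling_factor(mutual_h=0.9e-6, l_tx_h=2.2e-6, l_rx_h=1.2e-6)        # hypothetical values
q_rx = quality_factor(freq_hz=27.12e6, inductance_h=1.2e-6, esr_ohm=1.5)  # hypothetical ESR
print(f"k = {k:.2f}")                                    # about 0.55, i.e., greater than 0.5
print(f"Q_rx = {q_rx:.0f}, >50: {q_rx > 50}, >100: {q_rx > 100}")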
In some embodiments, the first coupling device225further includes a resonant circuit260which includes: (i) the transmitting conductive structure245connected to a capacitor265, (ii) the transmitting conductive structure245being a self-resonant coil; or (iii) another resonator (not shown) with internal capacitance. In some embodiments, the second coupling device235further includes a resonant circuit270which includes: (i) the receiving conductive structure255connected to a capacitor275, (ii) the receiving conductive structure255being a self-resonant coil; or (iii) another resonator (not shown) with internal capacitance. The first coupling device225and the second coupling device235are tuned to resonate at a same resonant frequency. The resonance between the transmitting conductive structure245and the receiving conductive structure255may increase coupling and enable more efficient power transfer. In various embodiments, the receiving conductive structure255is in a lossy environment280. As used herein "lossy" means having or involving the dissipation of electrical or electromagnetic energy. In some embodiments, the lossy environment280includes one or more lossy environmental factors or components285, which result in current loss during the wireless power transfer between the transmitting conductive structure245and the receiving conductive structure255. In some embodiments, the lossy environment280is an implantable medical device such as a neurostimulator as described with respect toFIG.1. In some embodiments, the one or more lossy environmental factors or components285include body fluid, body tissue, a lossy component of the implantable medical device, or a combination thereof. In certain embodiments, the lossy component of the medical device is a housing comprised of metal. In some embodiments, the metal is titanium or an alloy thereof. IV. Wireless Power Coil FIGS.3A,3B, and3Cshow an implantable device300(e.g., the implantable neurostimulator105described with respect toFIG.1) comprising a receiving conductive structure305(e.g., the receiving conductive structure255described with respect toFIG.2) in accordance with aspects of the present disclosure. In various embodiments, a size of the implantable device300is constrained small enough such that the device can be implanted in a less complex and minimally invasive manner, for example, through a delivery mechanism310. In some embodiments, the delivery mechanism310is another medical device (a medical device different from the implantable device300) comprising a lumen defined by a size constraint315. The implantable device300may be implanted in a patient through the lumen of the delivery mechanism310. In some embodiments, the implantable device300has a size including: (i) a width (w) of less than 24 mm, for example from 10 mm to 20 mm, (ii) a height (h) of less than 15 mm, for example from 5 mm to 13 mm, and (iii) a length (l) of less than 80 mm, for example from 20 mm to 40 mm. In various embodiments, the receiving conductive structure305is physically configured to exchange power wirelessly via a wireless power transfer signal and deliver the power to the power supply.
Physically configured means the receiving conductive structure305includes: (i) inductance and power receiving capability to meet the needs of the implantable device300including the ability to transfer power to the power source with at least an 8% overall efficiency; (ii) the mechanical dimensions (e.g., the height, width and length of the receiving conductive structure305) fit to the size constraint315of the delivery mechanism310for the implantable device300; (iii) the receiving conductive structure305is spaced apart from environmental factors to sufficiently avoid coupling of power to the environmental factors; and (iv) the receiving conductive structure305is biocompatible and of durable construction for the implanted environment. In some embodiments, the receiving conductive structure305is a receiving coil comprising wound wire. In certain embodiments, the wire is formed from a conductive material. The conductive material may be comprised of various metals or alloys thereof, for example, gold (Au), gold/chromium (Au/Cr), platinum (Pt), platinum/iridium (Pt/Ir), titanium (Ti), gold/titanium (Au/Ti), or any alloy thereof. In some embodiments, the coil has an inductance ranging from 0.5 uH to 50 uH or from 1 uH to 15 uH, for example about 1.2 uH. In some embodiments, the coil has a working frequency ranging from 1 MHz to 100 MHz or from 3 MHz to 50 MHz, for example about 27.12 MHz (ISM Standard Frequency). In some embodiments, the coil has a working voltage ranging from 5 V to 50 V or from 10 V to 35 V, for example about 25 V. In some embodiments, the wire of the coil has an American Wire Gauge (AWG) ranging from 25 AWG to 40 AWG or from 28 AWG to 37 AWG, for example 32 AWG. As used herein, the terms "substantially," "approximately" and "about" are defined as being largely but not necessarily wholly what is specified (and include wholly what is specified) as understood by one of ordinary skill in the art. In any disclosed embodiment, the term "substantially," "approximately," or "about" may be substituted with "within [a percentage] of" what is specified, where the percentage includes 0.1, 1, 5, and 10 percent. FIGS.3A,3B, and3Cshow the implantable device300may further comprise a lossy housing320and optionally a connector325attached to an electronics module through a hole330in the lossy housing320(e.g., the housing115and connector120described with respect toFIG.1). In various embodiments, an epoxy covers at least a portion of the implantable device300in order to hold the components together and protect the components from environmental factors such as biological fluid. The epoxy may be a resin comprising one or more low molecular weight pre-polymers, one or more higher molecular weight polymers, or combinations thereof, which comprise at least two epoxide groups. In some embodiments, the epoxy covers substantially all, if not the entirety, of the device300(e.g., the receiving conductive structure305, the lossy housing320, the connector325, and hole330are covered). In other embodiments, the epoxy covers select components of the device300but not all of the components (e.g., at least the receiving conductive structure305, the connector325, and the hole330are covered while the lossy housing is exposed). In some embodiments, the lossy housing320is comprised of materials that are biocompatible such as bioceramics or bioglasses for radio frequency transparency, or metals such as titanium or an alloy thereof.
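As a non-limiting illustration (the relation below is the standard LC resonance formula from circuit theory, not a value recited in this disclosure), the example coil values given above, about 1.2 uH operated at about 27.12 MHz, imply a tuning capacitance on the order of tens of picofarads:

import math

def resonance_capacitance(frequency_hz, inductance_h):
    """C = 1 / ((2*pi*f)**2 * L) for a simple LC resonant tank."""
    return 1.0 / ((2 * math.pi * frequency_hz) ** 2 * inductance_h)

c = resonance_capacitance(27.12e6, 1.2e-6)        # example coil values from above
print(f"tuning capacitance ~ {c * 1e12:.1f} pF")  # roughly 29 pF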
In some embodiments, the lossy housing320has a size including: (i) a width (w′) of less than 24 mm, for example from 10 mm to 20 mm, (ii) a height (h′) of less than 10 mm, for example from 5 mm to 9 mm, and (iii) a length (l′) of less than 80 mm, for example from 20 mm to 40 mm. As described herein, the lossy housing320may be an environmental factor that may influence performance of the receiving conductive structure305and thus the performance of the wireless power transfer system. In order to minimize the influence of the lossy housing320on the performance of the receiving conductive structure305, the receiving conductive structure305is spaced a predetermined distance (s) from the lossy housing320. However, the predetermined distance (s) is not boundless as in free space, and instead the predetermined distance (s) is bounded by one or more factors including the size of the implantable device300, the size of the lossy housing320, the size constraint315of the delivery mechanism310, an area335of the receiving conductive structure305, a requirement to minimize coupling of power from the receiving conductive structure305to the lossy housing320, and a requirement to limit a shift in the resonance frequency or decrease in the quality factor of the receiving conductive structure305. In some embodiments, the predetermined distance (s) is determined based on: (i) the size constraint315of the delivery mechanism310for the implantable device300, (ii) the size of the lossy housing320, (iii) the area335of the receiving conductive structure305, and (iv) a coupling factor between the receiving conductive structure305and the transmitting conductive structure of greater than 0.5. In some embodiments, the predetermined distance (s) is less than or equal to 5 mm, from 250 μm to 5 mm, from 250 μm to 20 mm, or from 500 μm to 15 mm, for example about 8 mm. As used herein, when an action or element is "triggered by" or "based on" something, this means the action or element is triggered or based at least in part on at least a part of the something. In some embodiments, the predetermined distance (s) provides a gap between the lossy housing320and the receiving conductive structure305on a vertical plane. In some embodiments, the predetermined distance (s) or gap between the receiving conductive structure305and the lossy housing320is maintained with a spacer or covering340that is comprised of a medical grade polymer material. In certain embodiments, the spacer or covering340fills in at least a portion of the gap to maintain the lossy housing320at the predetermined distance (s) from the receiving conductive structure305. In some embodiments, the spacer or covering340surrounds the receiving conductive structure305and fills in at least a portion of the gap created by the predetermined distance (s) between the receiving conductive structure305and the lossy housing320. In other embodiments, the spacer or covering340is attached to one or more surfaces of the receiving conductive structure305and fills in at least a portion of the gap created by the predetermined distance (s) between the receiving conductive structure305and the lossy housing320. The medical grade polymer may be thermosetting or thermoplastic. For example, the medical grade polymer may be a soft polymer such as silicone, a polymer dispersion such as latex, a chemical vapor deposited poly(p-xylylene) polymer such as parylene, or a polyurethane such as Bionate® Thermoplastic Polycarbonate-urethane (PCU) or CarboSil® Thermoplastic Silicone-Polycarbonate-urethane (TSPCU).
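The importance of preserving both the coupling factor and the quality factor when spacing the coil from the lossy housing, as discussed above and in the tradeoff described below, can be illustrated with a commonly used closed-form estimate of best-case two-coil link efficiency (a textbook result that assumes ideal tuning and load matching; it is not a formula from this disclosure, and the numbers are hypothetical):

import math

def max_link_efficiency(k, q_tx, q_rx):
    """Best-case efficiency for a figure of merit k * sqrt(Q_tx * Q_rx)."""
    fom = k * math.sqrt(q_tx * q_rx)
    return fom ** 2 / (1.0 + math.sqrt(1.0 + fom ** 2)) ** 2

# Hypothetical comparison: a well-spaced coil versus one whose coupling and quality
# factor are degraded by an adjacent lossy housing. Practical systems fall below
# these ideal figures because of rectifier, matching, and other losses.
print(f"spaced coil:   {max_link_efficiency(0.55, 100, 130):.1%}")
print(f"degraded coil: {max_link_efficiency(0.05, 100, 20):.1%}")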
FIG.3Cshows that determining the predetermined distance (s) involves a tradeoff between increasing the predetermined distance (s), which minimizes coupling of power from the receiving conductive structure305to the lossy housing320, while maintaining a sufficient area335for the receiving conductive structure305in the size constraint315of the delivery mechanism310to ultimately achieve a coupling factor between the receiving conductive structure305and the transmitting conductive structure of greater than 0.5. The coupling factor is generally determined by the distance (D) between the receiving conductive structure305and the transmitting conductive structure and the area encompassed by the receiving conductive structure305and the transmitting conductive structure. For example, the greater the amount of the wireless power transfer signal (e.g., the greater the amount of flux from the magnetic field) that reaches the receiving conductive structure305, the better the conductive structures are coupled and the higher the coupling factor. The amount of the wireless power transfer signal that reaches the receiving conductor structure305may be increased by increasing the area335of the receiving conductor structure305. However, the coupling factor may be decreased by the presence of an environmental factor such as the housing320, which may couple with the receiving conductive structure305and leach power that is being transferred to the receiving conductive structure305. As shown inFIG.3C, the implantable device300has a size configured to fit within the size constraint315of the delivery mechanism310. In some embodiments, the size of the implantable device300includes: (i) a width (w) of less than 24 mm, for example from 5 mm to 15 mm or about 6 mm, (ii) a height (h) of less than 15 mm, for example from 5 mm to 13 mm, and (iii) a length (l) of less than 80 mm, for example from 20 mm to 40 mm or about 35 mm. In certain embodiments, the size of the implantable device300includes a width (w) of less than 24 mm, a height (h) of less than 15 mm, and a length (l) of less than 80 mm. In some embodiments, the size of the lossy housing320includes: (i) a width (w′) of less than 24 mm, for example from 10 mm to 20 mm, (ii) a height (h′) of less than 10 mm, for example from 5 mm to 9 mm, and (iii) a length (l′) of less than 80 mm, for example from 20 mm to 40 mm. In certain embodiments, the size of the lossy housing320includes a width (w′) of less than 24 mm, a height (h′) of less than 10 mm, and a length (l′) of less than 80 mm. In some embodiments, the size constraint315of the delivery mechanism310includes: (i) a width (w″) of less than 30 mm, for example from 10 mm to 20 mm, (ii) a height (h″) of less than 30 mm, for example from 10 mm to 20 mm, and (iii) a length (l″) of less than 250 mm, for example from 40 mm to 100 mm. In certain embodiments, the size constraint315includes a width of less than 30 mm, a height of less than 30 mm, and a length of less than 250 mm. In various embodiments, the delivery mechanism310is a laparoscopic port. A laparoscopic port for a minimally invasive procedure such as implantation of the device300may be exemplified as a cannula device or a trocar. Trocars typically comprise an outer housing and seal assembly, a sleeve with a lumen that fits inside the housing and seal assembly and a piercing stylus (e.g., an obturator) which slots into the lumen such that the tip of the stylus protrudes from the lower end of the device.
The stylus may be used to create an opening in the abdominal wall through which the sleeve is inserted and fixed into place, following which the stylus is removed through an opening in the upper end of the device to allow insertion of a laparoscope or other surgical tools, or the device300in accordance with various aspects disclosed herein, through the lumen. A wide range of laparoscopic cannula devices and trocars exist having a variety of lengths and diameters. In some embodiments, the sleeve of the delivery mechanism310defines the size constraint315(e.g., the area of the lumen) of the delivery mechanism310. In some embodiments, the size constraint315has a circular cross-section A-A, as shown inFIG.3C. In certain embodiments, the size constraint315comprises a diameter (d) (width (w″)=height (h″)) of less than 30 mm, for example from 10 mm to 20 mm. While the circular cross-section of the size constraint315is described herein in particular detail with respect to several described embodiments, it should be understood that other shapes or cross-sections of the size constraint315have been contemplated without departing from the spirit and scope of the present invention. For example, the size constraint315may have an oval, rounded rectangle, semi-rectangular, obround, or semi obround shape or cross-section. As used herein, the term "semi-rectangular" or "semi-rectangular cross section" means a rounded rectangular portion overlaid onto a larger central circular portion, as shown inFIG.3D. As used herein, the term "rounded rectangle" or "rounded rectangular portion" means a shape obtained by taking the convex surface of four equal circles of radius r and placing their centers at the four corners of a rectangle with side lengths a and b and creating a perimeter p around the surface of the four equal circles and the rectangle, where the perimeter p of the shape is equal to 2(a+b+πr), as shown inFIG.3E. As used herein, the term "semi-obround" or "semi-obround cross section" means an obround portion overlaid onto a larger central circular portion, as shown inFIG.3F. As shown inFIG.3C, the receiving conductor structure305has area335defined by (ww)×(hh)×(ll). In some embodiments, the area335of the receiving conductor structure305is determined based on: (i) the size constraint315of the delivery mechanism310, (ii) the size of the lossy housing320, and (iii) the coupling factor between the receiving conductor structure305and the transmitting conductor structure of greater than 0.5. In some embodiments, the width (ww) is determined based on: (i) the width (w″) or the diameter (d) of the size constraint315. In some embodiments, the length (ll) is determined based on: (i) a length (l″) of the size constraint315. In some embodiments, the height (hh) is determined based on: (i) the height (h″) or the diameter (d) of the size constraint315, (ii) the height (h′) of the lossy housing, and (iii) the predetermined distance (s). In order to increase the maximum possible area335of the receiving conductor structure305to maintain the coupling factor between the receiving conductor structure305and the transmitting conductor structure of greater than 0.5 while also accommodating for the predetermined distance (s), the height (hh) of the receiving conductor structure305may be adjusted in a vertical direction, the width (ww) of the receiving conductor structure305may be adjusted in a horizontal direction, and the length (ll) may also be adjusted in a horizontal direction.
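By way of a non-limiting sketch (the vertical-stacking assumption, helper names, and numbers below are illustrative assumptions rather than dimensions from this disclosure), the height (hh) and envelope of the receiving conductor structure implied by the constraints listed above can be estimated as follows:

def max_coil_height(lumen_diameter_mm, housing_height_mm, spacing_mm):
    """Upper bound hh <= d - h' - s when the coil sits above the housing in the lumen."""
    return lumen_diameter_mm - housing_height_mm - spacing_mm

def coil_envelope_volume(ww_mm, hh_mm, ll_mm):
    """Envelope (ww x hh x ll) occupied by the receiving conductor structure."""
    return ww_mm * hh_mm * ll_mm

hh = max_coil_height(lumen_diameter_mm=20.0, housing_height_mm=9.0, spacing_mm=2.0)
print(f"maximum coil height ~ {hh} mm")                          # 9.0 mm with these example numbers
print(f"envelope volume ~ {coil_envelope_volume(15.0, hh, 35.0)} mm^3")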
As shown inFIG.4A, in order to increase the maximum possible area of the receiving conductor structure400(e.g., the receiving conductor structure305described with respect toFIGS.3A,3B, and3C), the receiving conductor structure400may be formed in a three-dimensional manner rather than the conventional two-dimensional or planar coil. Testing has revealed that a three-dimensional coil is capable of maintaining sufficient coupling (i.e., the coupling factor between the receiving conductor structure400and the transmitting conductor structure of greater than 0.5) and power transfer with the transmitting conductor structure in such an enlarged area. In some embodiments, the receiving conductor structure400is a three-dimensional spiral or helix. The helix includes characteristics designed to maximize the area of the receiving conductor structure400in view of: (i) the size constraint of the delivery mechanism, (ii) the size of the lossy housing, and (iii) the coupling factor between the receiving conductor structure400and the transmitting conductor structure of greater than 0.5. In some embodiments, the characteristics of the helix include a shape405, a number of turns410, a pitch415(rise of the helix for one turn), a helix angle420, a helix length425(a length of the coil), a total rise430of the helix (overall coil height (hh)), a width (ww), or combinations thereof. In some embodiments, the shape405of the coil is rounded rectangular. However, it should be understood that other shapes of the coil have been contemplated without departing from the spirit and scope of the present invention. For example, the shape of the coil may be square, rectangular, circular, obround, etc. In some embodiments, the helix has greater than 2 turns or from 4 to 30 turns or from 4 to 15 turns, for example 9 turns, and a pitch between each of the turns from 10 μm to 1 cm or from 250 μm to 2 mm, for example about 500 μm. In some embodiments, the pitch between turns is the same or different. In some embodiments, the helix angle is from 5° to 85°, from 5° to 45°, or from 7° to 25°, for example, about 20°. In some embodiments, the helix length is from 2 cm to 100 cm or 25 cm to 75 cm, e.g., about 50 cm, from a first end435to a second end440. In some embodiments, the total rise or overall coil height (hh) is less than 15 mm, for example from 5 mm to 13 mm. As shown inFIGS.4B,4C,4D, and4E, a width (ww) of each of the turns410may be adjusted based on the position of the receiving conductor structure400in the delivery mechanism445and the size constraint450of the delivery mechanism445. In various embodiments, a width (ww) of each of the turns410is less than or equal to a width of the lossy housing (e.g., the width (w″) or the diameter (d) of the size constraint315). In some embodiments, a width (ww′) of the first turn455is less than a width (ww″) of the last turn460in order to accommodate the curvature (i.e., the size constraint450) of the delivery mechanism445. In some embodiments, depending on the size constraint450of the delivery mechanism445and the predetermined distance (s), the turns465between the first turn455and the last turn460have a sequential increase in width (ww) from the first turn455to the last turn460such that a shape of the receiving conductor structure400is a pyramid (see, e.g.,FIGS.4B and4C). In other embodiments, the width (ww′) of the first turn455is substantially the same as the width (ww″) of the last turn460in order to accommodate the curvature (i.e., the size constraint450) of the delivery mechanism445.
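One way to picture why the turn widths may taper toward one end of the helix, as described above, is to treat each turn's width as limited by the chord available at that turn's height inside a circular lumen (this geometric sketch and its numbers are illustrative assumptions, not dimensions from this disclosure):

import math

def chord_width(lumen_diameter_mm, height_from_bottom_mm):
    """Lateral width available at a given height inside a circular lumen."""
    r = lumen_diameter_mm / 2.0
    offset = abs(height_from_bottom_mm - r)
    return 0.0 if offset >= r else 2.0 * math.sqrt(r * r - offset * offset)

lumen_d = 20.0                 # hypothetical lumen diameter (mm)
pitch = 0.5                    # example pitch between turns (mm)
first_turn_height = 1.0        # assumed offset of the first turn from the lumen wall (mm)
for turn in range(9):          # example 9-turn helix
    y = first_turn_height + turn * pitch
    print(f"turn {turn}: width limit ~ {chord_width(lumen_d, y):.1f} mm")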
In some embodiments, depending on the size constraint450of the delivery mechanism445and the predetermined distance (s), the turns465between the first turn455and the last turn460have the same, a smaller, or a larger width (ww) than that of the first turn455or the last turn460such that a shape of the receiving conductor structure is configured to fit within the size constraint450of the delivery mechanism445(see, e.g.,FIGS.4D and4E). In various embodiments, the number of turns410and the helix length425are increased to maximize the area occupied by the receiving conductive structure400. In some embodiments, the number of turns410and the helix length425are increased by adjusting the pitch415, the helix angle420, and the total rise430. In some embodiments, as shown inFIG.4F, the receiving conductor structure400is a helical structure with a total rise430or height that is determined based on: (i) a first pitch470between a first turn472and a second turn475of the receiving conductor structure400; (ii) a second pitch480between a last turn482and a second to last turn485of the receiving conductor structure400; and (iii) a third pitch490between remaining turns495between the second turn475and the second to last turn485. The total rise430or height may be determined further based on the size constraint450of the delivery mechanism445and a size of the implantable device497, in particular, the height of the implantable device497. For example, the total rise430or height of the receiving conductor structure400may be determined to be less than the difference between the diameter or height of the delivery mechanism445and the height of the implantable device497. In some embodiments, the first pitch470and the second pitch480are from 10 μm to 3 mm or from 250 μm to 2 mm, for example about 500 μm; and the third pitch490is from 500 μm to 1 cm or from 1 mm to 3 mm, for example about 2 mm. In some embodiments, the first pitch470and the second pitch480are less than the third pitch490. In some embodiments, the first pitch470is the same as the second pitch480. In other embodiments, the first pitch470is different from the second pitch480. Accordingly, by adjusting the width (ww) of each turn410and increasing the total rise430or height of the receiving conductive structure400it is possible to increase the number of turns410and the helix length425to maximize the area occupied by the receiving conductive structure400. The area occupied by the receiving conductive structure400is maximized while fitting the receiving conductive structure400within the sizing constraint450of the delivery mechanism445even with the predetermined distance (s) between the lossy housing498and the receiving conductive structure400. As shown inFIG.5A, in order to increase the maximum possible area of the receiving conductor structure500(e.g., the receiving conductor structure305described with respect toFIGS.3A,3B, and3C), the receiving conductor structure500may be formed as a two-dimensional or planar coil505. As shown inFIG.5B, the two-dimensional or planar coil505may be rolled up into a three-dimensional structure510. In various embodiments, the two-dimensional or planar coil505is rolled up into a three-dimensional structure510that is capable of fitting within the delivery mechanism515in view of: (i) the size constraint520of the delivery mechanism515and (ii) the size of the lossy housing525(see, e.g.,FIGS.5C and5D).
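As a rough, non-limiting check (assuming the planar sheet is rolled into approximately one full turn and is stacked above the lossy housing within the lumen; the helper and the numbers are illustrative assumptions, not dimensions from this disclosure), the rolled-up diameter of the two-dimensional or planar coil505and its fit within the lumen can be estimated from the width of the planar structure:

import math

def rolled_diameter(planar_width_mm):
    """Diameter of a sheet of the given width rolled into roughly one full turn."""
    return planar_width_mm / math.pi

planar_width = 30.0            # hypothetical width of the planar coil (mm)
lumen_diameter = 20.0          # hypothetical lumen diameter (mm)
housing_height = 9.0           # hypothetical lossy housing height (mm)
d_rolled = rolled_diameter(planar_width)
print(f"rolled-up diameter ~ {d_rolled:.1f} mm")
# assumed stacking check: rolled coil plus housing must fit across the lumen
print("fits alongside housing:", d_rolled + housing_height <= lumen_diameter)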
In some embodiments, a size of the three-dimensional structure is determined based on: (i) a size constraint of the delivery mechanism515for the implantable device, (ii) a size of the lossy housing525, (iii) an area of the receiving conductor structure, and (iv) a coupling factor between the receiving conductor structure and a transmitting conductor structure of greater than 0.5. By rolling up the two-dimensional or planar coil505into the three-dimensional structure510it is possible to deliver the two-dimensional or planar coil505via the delivery mechanism515to an implant site. As shown inFIG.5E, once the implantable device530has been delivered to the implant site via the delivery mechanism515, the three-dimensional structure510is capable of being unfurled back into the two-dimensional or planar coil505. Testing has revealed that once the two-dimensional or planar coil is unfurled it is capable of maintaining sufficient coupling (i.e., the coupling factor between the receiving conductor structure500and the transmitting conductor structure of greater than 0.5) and power transfer with the transmitting conductor structure in such an enlarged area. In various embodiments, the receiving conductor structure500comprises a substrate535. In some embodiments, the substrate535is comprised of one or more layers of dielectric material (i.e., an insulator). The dielectric material may be selected from the group of electrically nonconductive materials consisting of organic or inorganic polymers, ceramics, glass, glass-ceramics, polyimide-epoxy, epoxy-fiberglass, and the like. In certain embodiments, the dielectric material is a polymer of imide monomers (i.e., a polyimide), a liquid crystal polymer (LCP) such as Kevlar®, parylene, polyether ether ketone (PEEK), or combinations thereof. In some embodiments, one or more conductive traces or wirings540are formed on a portion of the substrate535. As used herein, the term "formed on" refers to a structure or feature that is formed on a surface of another structure or feature, a structure or feature that is formed within another structure or feature, or a structure or feature that is formed both on and within another structure or feature. In various embodiments, the one or more conductive traces540are a plurality of traces, for example, two or more conductive traces or from two to twenty-four conductive traces. The plurality of conductive traces540are comprised of one or more layers of conductive material. The conductive material selected for the one or more conductive traces540should have good electrical conductivity and may include pure metals, metal alloys, combinations of metals and dielectrics, and the like. For example, the conductive material may be gold (Au), gold/chromium (Au/Cr), platinum (Pt), platinum/iridium (Pt/Ir), titanium (Ti), gold/titanium (Au/Ti), or any alloy thereof. The one or more conductive traces540may be deposited onto a surface of the substrate535by using thin film deposition techniques well known to those skilled in the art such as by sputter deposition, chemical vapor deposition, metal organic chemical vapor deposition, electroplating, electroless plating, and the like. In some embodiments, the thickness of the one or more conductive traces540is dependent on the particular inductance desired for receiving conductor structure500, in order to enlarge the area of the receiving conductor structure500.
In certain embodiments, each of the one or more conductive traces540has a thickness from 0.5 μm to 100 μm or from 25 μm to 50 μm, for example about 25 μm or about 40 μm. In some embodiments, each of the one or more conductive traces540has a length (m) of about 5 cm to 200 cm or 50 cm to 150 cm, e.g., about 80 cm. In some embodiments, the conductive traces540are interconnected and connected to the implantable neurostimulator using one or more vias545or wiring layers formed within the substrate535. In various embodiments, the conductive traces540are formed with a predetermined shape to enlarge the area of the receiving conductor structure500. For example, the receiving conductor structure500may comprise the one or more conductive traces or wirings540formed on the substrate535in a spiral shape. The spiral shape545may include characteristics designed to maximize the area of the receiving conductor structure500that can be fabricated on the substrate535and fit within the size constraint520of a delivery mechanism515while also taking into consideration a size of the lossy housing525of an implantable device530. In some embodiments, the characteristics of the spiral shape include a predetermined number of turns550and a predetermined pitch555between each of the turns550to maximize the overall area obtainable for the receiving conductor structure500. In certain embodiments, the spiral shape has 2 or more turns550, for example from 2 to 25 turns, and a pitch555between each of the turns from 10 μm to 1 cm or from 250 μm to 2 mm, for example about 350 μm. Accordingly, the spiral shape can maximize the area of the receiving conductor structure500that can be fabricated from the substrate. While the invention has been described in detail, modifications within the spirit and scope of the invention will be readily apparent to the skilled artisan. It should be understood that aspects of the invention and portions of various embodiments and various features recited above and/or in the appended claims may be combined or interchanged either in whole or in part. In the foregoing descriptions of the various embodiments, those embodiments which refer to another embodiment may be appropriately combined with other embodiments as will be appreciated by the skilled artisan. Furthermore, the skilled artisan will appreciate that the foregoing description is by way of example only, and is not intended to limit the invention.
50,227
11857798
DETAILED DESCRIPTION In some examples, this disclosure describes example techniques related to controlling the sensing of cardiac electrical signals and/or the delivery of cardiac therapy (e.g., cardiac pacing or anti-tachyarrhythmia shocks) by a medical device system based on a current heart position state of the patient. In some examples, processing circuitry of a medical device, or other processing circuitry of the medical device system, may determine a current heart position state of the patient from a plurality of heart position states stored in a memory of the medical device system in association with a respective modification of at least one of a plurality of cardiac therapy and/or sensing parameters. According to some example techniques described herein, the processing circuitry also may modify the at least one cardiac therapy or sensing parameter value according to the modification associated with the current heart position state, and control the delivery of the cardiac therapy according to the modified at least one cardiac therapy parameter value. In some examples, the processing circuitry may modify an electrode vector that includes at least two of a plurality of electrodes of the medical device system, based on the current heart position state, and control the medical device system to at least sense a cardiac electrogram or deliver cardiac therapy via the modified electrode vector. Each of the heart position states may be associated with one or more postures of the patient in a memory of the medical device system. For example, a heart position state in which the heart is more caudal (e.g., relative to a baseline position) may be associated with a sitting, standing, or otherwise upright posture. Thus, the processing circuitry may determine the current heart position state of the patient by determining a current posture of the patient. Additionally, or alternatively, each of the heart position states may be associated with a respiratory state of the patient, such as at least one of an inhalation phase, a respiratory rate, or a respiratory depth of the patient. For example, a heart position state in which the heart is more caudal, relative to a baseline position, may be associated with one or more of an inhalation phase, an elevated respiratory rate, and/or an increased respiratory depth (e.g., relative to baseline or other threshold respiration values). In such examples, the processing circuitry may determine a respiratory state of the patient based on signals received by one or more sensors of the medical device system, as discussed below with respect toFIGS.1A-2. In some examples, the processing circuitry may further determine that the respiratory state of the patient is at least one of an inhalation phase, a respiratory depth satisfying a respiratory depth threshold, or a respiratory rate satisfying a respiratory rate threshold. In examples in which the processing circuitry determines the current heart position state based on both a posture and a respiratory state, the determined posture and the determined respiratory state, taken together, may be associated with a different heart position state than if taken separately. For example, when the patient is in an upright posture and breathing deeply, the processing circuitry may determine that the patient's heart position state is different (e.g., because the heart may be more caudal) than when the patient is upright but not breathing deeply or lying down and breathing deeply.
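As a non-limiting illustration (the state names, thresholds, and mapping below are assumptions introduced for explanation and are not part of this disclosure), the combination of a determined posture and a determined respiratory state described above may be mapped to a current heart position state along the following lines:

def heart_position_state(posture, inhalation_phase, respiratory_depth, depth_threshold=1.0):
    """Map posture plus respiratory state to an assumed heart position state label."""
    deep_breathing = inhalation_phase and respiratory_depth >= depth_threshold
    if posture == "upright" and deep_breathing:
        return "caudal_large"        # heart displaced most from the baseline position
    if posture == "upright" or deep_breathing:
        return "caudal_moderate"
    return "baseline"

print(heart_position_state("upright", True, 1.4))    # caudal_large
print(heart_position_state("supine", True, 1.4))     # caudal_moderate
print(heart_position_state("supine", False, 0.4))    # baseline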
To adapt cardiac therapy to the patient's heart position state, the processing circuitry may modify the at least one cardiac therapy parameter value according to a modification associated with the patient's current heart position state by modifying a tachyarrhythmia detection parameter. For example, the tachyarrhythmia detection parameter may be a threshold heart rate (e.g., a certain number of beats per minute over a baseline heart rate) that, if satisfied, may indicate a tachyarrhythmia. However, the patient may have a different (e.g., higher) baseline heart rate during inhalation than during exhalation. If the tachyarrhythmia detection threshold is not adjusted to account for this difference in baseline heart rate during inhalation and exhalation, a false-positive detection of tachyarrhythmia may occur during inhalation. Thus, the processing circuitry may modify the tachyarrhythmia detection threshold by increasing the threshold when the patient's heart position state corresponds to inhalation, which may improve an accuracy of tachyarrhythmia detection during inhalation and reduce a possibility of delivering unnecessary anti-tachyarrhythmia shocks, which may be uncomfortable for the patient or may unnecessarily deplete a power source of the medical device system. Other examples of tachyarrhythmia detection parameters that may be modified based on the heart position state include an amplitude threshold of the cardiac electrogram used to detect features, such as R-waves or P-waves, of the cardiac electrogram, or a cardiac electrogram morphology parameter, such as a template used to distinguish treatable tachyarrhythmias from other tachyarrhythmias (e.g., supra-ventricular tachyarrhythmias). In some examples, the processing circuitry may modify the at least one cardiac therapy parameter value according to the modification associated with the patient's current heart position state by modifying a cardiac electrogram sensing parameter, such as an amplitude threshold. For example, a baseline cardiac electrogram sensing amplitude threshold may be selected to enable sensing of a desired portion of a sensed cardiac electrogram (e.g., an R-wave), when one or more sensing electrodes are positioned approximately over a target portion of the heart such as a ventricle. However, when the patient's heart position is caudal to a baseline position, such as when the patient is upright and/or inhaling, one or more other portions of the sensed electrogram (e.g., a T-wave) may be more prominent. In some such examples, oversensing of a T-wave may result in an increased possibility of a false-positive detection of tachyarrhythmia. Thus, it may be beneficial to increase a sensing threshold amplitude associated with such other portions of the cardiac electrogram to reduce a possibility of false-positive tachyarrhythmia detection. For example, the processing circuitry may modify the cardiac electrogram sensing amplitude threshold by increasing a cardiac electrogram sensing amplitude threshold when the patient's heart position is more caudal. In some examples, modifying the cardiac electrogram sensing threshold may improve an accuracy of tachyarrhythmia detection during inhalation, which may reduce a possibility of delivering unnecessary anti-tachyarrhythmia shocks that may be uncomfortable for the patient or may unnecessarily deplete a power source of the medical device system.
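For illustration only (the table entries, offsets, and scaling factors below are hypothetical and are not values from this disclosure), the per-state modification of a tachyarrhythmia rate threshold and of a cardiac electrogram sensing amplitude threshold described above could be represented as follows:

STATE_MODIFICATIONS = {
    # heart position state: (rate-threshold offset in bpm, sensing-threshold scale factor)
    "baseline":        (0,  1.0),
    "caudal_moderate": (10, 1.1),
    "caudal_large":    (20, 1.25),
}

def modified_detection_parameters(state, base_rate_bpm=180, base_sense_mv=0.5):
    """Apply the modification associated with the current heart position state."""
    rate_offset, sense_scale = STATE_MODIFICATIONS[state]
    return base_rate_bpm + rate_offset, base_sense_mv * sense_scale

rate, sense = modified_detection_parameters("caudal_large")
print(f"tachyarrhythmia rate threshold: {rate} bpm, sensing amplitude threshold: {sense:.2f} mV")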
In some examples, the processing circuitry may modify the at least one cardiac therapy parameter value according to the modification associated with the patient's current heart position state by modifying an anti-tachyarrhythmia shock or pacing pulse magnitude (which may be a pulse amplitude, width, or energy). For example, a baseline anti-tachyarrhythmia shock or pacing pulse magnitude may be selected to effectively treat a tachyarrhythmia or maintain pacing capture when one or more defibrillation or pacing electrodes are positioned approximately over the heart. However, when the patient's heart position is caudal to a baseline position, such as when the patient is upright and/or inhaling, one or more of the defibrillation or pacing electrodes may no longer be positioned approximately over the heart. In some examples, such movement of the heart away from the electrodes may reduce an efficacy of an anti-tachyarrhythmia shock or may result in loss of pacing capture. In such examples, the processing circuitry may modify the cardiac therapy parameter by increasing an anti-tachyarrhythmia shock magnitude or increasing an amplitude of one or more pacing pulses when the patient's heart position state corresponds to one or more postures or respiratory states associated with a more caudal (e.g., relative to a baseline) heart position. By accounting for the movement of the patient's heart relative to the defibrillation or pacing electrodes, the medical device system may deliver effective anti-tachyarrhythmia shock therapy or maintain pacing capture even when the heart moves away from the electrodes. In some examples in which the processing circuitry may modify the at least one cardiac therapy parameter value according to a modification associated with a patient's current heart position state, the anti-tachyarrhythmia shock therapy parameters may be a sensing vector that includes at least two of a plurality of electrodes of a lead coupled to the medical device and a shock vector that includes at the least two of the plurality of electrodes. As discussed above, when the patient's heart position is caudal to a baseline position, one or more electrodes (e.g., sensing and/or defibrillation electrodes) positioned on a lead may no longer be approximately over the heart. In such examples, the processing circuitry may account for the position of the heart by removing one or more of the electrodes not positioned approximately over the heart from the electrode vector, as such electrodes may not be positioned to deliver sufficient energy to the heart or sense cardiac electrical signals when the heart is more caudal. For example, the one or more electrodes removed from the sensing vector or the shock vector may be one or more electrodes positioned on a distal portion of the lead. In some examples, an anti-tachyarrhythmia shock delivered by the medical device using such a modified shock vector may improve an efficacy of anti-tachyarrhythmia shock therapy, such as by increasing shock impedance and more efficiently directing and sustaining delivery of energy to the heart via the electrodes of the modified shock vector. In some techniques in which the processing circuitry may control the medical device system to at least sense a cardiac electrogram or deliver cardiac therapy via a modified electrode vector, the processing circuitry may control the medical device system to deliver cardiac therapy by controlling the medical device system to deliver cardiac pacing via the modified vector. 
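The following sketch illustrates one way the shock magnitude modification described above might be represented, assuming a hypothetical per-state energy increment table programmed by a clinician; the values are placeholders, not device specifications.

```python
# Hypothetical per-state shock energy increments (in joules), selected by a
# clinician and stored in device memory in association with each state.
SHOCK_ENERGY_INCREMENT_J = {
    "baseline": 0.0,
    "caudal": 5.0,
    "more_caudal": 10.0,
}

def shock_energy_for_state(baseline_energy_j: float, heart_state: str) -> float:
    """Increase the anti-tachyarrhythmia shock energy when the heart is more
    caudal and the defibrillation electrodes may no longer overlie it."""
    return baseline_energy_j + SHOCK_ENERGY_INCREMENT_J.get(heart_state, 0.0)

# Example: a 65 J baseline shock becomes 75 J in the most caudal state.
print(shock_energy_for_state(65.0, "more_caudal"))  # 75.0
```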
As discussed above, when the patient's heart is positioned caudal to a baseline position, one or more electrodes positioned on a lead (e.g., sensing and/or pacing electrodes positioned on a distal portion of the lead) may no longer be positioned approximately over a portion of the heart to which it may be desirable to deliver pacing pulses, such as a ventricle. Instead, for example, one or more of the electrodes may be positioned over an atrium of the heart when the heart position is caudal to a baseline position. Pacing pulses delivered via one or more electrodes positioned over a non-target portion of the heart may not contribute to pacing efficacy and may reduce energy efficiency of the medical device system. Thus, it may be desirable to remove such electrodes from a pacing vector when the heart is positioned more caudal to improve pacing efficiency. In any of the example techniques described herein, the modifications to the cardiac therapy or sensing parameters and/or electrode vectors may be selected by a clinician and programmed into a memory of the medical device system. Such modifications may be selected for an individual patient, as the amount and/or direction that a patient's heart may move with changes in posture and/or respiration may vary between patients. Thus, the clinician may select the modifications based on a magnitude and/or direction of movement of a particular patient's heart with changes in patient posture and/or respiration. For example, patients with smaller hearts (e.g., younger and/or female patients) may have less heart movement than patients with larger hearts (e.g., older and/or male patients). In some examples, hearts of younger patients may pivot with changes in posture and/or respiration. Thus, the clinician may select the modifications based on one or more of a gender, age, or size (e.g., height and/or weight) of the patient. In some examples, the clinician may directly observe (e.g., via fluoroscopy) an amount and/or direction of movement of the patient's heart with changes in posture and/or respiration. In such examples, the medical device system may automatically or semi-automatically modify values of one or more cardiac therapy parameters, cardiac sensing parameters, or electrode vectors as the patient's heart moves with changes in patient posture and/or respiration. Thus, the techniques described herein may improve the efficacy and efficiency of the cardiac therapy delivery by more accurately directing energy to target portions of the heart and by reducing the delivery of energy to non-target locations. A medical device of a medical device system used in some of the example techniques may be an IMD configured for implantation within the patient, such as substernally or subcutaneously, and may be configured to sense cardiac electrical signals and deliver cardiac therapy via at least one electrode of the IMD. In other examples, a medical device of a medical device system used in some of the example techniques may be an external medical device (e.g., not configured for implantation within the patient) configured to sense cardiac electrical signals and deliver anti-tachyarrhythmia shocks via at least one electrode of the medical device. FIGS.1A-1Care conceptual diagrams of a medical device system10implanted within a patient8.FIG.1Ais a front view of medical device system10implanted within patient8.FIG.1Bis a side view of medical device system10implanted within patient8.FIG.1Cis a transverse view of medical device system10implanted within patient8. 
In some examples, the medical device system10is an extravascular implantable cardioverter-defibrillator (EV-ICD) system implanted within patient8. However, the techniques described herein may be applicable to other implanted and/or external cardiac systems, including cardiac pacemaker systems, cardiac resynchronization therapy defibrillator (CRT-D) systems, cardioverter systems, wearable automated external defibrillator (WAED) systems, or combinations thereof, as well as other stimulation and/or sensing systems, such as neurostimulation systems. In addition, system10may not be limited to treatment of a human patient. In alternative examples, system10may be implemented in non-human patients, such as primates, canines, equines, pigs, bovines, ovines, felines, or the like. These other animals may undergo clinical or research therapies that may benefit from the subject matter of this disclosure. IMD12is configured to be implanted in a patient, such as patient8. In some examples, IMD12is implanted subcutaneously or submuscularly on the left midaxillary of patient8, such that IMD12may be positioned on the left side of patient8above the ribcage. In some other examples, IMD12may be implanted at other subcutaneous locations on patient8such as at a pectoral location or abdominal location. IMD12includes housing20that may form a hermetic seal that protects components of IMD12. In some examples, housing20of IMD12may be formed of a conductive material, such as titanium, or of a combination of conductive and non-conductive materials, which may function as a housing electrode. IMD12may also include a connector assembly (also referred to as a connector block or header) that includes electrical feedthroughs through which electrical connections are made between lead22and electronic components included within the housing. Housing20may house one or more of processing circuitry, memories, transmitters, receivers, sensors, sensing circuitry, therapy circuitry, power sources and other appropriate components. In general, medical device systems (e.g., system10) may include one or more medical devices, leads, external devices, or other components configured to implement the techniques described herein. In the illustrated example, IMD12is connected to at least one implantable cardiac lead22. In other examples, two leads may be used. In some examples, IMD12may be configured to deliver high-energy anti-tachyarrhythmia (e.g., cardioversion or defibrillation) shocks to patient's heart18when a ventricular tachyarrhythmia, e.g., ventricular tachycardia (VT) or ventricular fibrillation (VF), is detected. Cardioversion shocks are typically delivered in synchrony with a detected R-wave when fibrillation detection criteria are met. Defibrillation shocks are typically delivered when fibrillation criteria are met, and the R-wave cannot be discerned from signals sensed by IMD12. Lead22includes an elongated lead body having a proximal end that includes a connector (not shown) configured to be connected to IMD12and a distal portion that includes electrodes32A,32B,34A, and34B. Lead22extends subcutaneously above the ribcage from IMD12toward a center of the torso of patient8. At a location near the center of the torso, lead22bends or turns and extends intrathoracically superior under/below sternum24. Lead22thus may be implanted at least partially in a substernal space, such as at a target site between the ribcage or sternum24and heart18. 
In one such configuration, a proximal portion of lead22may be configured to extend subcutaneously from IMD12toward sternum24and a distal portion of lead22may be configured to extend superior under or below sternum24in the anterior mediastinum26(FIG.1C). Lead22may include one or more curved sections as discussed herein to configure lead22to naturally (e.g., in a self-biasing manner) extend in this way upon deployment. For example, lead22may extend intrathoracically superior under/below sternum24within anterior mediastinum26. Anterior mediastinum26may be viewed as being bounded posteriorly by pericardium16, laterally by pleurae28, and anteriorly by sternum24. In some examples, the anterior wall of anterior mediastinum26may also be formed by the transversus thoracis and one or more costal cartilages. Anterior mediastinum26includes a quantity of loose connective tissue (such as areolar tissue), some lymph vessels, lymph glands, substernal musculature (e.g., transverse thoracic muscle), and small vessels or vessel branches. In one example, the distal portion of lead22may be implanted substantially within the loose connective tissue and/or substernal musculature of anterior mediastinum26. In such examples, the distal portion of lead22may be physically isolated from pericardium16of heart18. A lead implanted substantially within anterior mediastinum26will be referred to herein as a substernal lead. Electrical stimulation, such as anti-arrhythmia pacing, cardioversion or defibrillation, provided by lead22implanted substantially within anterior mediastinum26may be referred to herein as substernal electrical stimulation, substernal pacing, impedance monitoring, substernal cardioversion, or substernal defibrillation. The distal portion of lead22is described herein as being implanted substantially within anterior mediastinum26. Thus, some of distal portion of lead22may extend out of anterior mediastinum26(e.g., a proximal end of the distal portion), although much of the distal portion may be positioned within anterior mediastinum26. In other embodiments, the distal portion of lead22may be implanted intrathoracically in other non-vascular, extra-pericardial locations, including the gap, tissue, or other anatomical features around the perimeter of and adjacent to, but not attached to, the pericardium16or other portion of heart18and not above sternum24or the ribcage. As such, lead22may be implanted anywhere within the “substernal space” defined by the undersurface between the sternum and/or ribcage and the body cavity but not including pericardium16or other portions of heart18. The substernal space may alternatively be referred to by the terms “retrosternal space” or “mediastinum” or “infrasternal” as is known to those skilled in the art and includes the anterior mediastinum26. The substernal space may also include the anatomical region described in Baudoin, Y. P., et al., entitled “The superior epigastric artery does not pass through Larrey's space (trigonum sternocostale).” Surg.Radiol.Anat. 25.3-4 (2003): 259-62 as Larrey's space. In other words, the distal portion of lead22may be implanted in the region around the outer surface of heart18, but not attached to heart18. For example, the distal portion of lead22may be physically isolated from pericardium16. Lead22may include an insulative lead body having a proximal end that includes connector30configured to be connected to IMD12and a distal portion that includes one or more electrodes. 
As shown inFIG.1A, the one or more electrodes of lead22may include electrodes32A,32B,34A, and34B, although in other examples, lead22may include more or fewer electrodes. Lead22also includes one or more conductors that form an electrically conductive path within the lead body and interconnect the electrical connector and respective ones of the electrodes. Electrodes32A,32B may be defibrillation electrodes (individually or collectively "defibrillation electrode(s)32"). Although electrodes32may be referred to herein as "defibrillation electrodes32," electrodes32may be configured to deliver other types of anti-tachyarrhythmia shocks, such as cardioversion shocks. In some examples, defibrillation electrodes32A,32B may functionally be different sections of a single defibrillation electrode32, such that both defibrillation electrodes32are coupled to the same conductor or are otherwise configured to provide the same electrical stimulation. Though defibrillation electrodes32are depicted inFIGS.1A-1C as elongated coil electrodes for purposes of clarity, it is to be understood that defibrillation electrodes32may be of other configurations in other examples, such as a flat ribbon or paddle electrode. Defibrillation electrodes32may be located on the distal portion of lead22, where the distal portion of lead22is the portion of lead22that is configured to be implanted as extending along the sternum24. Lead22may be implanted at a target site below or along sternum24such that a therapy vector is substantially across a ventricle of heart18. In some examples, a therapy vector (e.g., a shock vector for delivery of anti-tachyarrhythmia shock) may be between defibrillation electrodes32and a housing electrode formed by or on IMD12, as discussed further below. The therapy vector may, in one example, be viewed as a line that extends from a point on defibrillation electrodes32(e.g., a center of one of the defibrillation electrodes32) to a point on a housing electrode of IMD12. As such, it may be advantageous to increase an amount of area across which defibrillation electrodes32(and therein the distal portion of lead22) extends across heart18. Accordingly, lead22may be configured to define a curving distal portion as depicted inFIG.1A. In some examples, the curving distal portion of lead22may help improve the efficacy and/or efficiency of pacing, sensing, and/or defibrillation to heart18by IMD12, in addition to the techniques for controlling the delivery of cardiac therapy described herein. Electrodes34A,34B may be pace/sense electrodes34A,34B (individually or collectively, "pace/sense electrode(s)34") located on the distal portion of lead22. Electrodes34are referred to herein as pace/sense electrodes as they generally are configured for use in delivery of pacing pulses and/or sensing of cardiac electrical signals. In some instances, electrodes34may provide only pacing functionality, only sensing functionality, or both pacing functionality and sensing functionality. In the example illustrated inFIG.1AandFIG.1B, pace/sense electrodes34are separated from one another by defibrillation electrode32B. In other examples, however, pace/sense electrodes34may be both distal of defibrillation electrode32B or both proximal of defibrillation electrode32B. In examples in which lead22includes more or fewer electrodes32,34, such electrodes may be positioned at other locations on lead22. In some examples, IMD12may include one or more electrodes32,34on another lead (not shown). 
Other lead configurations may be used, such as various electrode arrangements. For example, one or more pace/sense electrodes34may be placed between two defibrillation electrodes32, such as described above. In an example, multiple pace/sense electrodes34may be placed between two defibrillation electrodes32. In an example, two defibrillation electrodes32may be adjacent (e.g., such that the two defibrillation electrodes32are not separated by any pace/sense electrodes34between the two defibrillation electrodes32). Other arrangements may additionally or alternatively be used. Lead22may define different sizes and shapes as may be appropriate for different purposes (e.g., for different patients or for different therapies). As discussed above, in some examples, the distal portion of lead22may have one or more curved sections. As shown in the example ofFIG.1A, the distal portion of lead22is a serpentine shape that includes two "C" shaped curves, which together may resemble the Greek letter epsilon, "ε." Defibrillation electrodes32are each carried by one of the two respective C-shaped portions of the lead body distal portion. The two C-shaped curves extend or curve in the same direction away from a central axis of the lead body. In some examples, pace/sense electrodes34may be approximately aligned with the central axis of the straight, proximal portion of lead22. In such examples, mid-points of defibrillation electrodes32are laterally offset from pace/sense electrodes34. Other examples of extra-cardiovascular leads including one or more defibrillation electrodes and one or more pace/sense electrodes34carried by a curving, serpentine, undulating or zig-zagging distal portion of lead22also may be implemented using the techniques described herein. In some examples, the distal portion of lead22may be straight (e.g., straight or nearly straight). In some examples, the electrode arrangement on lead22may correspond to a geometry of lead22. For example, pace/sense electrodes34may be positioned on relative peaks of a curved lead shape, while defibrillation electrodes32may be positioned on relative valleys of the curved lead shape. In other examples, the distal portion of lead22may include branches, biased portions expanding away from a central shaft, or other shapes (e.g., with one or more of electrodes32,34disposed on the branches, shaft, or biased portions) that may provide appropriate monitoring information or therapy. Deploying lead22such that electrodes32,34are thus positioned at these depicted peaks and valleys of the serpentine shape may therein increase an efficacy of system10. For example, electrodes32,34may have access to better sensing or therapy vectors when lead22is deployed into the serpentine shape, in addition to the techniques for controlling the delivery of cardiac therapy described herein. Orienting the serpentine shaped lead such that pace/sense electrodes34are closer to heart18may provide better electrical sensing of the cardiac signal and/or lower pacing capture thresholds than if pace/sense electrodes34were oriented further from heart18. The serpentine or other shape of the distal portion of lead22may have increased fixation to patient8as a result of the shape providing resistance against adjacent tissue when an axial force is applied. Another advantage of a shaped distal portion is that pace/sense electrodes34may have access to greater surface area over a shorter length of heart18relative to a lead having a straighter distal portion. 
In some examples, the elongated lead body of lead22may include one or more elongated electrical conductors (not illustrated) that extend within the lead body from the connector at the proximal lead end to electrodes32,34located along the distal portion of lead22. The one or more elongated electrical conductors contained within the lead body of lead22may engage with respective ones of electrodes32,34. In one example, each of electrodes32,34is electrically coupled to a respective conductor within lead22. The respective conductors may electrically couple to circuitry, such as a therapy module or a sensing module, of IMD12via connections in connector assembly, including associated feedthroughs. The electrical conductors transmit therapy from a therapy module within IMD12to one or more of electrodes32,34, and transmit sensed electrical signals from one or more of electrodes32,34to the sensing module within IMD12. In some examples, the elongated lead body of lead22may have a diameter of between 3 and 9 French (Fr), although lead bodies having diameters less than 3 Fr and more than 9 Fr may also be utilized. In another example, the distal portion and/or other portions of the lead body may have a flat, ribbon or paddle shape. In such examples, the width across the flat portion of the flat, ribbon or paddle shape may be between 1 and 3.5 mm. Other lead body designs may be used without departing from the scope of this disclosure. The lead body of lead22may be formed from a non-conductive material, including silicone, polyurethane, fluoropolymers, mixtures thereof, and other appropriate materials, and shaped to form one or more lumens within which the one or more conductors extend. However, the techniques are not limited to such constructions. In some examples, defibrillation electrodes32may have a length greater than 5 centimeters (cm) and less than 10 cm, or a length between about 2 cm to about 16 cm. In other examples, defibrillation electrodes32may be a flat ribbon electrode, paddle electrode, braided or woven electrode, mesh electrode, segmented electrode, directional electrode, patch electrode or other type of electrode besides an elongated coil electrode. Pace/sense electrodes34may comprise ring electrodes, short coil electrodes, hemispherical electrodes, segmented electrodes, directional electrodes, or the like. In some examples, pace/sense electrodes34may have substantially the same outer diameter as the lead body. In one example, pace/sense electrodes34may have surface areas between 1.6-55 mm2. Pace/sense electrodes34may, in some examples, have relatively the same surface area or different surface areas. Depending on the configuration of lead22, pace/sense electrodes34may be spaced apart by the length of defibrillation electrodes32, plus some insulated length on each side of defibrillation electrode32, e.g., approximately 2-16 cm. In other examples, such as when pace/sense electrodes34are between segments of a segmented defibrillation electrodes32, the electrode spacing may be smaller, e.g., less than 2 cm or less than 1 cm. The example dimensions provided above are exemplary in nature and should not be considered limiting of the examples described herein. In other examples, lead22may include a single pace/sense electrode34or more than two pace/sense electrodes34. In some examples, IMD12may include one or more housing electrodes (not shown) positioned on housing20of IMD12. 
Such housing electrodes may be formed integrally with an outer surface of hermetically-sealed housing20of IMD12, or otherwise may be coupled to housing20. In some examples, a housing electrode may be defined by an uninsulated portion of an outward facing portion of housing20of IMD12. In some examples, housing20may define one or more additional housing electrodes, which may be defined by corresponding divisions between insulated and uninsulated portions of housing20. In still other examples, substantially all of housing20may be uninsulated, such that substantially all of housing20defines a housing electrode. In general, system10may sense electrical signals, such as via one or more sensing vectors that include combinations of pace/sense electrodes34and/or a housing electrode of IMD12. In some examples, IMD12may sense cardiac electrical signals using a sensing vector that includes one or both of the defibrillation electrodes32and/or one of defibrillation electrodes32and one of pace/sense electrodes34or a housing electrode of IMD12. The sensed electrical intrinsic signals may include electrical signals generated by cardiac muscle and indicative of depolarizations and repolarizations of heart18at various times during the cardiac cycle. IMD12may be configured to analyze the electrical signals sensed by the one or more sensing vectors to detect tachyarrhythmia, such as ventricular tachycardia (VT) or ventricular fibrillation (VF). In response to detecting the tachyarrhythmia, IMD12may begin to charge a storage element, such as a bank of one or more capacitors, and, when charged, delivers substernal electrical stimulation therapy, e.g., ATP, cardioversion or defibrillation shocks, and/or post-shock pacing in response to detecting tachycardia (e.g., VT or VF). In some examples, IMD12may generate and deliver bradycardia pacing in addition to ATP, cardioversion or defibrillation shocks, and/or post-shock pacing. Processing circuitry of IMD12may sense patient parameters indicative of a current heart position status of heart18based on signals sensed via one or more sensors of system10. It should be noted that although such processing circuitry may be contained within IMD12and/or within another device of system10(e.g., external device38), the processing circuitry is described herein as being a component of IMD12for the sake of clarity. In some examples, processing circuitry of system10may determine a current posture of patient8and/or a respiratory state of patient8. For example, system10may include one or more accelerometers or gyrometers (not shown). The one or more accelerometers may comprise one or more three-axis accelerometers. In some examples, such accelerometers or gyrometers may be a component of IMD12of system10. Signals generated by such sensors may be indicative of, for example, a current posture of patient8, such as an upright posture, a seated posture, a supine or prone posture, or other postures. In some examples, heart18may be positioned up to about 6 cm more caudal, relative to a baseline position, when patient8is in an upright posture compared to when patient8is in a supine posture. In some examples, processing circuitry of IMD12may determine a respiratory state of patient8by determining one or more of an inhalation phase, a respiratory depth, or a respiratory rate of patient8. In some such examples, processing circuitry of IMD12may compare the respiratory depth and/or the respiratory rate to one or more corresponding threshold(s). 
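As one possible illustration of the threshold comparison described above, the sketch below classifies respiratory depth from the peak-to-trough excursion of a respiration signal (for example, a transthoracic impedance signal) over one breath; the thresholds, tier names, and assumption that the signal is impedance-based are illustrative only.

```python
def classify_respiratory_depth(respiration_samples,
                               moderate_threshold=4.0,
                               deep_threshold=8.0) -> str:
    """Classify respiratory depth from the peak-to-trough excursion of a
    respiration-depth signal over one breath. Threshold values are
    placeholders; real values would be patient-specific and programmed by a
    clinician."""
    excursion = max(respiration_samples) - min(respiration_samples)
    if excursion >= deep_threshold:
        return "very_deep"
    if excursion >= moderate_threshold:
        return "moderately_deep"
    return "normal"

# Example: a 9.5-unit excursion exceeds the "deep" threshold in this sketch.
print(classify_respiratory_depth([50.0, 53.0, 59.5, 54.0, 50.5]))  # very_deep
```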
If the respiratory depth and/or respiratory rate satisfy the corresponding threshold(s), the processing circuitry may, for example, identify the respiratory depth as being “deep” and/or respiratory rate as being “elevated.” In some such examples, processing circuitry of IMD12may further identify a magnitude of such aspects of patient8's respiratory state, such as identify the respiratory depth as being “moderately deep” or “very deep.” Such magnitudes of the respiratory state may correspond to different position states of heart18. For example, heart18may be positioned more caudal (e.g., up to about 2-4 cm more caudal) during very deep respiration than during moderately deep respiration. In some examples, processing circuitry of IMD12may determine a respiration state of patient8based on an impedance between two or more electrodes (e.g., two or more of pace/sense electrodes34and/or a housing electrode on housing20). In other examples, system10may include one or more other sensors configured to determine a respiration state of patient8, such as a microphone configured to detect sounds associated with respiration of patient8, a magnetometer configured to measure changes in dimensions of anatomical structures of the thorax of patient8during respiration, or a pressure sensor configured to measure changes in pressure exerted on lead22associated with changes in respiration state. In some examples, an accelerometer may produce a signal that varies based on respiration, e.g., based on vibrations and/or movement associated with respiration. Regardless of the configuration of such sensors, processing circuitry of IMD12may determine a posture of patient8and/or a respiration state of patient8based on the signals obtained therefrom, and associate the posture and/or respiration state of patient8with a current heart position state of heart18of a plurality of heart position states stored in a memory of system10and in association with a respective modification of at least one of a plurality of cardiac sensing, cardiac therapy, or vector parameters. In some examples, processing circuitry of IMD12then may modify at least one cardiac therapy and/or sensing parameter value, according to the modification associated with the current heart position state of heart18, and control the delivery of the cardiac therapy. For example, processing circuitry of IMD12may control delivery of (and IMD may thus deliver) an anti-tachyarrhythmia shock via defibrillation electrodes32, and or cardiac pacing via pace/sense electrodes34, according to the modified at least one cardiac therapy parameter value. In some other examples, processing circuitry of IMD12may modify an electrode vector, such as a cardiac therapy delivery vector that includes at least two of defibrillation electrodes32or a sensing vector that includes at least two of pace/sense electrodes34, based on the current heart position state of heart18. In such examples, processing circuitry of IMD12may control IMD12to at least sense a cardiac electrogram via the modified sensing vector or deliver cardiac therapy via the modified electrode vector. In some examples, processing circuitry of IMD12may modify a cardiac therapy and/or sensing parameter value, according to a modification associated with patient8's current heart position state in a memory of system10, by modifying a tachyarrhythmia detection parameter. 
For example, the tachyarrhythmia detection parameter may be a threshold heart rate (e.g., a certain number of beats per minute over a baseline heart rate) of patient8. If satisfied, the threshold heart rate may indicate that patient8is experiencing tachyarrhythmia. However, patient8may have a higher baseline heart rate during inhalation than during exhalation. Thus, processing circuitry of IMD12may modify the tachyarrhythmia detection threshold by increasing the tachyarrhythmia detection threshold heart rate when patient8's heart position state corresponds to an inhalation phase of respiration. In some examples, using an increased tachyarrhythmia detection threshold heart rate during inhalation may improve a tachyarrhythmia-detection accuracy of system10, such as by reducing a possibility of false-positive tachyarrhythmia detections. In some examples, reducing a possibility of false-positive tachyarrhythmia detections may reduce a possibility of delivering unnecessary anti-tachyarrhythmia shocks to patient8, which may avoid associated and unnecessary discomfort to patient8and/or avoid unnecessary depletion of a power source of system10. In some examples, processing circuitry of IMD12may modify a cardiac therapy and/or sensing parameter value by which IMD12may deliver cardiac therapy, according to a modification associated with patient8's current heart position state in a memory of system10, by modifying a cardiac electrogram sensing amplitude threshold. For example, a clinician may select a baseline cardiac electrogram sensing amplitude threshold, when programming IMD12, that may enable IMD12to sense a target portion of a cardiac electrogram via pace/sense electrodes34(e.g., an R-wave) for tachyarrhythmia detection when one or more of pace/sense electrodes34are positioned approximately over a particular portion of heart18such as a ventricle. However, when heart18is caudal to a baseline position, such as when patient8is in an upright posture and/or inhaling, one or more other portions of the sensed electrogram (e.g., a T-wave) may be more prominent. In some such examples, oversensing of a T-wave or other non-target portions of the electrogram by IMD12may result in an increased possibility of a false-positive detection of tachyarrhythmia. Thus, a clinician may program IMD12to increase a sensing threshold amplitude associated with such other portions of the cardiac electrogram to reduce a possibility of false-positive tachyarrhythmia detection. For example, processing circuitry of IMD12may modify the cardiac electrogram sensing amplitude threshold by increasing a cardiac electrogram sensing amplitude threshold of IMD12when a current position of heart18is caudal to a baseline position, which may enable system10to better sense a target portion of the cardiac electrogram when heart18is in such positions. In some examples, processing circuitry of IMD12may modify a cardiac therapy parameter value, according to a modification associated with patient8's current heart position state in a memory of system10, by modifying an anti-tachyarrhythmia shock magnitude or an amplitude of one or more pacing pulses by which IMD12may deliver cardiac therapy. For example, a clinician may select a baseline anti-tachyarrhythmia shock magnitude or a baseline pacing pulse amplitude (e.g., when programming IMD12) that may effectively treat a tachyarrhythmia or provide pacing capture of heart18when defibrillation electrodes32or pace/sense electrodes34are positioned approximately over heart18. 
However, when heart18is caudal to a baseline position, such as when the patient8is upright and/or inhaling, one or more of defibrillation electrodes32or pace/sense electrodes34may no longer be positioned approximately over heart18. In such cases, an efficacy of an anti-tachyarrhythmia shock delivered by IMD12may be reduced or pacing capture may not be maintained during delivery of cardiac pacing pulses by IMD12. Thus, in such examples, processing circuitry of IMD12may modify a cardiac therapy parameter by increasing an anti-tachyarrhythmia shock magnitude or increasing an amplitude of one or more pacing pulses when patient8's posture and/or respiratory state corresponds to a lower heart position state. In some such examples, increasing an anti-tachyarrhythmia shock magnitude or increasing an amplitude of one or more pacing pulses may improve an efficacy of anti-tachyarrhythmia shock therapy or pacing capture, which may result in an improved clinical outcome of cardiac therapy for patient8, compared to example cardiac therapy techniques that do not take into account a position of a patient's heart. In some examples, processing circuitry of IMD12may modify a cardiac therapy parameter value, according to a modification associated with a current heart position state of patient8in a memory of system10, by modifying at least one of a sensing vector that includes at least two of pace/sense electrodes34or a shock vector that includes at the least two of defibrillation electrodes32. As discussed above, when heart18is caudal to a baseline position, one or more of pace/sense electrodes34and/or defibrillation electrodes32may no longer be positioned approximately over heart18. In such examples, processing circuitry of IMD12may be configured to modify at least one electrode vector by removing one or more of pace/sense electrodes34from the sensing vector or by removing one or more of defibrillation electrodes32from the shock vector. The electrodes34or32removed from the sensing vector or shock vector may be the ones of electrodes32,34no longer positioned over heart18when heart18is caudal to a baseline position. For example, processing circuitry of IMD12may remove from an electrode vector one or more electrodes positioned on a distal portion of the lead, such as one or more most-superior or most-distal ones of electrodes32or34. In some examples, an anti-tachyarrhythmia shock delivered by IMD12using such a modified shock vector may improve an efficacy of anti-tachyarrhythmia shock therapy, such as by increasing shock impedance. In some examples, the increased shock impedance of the modified shock vector may more efficiently direct and sustain delivery of energy to heart18via the remaining ones of defibrillation electrodes32of the modified shock vector, even though IMD12may deliver less total energy via the modified shock vector than via a corresponding unmodified shock vector. In any such examples in which processing circuitry of IMD12may modify a cardiac therapy parameter value, according to a modification associated with patient8's current heart position state in a memory of system10, the magnitude of the modification may be based on a magnitude of heart movement associated with patient8's current heart position state. 
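A minimal sketch of the electrode vector modification described above follows, assuming electrodes are listed proximal-to-distal and that the most-distal electrode is the one removed when the heart is caudal to baseline; the electrode names are illustrative only.

```python
def modify_electrode_vector(vector, heart_caudal: bool, n_distal_to_drop: int = 1):
    """Return a shock or sensing vector with the most-distal electrode(s)
    removed when the heart is caudal to its baseline position. Electrodes are
    assumed to be listed proximal-to-distal; names are hypothetical."""
    if not heart_caudal or len(vector) <= n_distal_to_drop:
        return list(vector)
    # Drop electrodes at the distal end of the lead, which may no longer
    # overlie the heart when the heart moves caudally.
    return list(vector[:-n_distal_to_drop])

baseline_shock_vector = ["housing", "defib_32A", "defib_32B"]
print(modify_electrode_vector(baseline_shock_vector, heart_caudal=True))
# ['housing', 'defib_32A']
```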
For example, a magnitude of a modification of a cardiac therapy parameter value associated with a first heart position state in which heart18is relatively more caudal may be greater than a magnitude of a modification of a cardiac therapy parameter associated with a second heart position state in which heart18is less caudal than in the first heart position state, but still more caudal than a baseline position. In some examples, a difference in positions of heart18between such heart position states may be several centimeters, such as up to about 4 cm. In some techniques in which processing circuitry of IMD12may control IMD12to at least sense a cardiac electrogram of patient8via a modified vector or deliver cardiac therapy to heart18via a modified electrode vector, processing circuitry of IMD12may control system10to deliver cardiac therapy by controlling IMD12to deliver cardiac pacing to heart18via the modified vector. As discussed above, when the heart18is positioned caudal to a baseline position, such as when patient8is upright and/or inhaling, one or more of pace/sense electrodes34may no longer be positioned approximately over a portion of heart18to which it may be desirable to deliver pacing pulses, such as a ventricle of heart18. Instead, one or more of the electrodes may be positioned over an atrium of heart18when heart18is positioned caudal to a baseline position. Pacing pulses delivered via one or more of pace/sense electrodes34positioned over a non-target portion of heart18may not contribute to pacing efficacy and may reduce energy efficiency of system10. Thus, it may be desirable to remove such electrodes34from a pacing vector when heart18is positioned caudal to a baseline position, which may improve pacing efficiency. In any of the example techniques described herein, the modifications to the cardiac therapy parameters and/or electrode vectors of system10may be selected by a clinician and programmed into a memory of system10. In some examples, the clinician may select the modifications (e.g., a parameter to modify and/or a magnitude of a modification) based on one or more physiological aspects of patient8that may be associated with an amount and/or direction that heart18may move with changes in the posture and/or respiration state of patient8. In some examples, the clinician may directly observe (e.g., via fluoroscopy) an amount and/or direction of movement of heart18with changes in the posture and/or respiration state of patient8. In such examples, processing circuitry of IMD12may automatically or semi-automatically modify values of one or more cardiac therapy parameters and/or sensing parameters as heart18moves with changes in the posture and/or respiration state of patient8, which may improve the efficacy and/or efficiency of the cardiac therapy delivered by system10, such as by more accurately directing energy to target portions of the heart and/or by reducing the delivery of energy to non-target locations. In some examples, system10may include external device38. External device38may be a computing device that is configured for use in a home, ambulatory, clinic, or hospital setting to communicate with IMD12via wireless telemetry. Examples of communication techniques used by IMD12and external device38include radiofrequency (RF) telemetry, which may include an RF link established via Bluetooth, wireless local networks, or medical implant communication service (MICS). 
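The scaling of a modification magnitude with heart displacement described above might be represented as in the following sketch, in which the gain and ceiling values are assumptions chosen for illustration rather than recommended settings.

```python
def pacing_amplitude_for_displacement(baseline_amplitude_v: float,
                                      caudal_displacement_cm: float,
                                      gain_v_per_cm: float = 0.25,
                                      max_amplitude_v: float = 8.0) -> float:
    """Scale a pacing pulse amplitude with the estimated caudal displacement
    of the heart for the current heart position state. The gain and ceiling
    are illustrative assumptions, not device specifications."""
    amplitude = baseline_amplitude_v + gain_v_per_cm * max(0.0, caudal_displacement_cm)
    return min(amplitude, max_amplitude_v)

# Example: a 4 cm caudal displacement raises a 2.0 V pacing amplitude to 3.0 V.
print(pacing_amplitude_for_displacement(2.0, 4.0))
```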
The communication may include one-way communication in which one device is configured to transmit communication messages and the other device is configured to receive those messages. Alternatively, or additionally, the communication may include two-way communication in which each device is configured to transmit and receive communication messages. External device38may include communication circuitry configured to communicate with one or more devices of system10(e.g., IMD12) in accordance with the techniques described above. For example, external device38may be used to program commands or operating parameters of IMD12for controlling functioning of IMD12when external device38is configured as a programmer for IMD12. External device38may be used to communicate with IMD12to retrieve data such as operational data, physiological data accumulated in IMD memory, or the like. As such, external device38may function as a programmer for IMD12, an external monitor for IMD12, or a consumer device such as a smartphone. External device38may be coupled to a remote patient monitoring system, such as CARELINK®, available from Medtronic plc, of Dublin, Ireland. In other examples, a clinician may use external device38to program or update therapy parameters that define cardiac therapy, and/or program or update modifications to the cardiac therapy parameters, sensing parameters, and/or electrode vectors associated with the plurality of heart position states, or perform other activities with respect to IMD12. The clinician may be a physician, technician, surgeon, electrophysiologist, or other healthcare professional. In some examples, the user may be patient8. Although described herein in the context of example IMD12, the techniques for controlling the delivery of cardiac therapy described herein may be implemented with other types of IMDs configured to deliver cardiac therapy. In some examples, the techniques described herein may be implemented with an external defibrillation device, or other devices or systems configured to deliver cardiac therapy. In some examples, system10also may include an implantable monitoring device, such as the Reveal LINQ™, commercially available from Medtronic plc. FIG.2is a functional block diagram illustrating an example configuration of IMD12ofFIGS.1A-1C, which may be used to perform any of the techniques described with respect toFIGS.1A-1C. As shown inFIG.2, IMD12includes processing circuitry102, sensing circuitry104, therapy delivery circuitry106, sensors108, communication circuitry110, and memory112. In addition, IMD12includes one or more electrodes116, which may be any one or more of the previously-described electrodes of IMD12, one or more of which may be carried by lead22or disposed on housing20of IMD12. In some examples, memory112includes computer-readable instructions that, when executed by processing circuitry102, cause IMD12and processing circuitry102to perform various functions attributed to IMD12and processing circuitry102herein. Memory112may include any volatile, non-volatile, magnetic, optical, or electrical media, such as a random access memory (RAM), read-only memory (ROM), non-volatile RAM (NVRAM), electrically-erasable programmable ROM (EEPROM), flash memory, or any other digital media. Processing circuitry102may include fixed function circuitry and/or programmable processing circuitry. 
Processing circuitry102may include any one or more of a microprocessor, a controller, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or equivalent discrete or analog logic circuitry. In some examples, processing circuitry102may include multiple components, such as any combination of one or more microprocessors, one or more controllers, one or more DSPs, one or more ASICs, or one or more FPGAs, as well as other discrete or integrated logic circuitry. The functions attributed to processing circuitry102herein may be embodied as software, firmware, hardware or any combination thereof. In some examples, processing circuitry102may receive (e.g., from external device38), via communication circuitry110, a respective value for each of a plurality of cardiac sensing parameters, cardiac therapy parameters (e.g., anti-tachyarrhythmia shock therapy parameters and/or cardiac pacing parameters), and/or electrode vectors. Processing circuitry102may store such parameters and/or electrode vectors in therapy and sensing programs118of memory112. Processing circuitry102also may receive, in association with each of a plurality of heart position states120, a respective modification122of at least one of the cardiac sensing parameters, cardiac therapy parameters, and/or electrode vectors. Processing circuitry102may store the heart position states in heart position states120of memory112, and may store the respective modifications in modifications122of memory112. The modifications may take the form of, as examples, a look-up table or other data structure, or a function. Processing circuitry102may monitor a posture and/or a respiration state of patient8via one or more of electrodes116and sensors108. In some examples, processing circuitry102may determine a respiration state of patient8based on an impedance between two or more of electrodes116, such as between housing20of IMD12, which may function as a housing electrode, and an electrode positioned on lead22. Therapy delivery circuitry106and/or sensing circuitry104may include circuitry to generate a signal, e.g., current or voltage source circuitry, having a known current or voltage amplitude, and switching circuitry to couple the signal to selected ones of electrodes116. Sensing circuitry104may include circuitry to sample the signal and measure the other of voltage or current. Processing circuitry102may determine an impedance value associated with the impedance signal based on such measurements. In other examples, processing circuitry102may determine a respiration state based on variations in an amplitude of an electrogram (EGM) signal sensed by electrodes116that may be associated with movement of heart18that may occur with respiration of patient8. In some examples, sensors108may include one or more gyrometers and/or accelerometers. In some examples, such accelerometers may comprise one or more three-axis accelerometers. Signals generated by such gyrometers and/or accelerometers may be indicative of a current posture of patient8, such as an upright posture, a seated posture, a supine or prone posture, or other postures. In some examples, an accelerometer may produce a signal that varies based on respiration, e.g., based on vibrations and/or movement associated with respiration. For example, a cyclic component of a signal sensed by an accelerometer may be associated with movement of IMD12as the implant site of IMD12within patient8moves (e.g., tilts) with respiration. 
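As an illustration of posture determination from a conditioned accelerometer signal, the sketch below compares a low-pass-filtered three-axis sample against a calibrated upright axis; the calibration axis, threshold angle, and posture labels are assumptions for illustration only.

```python
import math

def classify_posture(ax: float, ay: float, az: float,
                     upright_axis=(0.0, 0.0, 1.0),
                     upright_threshold_deg: float = 40.0) -> str:
    """Classify posture from a low-pass-filtered three-axis accelerometer
    sample (in g), in which the dominant component is gravity. The gravity
    vector is compared against a calibration axis recorded while the patient
    was known to be upright; the axis and threshold are hypothetical."""
    norm = math.sqrt(ax * ax + ay * ay + az * az)
    if norm == 0.0:
        return "unknown"
    # Angle between the measured gravity vector and the calibrated upright axis.
    dot = (ax * upright_axis[0] + ay * upright_axis[1] + az * upright_axis[2]) / norm
    angle_deg = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return "upright" if angle_deg < upright_threshold_deg else "lying"

# Example: a sample dominated by the calibrated axis classifies as upright.
print(classify_posture(0.05, 0.10, 0.98))  # upright
```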
In other examples, sensors108may include one or more other sensors configured to determine a respiration state of patient8, such as a microphone configured to detect sounds associated with respiration of patient8, a magnetometer configured to sense motion within the earth's magnetic field associated with respiration of patient8, or a pressure sensor configured to measure changes in pressure exerted on lead22associated with changes in respiration state. Sensing circuitry104may include filters, amplifiers, and/or analog-to-digital conversion circuitry, as examples, to condition any of these sensed signals for analysis by processing circuitry102and/or to detect features of the signals. For example, sensing circuitry104may condition an EGM signal to extract variations in an amplitude of the EGM signal that may be associated with movement of heart18occurring with respiration of patient8. In examples in which sensors108include a microphone, sensing circuitry104may condition an audio signal sensed by a microphone to separate respiratory sounds from interfering background noise. In examples in which sensors108include a pressure sensor, sensing circuitry104may condition a signal sensed by a pressure sensor to filter out variations caused by atmospheric pressure. In any such examples, processing circuitry102may determine a posture of patient8and/or a respiration state of patient8based on the signals obtained from electrodes116and sensors108, and may associate patient8's posture and/or respiration state with a current heart position state of heart18of the plurality of heart position states120stored in heart position states120of memory112. In some examples, heart position states120may be defined by one or more threshold posture and/or respiration values. For example, one or more of heart position states120may be defined, at least in part, by a threshold respiration depth value. In such examples, processing circuitry102may determine a respiration depth value associated with patient8's respiration state, such as by determining an impedance between two or more of electrodes116and/or analyzing signals sensed by one or more of sensors108, and determine a current heart position state of heart18based, at least in part, on whether the determined respiration depth value satisfies the threshold respiration depth value. In examples in which a heart position state is associated with a posture, the heart position state may be defined, at least in part, by a threshold value associated with the posture. In such examples, the threshold value may be a value (e.g., a voltage) derived from a signal sensed by one or more accelerometers of sensors108. For example, sensing circuitry104may condition a signal sensed by the one or more accelerometers for analysis by processing circuitry102, such as by applying a low-pass filter to the signal. Processing circuitry102then may analyze the conditioned signal to determine a value associated with a current posture of patient8, and determine a current position state of heart18based, at least in part, on whether the determined value satisfies the threshold value associated with the posture. After determining patient8's current heart position state as being one of heart position states120, processing circuitry102may modify at least one of a cardiac sensing parameter value, a cardiac therapy parameter value, and/or an electrode vector according to one or more of modifications122associated with the current heart position state. 
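One way to represent the stored heart position states120and their associated modifications122is sketched below as a look-up table; the keys, parameter names, and values are hypothetical and intended only to illustrate the data structure, not an actual device memory layout.

```python
# Hypothetical in-memory layout mirroring the described heart position
# states (120) and the associated modifications (122) as a look-up table.
heart_position_states = {
    "baseline": {"min_resp_depth": 0.0, "upright": False},
    "caudal": {"min_resp_depth": 4.0, "upright": True},
}

modifications = {
    "baseline": {},
    "caudal": {
        "tachy_threshold_offset_bpm": 10.0,
        "sense_threshold_scale": 1.5,
        "drop_distal_electrodes": 1,
    },
}

def modification_for_state(state_name: str) -> dict:
    """Fetch the parameter modifications associated with a heart position state."""
    return modifications.get(state_name, {})

# Example: retrieve the modifications to apply when the heart is caudal.
print(modification_for_state("caudal"))
```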
Example parameters include a cardiac pacing magnitude (e.g., a pulse amplitude or width), an anti-tachyarrhythmia shock magnitude (e.g., a pulse amplitude, pulse width, and/or shock energy), or a cardiac electrogram sensing parameter, such as a threshold amplitude for detecting an R-wave, P-wave, or other feature of the cardiac electrogram. In some examples, the parameter is a tachyarrhythmia detection parameter, which may be a cardiac electrogram sensing parameter used by processing circuitry102to detect tachyarrhythmia. In some examples, the parameter is an electrode vector of a plurality of electrodes116, which processing circuitry102may use to sense a cardiac electrogram and/or deliver cardiac therapy via the modified electrode vector. Processing circuitry102then may control IMD12to deliver cardiac therapy via therapy delivery circuitry106according to the modified sensing parameter value, cardiac therapy parameter value, and/or electrode vector. In some other examples, processing circuitry102may modify an electrode vector that includes at least two defibrillation electrodes of electrodes116or at least two pace/sense electrodes of electrodes116, based on the current heart position state, and control IMD12to at least sense a cardiac electrogram via electrodes116and sensing circuitry104, or deliver cardiac therapy via electrodes116and therapy delivery circuitry106. In some examples, processing circuitry102may modify a cardiac therapy parameter value, according to one or more of modifications122associated with patient8's current heart position state in memory112, in accordance with the techniques described above with respect toFIGS.1A-1C. For example, processing circuitry102may modify a cardiac therapy parameter value, according to a modification122associated with patient8's current heart position state120, by modifying a tachyarrhythmia detection parameter and/or a cardiac electrogram sensing amplitude threshold stored in therapy and sensing programs118. In other examples, processing circuitry102may modify a cardiac therapy parameter value, according to a modification122associated with patient8's current heart position state120, by modifying an anti-tachyarrhythmia shock magnitude or an amplitude of one or more pacing pulses stored in therapy and sensing programs118. In still other examples, processing circuitry102may modify a cardiac therapy parameter value, according to a modification associated with patient8's current heart position state in memory112, by modifying at least one of a sensing vector that includes at least two pace/sense electrodes of electrodes116, or a shock vector that includes at least two defibrillation electrodes of electrodes116. In some other examples, processing circuitry102may control IMD12to at least sense a cardiac electrogram of patient8via a modified electrode vector or deliver cardiac therapy to heart18via a modified electrode vector, in accordance with the techniques described above with respect toFIGS.1A-1C. For example, processing circuitry102may modify a vector including at least two of electrodes116, according to a modification122associated with patient8's current heart position state120, by modifying an electrode vector stored in therapy and sensing programs118. 
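The sketch below illustrates, using the same hypothetical key names as the look-up table above, how the modifications associated with the current heart position state might be applied to a copy of an active therapy and sensing program before therapy is controlled; it is not a description of any actual device interface.

```python
def apply_state_modifications(program: dict, mods: dict) -> dict:
    """Apply the modifications associated with the current heart position
    state to a copy of the active therapy/sensing program. Keys and values
    are illustrative assumptions."""
    updated = dict(program)
    updated["tachy_threshold_bpm"] = (
        program["tachy_threshold_bpm"] + mods.get("tachy_threshold_offset_bpm", 0.0))
    updated["sense_threshold_mv"] = (
        program["sense_threshold_mv"] * mods.get("sense_threshold_scale", 1.0))
    n_drop = mods.get("drop_distal_electrodes", 0)
    if n_drop:
        # Remove the most-distal electrode(s) from the shock vector.
        updated["shock_vector"] = program["shock_vector"][:-n_drop]
    return updated

program = {"tachy_threshold_bpm": 170.0, "sense_threshold_mv": 0.3,
           "shock_vector": ["housing", "defib_32A", "defib_32B"]}
mods = {"tachy_threshold_offset_bpm": 10.0, "sense_threshold_scale": 1.5,
        "drop_distal_electrodes": 1}
print(apply_state_modifications(program, mods))
```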
In any such examples, processing circuitry102also may control IMD12to sense a cardiac electrogram or deliver cardiac therapy (e.g., anti-tachyarrhythmia shock or cardiac pacing), via electrodes116and sensing circuitry104or therapy delivery circuitry106, based on one or more of the modified sensing parameter value, cardiac therapy parameter value, or modified electrode vector. Processing circuitry102thus may improve the efficacy and/or efficiency of the cardiac therapy delivered by system10, such as by more accurately directing energy to target portions of the heart and/or by reducing the delivery of energy to non-target locations. A clinician may enter, or processing circuitry102may itself determine and recommend, one or more of therapy and sensing programs118, heart position states120, or modifications122into memory112, such as via external device38or a remote computer, based on one or more factors such as a gender, age, size, or observed heart motion of patient8. The factors may be provided to processing circuitry102by the clinician, e.g., via external device38. In some examples, the clinician and/or processing circuitry102may update one or more of therapy and sensing programs118, heart position states120, or modifications122periodically or on an as-needed basis. For example, the clinician or processing circuitry102may determine that a magnitude of one or more of modifications122has not effectively countered a reduction in cardiac therapy efficacy or efficiency associated with a particular one of heart position states120, such as based on data stored in diagnostics/feedback124of memory112. In some such examples, the clinician may enter one or more updated values of modifications122into external device38or a remote computer, or processing circuitry102may recommend updated values via external device38. Processing circuitry102then may receive the updated modifications122(or, in other examples, updated therapy and sensing programs118and/or heart position states120) from the external device38or the remote computer, and may store such updated values in memory112. Diagnostics/feedback124of memory112may include data pertaining to one or more of determined heart position states of patient8, determined postures and/or respiration states of patient8, or the efficacy or efficiency of cardiac therapy delivered by IMD12. For example, diagnostics/feedback124may store efficacy determinations made by processing circuitry102based on whether cardiac therapy delivered by IMD12according to one or more modified cardiac therapy parameter values and in association with one of heart position states120was successful in terminating a tachyarrhythmia or maintaining pacing capture. Diagnostics/feedback124also may store efficiency determinations made by processing circuitry102based on data pertaining to, e.g., an amount of energy delivered to treat a tachyarrhythmia or maintain pacing capture when processing circuitry102controls IMD12to deliver cardiac pacing according to one or more cardiac therapy parameter values and in association with one of the heart position states120. In some examples, a clinician may review such efficacy and/or efficiency determinations stored in diagnostics/feedback124, and use such data in determining whether to update one or more of therapy and sensing programs118, heart position states120, or modifications122. In some examples, diagnostics/feedback124may store system diagnostics pertaining to the functioning of IMD12or other components of a medical device system including IMD12. 
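As a rough illustration of how stored efficacy determinations might be summarized per heart position state to guide updates to modifications122, the following sketch assumes a simplified log format; the field names are illustrative only.

```python
def summarize_efficacy(feedback_log):
    """Summarize stored efficacy determinations per heart position state so a
    clinician (or the device) can judge whether a modification magnitude
    should be revised. The log format is a simplifying assumption."""
    summary = {}
    for entry in feedback_log:
        state = entry["state"]
        stats = summary.setdefault(state, {"attempts": 0, "successes": 0})
        stats["attempts"] += 1
        stats["successes"] += int(entry["therapy_successful"])
    # Return the success rate per heart position state.
    return {s: v["successes"] / v["attempts"] for s, v in summary.items()}

log = [{"state": "caudal", "therapy_successful": True},
       {"state": "caudal", "therapy_successful": False}]
print(summarize_efficacy(log))  # {'caudal': 0.5}
```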
Therapy and sensing programs118may include values of one or more therapy and sensing parameters. In some examples, ones of therapy and sensing programs118may correspond to a type of cardiac therapy, such as anti-tachyarrhythmia shock therapy or cardiac pacing therapy. For example, one of therapy and sensing programs118may include values of one or more sensing parameters and one or more therapy parameters that may be appropriate for sensing a heart rate of patient8during cardiac pacing therapy and delivering cardiac pacing therapy to heart18, such as a sensing amplitude threshold, a pacing pulse amplitude or width, sensing or pacing electrode vectors, or pulse delivery timing. Another one of therapy and sensing programs118may include values of one or more sensing parameters and one or more therapy parameters that may be appropriate for sensing a heart rate of patient8during tachyarrhythmia detection, and delivering anti-tachyarrhythmia therapy to heart18, such as a tachyarrhythmia sensing amplitude threshold, an anti-tachyarrhythmia shock magnitude, tachyarrhythmia sensing electrode vectors or defibrillation electrode vectors, or anti-tachyarrhythmia shock delivery timing. Sensing circuitry104and therapy delivery circuitry106may be selectively coupled to electrodes116, e.g., via switching circuitry (not shown) as controlled by processing circuitry102. The switching circuitry may include one or more transistors or other circuitry for selectively coupling electrodes116to other circuitry of IMD12. Sensing circuitry104may monitor signals from electrodes116in order to monitor electrical activity of the heart (e.g., to detect depolarizations for heart rate determination and/or to produce a cardiac electrogram for morphological or other analyses). Sensing circuitry104(or therapy delivery circuitry106) may also generate a signal via electrodes116, from which sensing circuitry104may produce a thoracic impedance signal, from which sensing circuitry104and/or processing circuitry102may sense respiration, e.g., magnitude and/or rate. Sensing circuitry104may also monitor signals from one or more other sensor(s)108, such as the one or more accelerometers, gyrometers, magnetometers, barometers, or other sensors configured to determine a posture and/or respiration state of patient8. Sensing circuitry104may monitor signals from any electrodes or other sensors that may be positioned on IMD12or on another device in communication with IMD12. In some examples, sensing circuitry104may include one or more filters and amplifiers for filtering and amplifying signals received from one or more of electrodes116and/or the one or more sensor(s)108. Sensing circuitry104may also include rectification circuitry, sample-and-hold circuitry, one or more comparators, and/or analog-to-digital conversion circuitry. The functionality provided by such circuitry may be applied to the signal in the analog or digital domain. Therapy delivery circuitry106may include circuitry for generating a signal, such as one or more capacitors, charge pumps, and/or current sources, as well as circuitry for selectively coupling the signal to electrodes116, e.g., transistors or other switching circuitry. Communication circuitry110may include any suitable hardware, firmware, software or any combination thereof for communicating with another device, such as external device38, or another IMD or sensor, such as a pressure sensing device.
For example, communication circuitry110may include voltage regulators, current generators, oscillators, or circuitry for generating a signal, resistors, capacitors, inductors, and other filtering circuitry for processing received signal, as well as circuitry for modulating and/or demodulating a signal according to a communication protocol. Communication circuitry110may also include transistors or other switching circuitry for selectively coupling transmitted signal to or receiving signals from an antenna of IMD12(not shown) or electrodes116(e.g., in the case of tissue conductance communication (TCC)). Under the control of processing circuitry102, communication circuitry110may receive downlink telemetry from, as well as send uplink telemetry to, external device38or another device. In some examples, communication circuitry110may communicate with external device38. In addition, communication circuitry110may communicate with a networked computing device via an external device (e.g., external device38) and a computer network, such as the Medtronic CareLink® Network developed by Medtronic, plc, of Dublin, Ireland, as further described below with respect toFIG.3. A clinician or another user may retrieve data from IMD12using external device38, or by using another local or networked computing device (e.g., a remote computer located with the clinician) configured to communicate with processing circuitry102via communication circuitry110. In some examples, the clinician may also program parameters of IMD12using external device38or another local or networked computing device. For example, the clinician may update heart position states120, modifications122, and/or values associated with therapy and sensing programs118. Although processing circuitry102of IMD12is described above as being configured to receive signals from sensors108, determine a current heart position status of patient8, modify at least one cardiac therapy parameter value, cardiac sensing parameter value, and/or electrode vector according to modifications122associated with the current heart position state and control IMD12to deliver cardiac therapy and/or sense a cardiac electrogram according to the modified at least one cardiac therapy parameter value, modified cardiac sensing parameter value or modified electrode vector, and carry out other steps of the techniques described herein, any steps described herein as being carried out by processing circuitry102of IMD12may be carried out by processing circuitry of one or more other devices. For example, processing circuitry of external device38, a remote computer, or any other suitable implantable or external device or server, may be configured to carry out one or more of the steps of the techniques described herein, such as via communication circuitry110of IMD12. FIG.3is a functional block diagram illustrating an example system that includes an access point160, a network162, external computing devices, such as a server164, and one or more other computing devices170A-170N, which may be coupled to IMD12ofFIG.2and external device38via network162. In this example, IMD12may use communication circuitry110to communicate with external device38via a first wireless connection, and to communicate with an access point160via a second wireless connection. In the example ofFIG.3, access point160, external device38, server164, and computing devices170A-170N are interconnected and may communicate with each other through network162. 
Access point160may comprise a device that connects to network162via any of a variety of connections, such as telephone dial-up, digital subscriber line (DSL), cable modem, or other suitable connections. In other examples, access point160may be coupled to network162through different forms of connections, including wired or wireless connections. In some examples, access point160may be a user device, such as a tablet or smartphone, that may be co-located with the patient. In some examples, IMD12may be configured to transmit data, such as cardiac therapy delivery efficacy and/or efficiency data stored in diagnostics/feedback124of memory112, to external device38. In addition, access point160may interrogate IMD12, such as periodically or in response to a command from patient8, a clinician, or network162, in order to retrieve therapy and sensing programs118, heart position states120, modifications122, diagnostics/feedback124, or other information stored in memory112of IMD12. Access point160may then communicate the retrieved data to server164via network162. In some cases, server164may be configured to provide a secure storage site for data collected from IMD12and/or external device38. In some cases, server164may assemble data in web pages or other documents for viewing by trained professionals, such as clinicians, via computing devices170A-170N. One or more aspects of the illustrated system ofFIG.3may be implemented with general network technology and functionality, which may include or be similar to that provided by the Medtronic CareLink® Network developed by Medtronic plc, of Dublin, Ireland. In some examples, the network technology and functionality may validate a communication transmitted from a device, such as a device purporting to be one or more of computing devices170A-170N (e.g., a purported remote computer located with a clinician) toward IMD12. In some examples, such security features may protect the cardiac therapy delivered by IMD12to patient8from being disrupted, hacked, or otherwise altered by communications originating from unauthorized sources. In some examples, one or more of computing devices170A-170N (e.g., device170A) may be a remote computer, such as a smartphone, tablet or other smart device located with a clinician, by which the clinician may program, receive alerts from, and/or interrogate IMD12. For example, the clinician may access patient requests, symptoms, undesired effects, and/or efficacy indications through device170A, such as when patient8is in between clinician visits, to check on one or more aspects of cardiac therapy delivered by IMD12, as desired. In some examples, the clinician may enter medical instructions for patient8into an application in device170A, such as an instruction for patient8to schedule a visit with the clinician or for patient8to seek other medical attention, based on data retrieved from IMD12by device170A, or based on other patient data known to the clinician. Device170A then may transmit the instructions for medical intervention to a receiving device located with patient8. FIGS.4-6are flow diagrams illustrating various example techniques related to controlling the delivery of cardiac pacing by IMD12to heart18according to a requested value of a therapy parameter, in accordance with examples of this disclosure. As described herein, the example techniques illustrated inFIGS.4-6may be employed using IMD12and an external device (e.g., external device38ofFIG.1A), in conjunction with patient8as described above with respect toFIGS.2and3.
Although described as being performed by IMD12, the techniques ofFIGS.4-6may be performed, in whole or in part, by processing circuitry and memory of other devices of a medical device system, as described herein. For example, although processing circuitry102of IMD12is described as carrying out most of the example techniques illustrated inFIGS.4-6for the sake of clarity, in other examples, one or more devices (e.g., a remote computer located with a clinician or other external device or server) may carry out one or more steps attributed herein to processing circuitry102of IMD12. FIG.4is a flow diagram illustrating an example technique for determining, in association with each of a plurality of heart position states, a respective modification of at least one therapy or sensing parameter. In the example technique ofFIG.4, processing circuitry102may determine a heart position state of patient8for each of a plurality of postures and/or for each of a plurality of respiration states (180), such as by receiving, from external device38or another external device, one or more indications by a user of postures and/or respiratory states of patient8for which system10should modify therapy and/or sensing parameters. For each of the plurality of heart position states, processing circuitry102then determines a difference between the heart position state and a baseline heart position state (182). In some examples, the baseline heart position state may be a position of heart18when patient8is supine and not breathing deeply, rapidly, or inhaling. Each heart position state of the plurality of heart position states may differ from the baseline heart position state by a magnitude of a distance between a portion of heart18(e.g., an apex) in the baseline state and the portion of the heart in the heart position state. In some examples, processing circuitry102may determine the distance as being a distance in a single plane (e.g., a transverse plane), or as being a distance in more than one plane (e.g., a transverse plane and a frontal plane). Next, for each heart position state of the plurality of heart position states, processing circuitry102determines one or more respective modifications of one or more of a sensing parameter, a therapy delivery parameter, and/or an electrode vector based on the difference between the heart position state and the baseline heart position (184). In some examples, a magnitude of a modification of a cardiac therapy parameter determined by processing circuitry102may correspond to the magnitude of the difference between the heart position state and the baseline heart position. For example, heart position states in which heart18is relatively further from the baseline heart position may be associated with a modification of a cardiac therapy parameter that is greater than a modification of the same cardiac therapy parameter associated with a heart position state that is relatively closer to the baseline heart position. Next, for each heart position state of the plurality of heart position states, processing circuitry102stores the determined one or more respective modifications of the one or more therapy or sensing parameters in a memory of system10, such as in modifications122of memory112of IMD12(186). In some examples, processing circuitry102may store such modifications in memory112in association with a particular heart position state.
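The proportionality described for FIG. 4, in which a larger displacement of a portion of the heart (such as the apex) from its baseline position yields a larger modification, could be realized along the lines of the sketch below. The coordinate convention, the per-millimetre scaling constant, and the cap are illustrative assumptions; the disclosure only states that the modification magnitude may correspond to the magnitude of the displacement, measured in one or more planes.

```python
import math

def apex_displacement_mm(baseline_xyz, state_xyz, include_frontal_plane=True):
    """Distance between the heart apex in the baseline state and in a given heart position state.

    Coordinates are (x, y, z) in millimetres; treating x/y as the transverse-plane components and
    z as the cranial-caudal component is an assumption of this sketch, not a definition from the text.
    """
    dx, dy, dz = (s - b for b, s in zip(baseline_xyz, state_xyz))
    if include_frontal_plane:
        return math.sqrt(dx * dx + dy * dy + dz * dz)   # displacement measured in more than one plane
    return math.sqrt(dx * dx + dy * dy)                 # displacement within a single (transverse) plane

def shock_energy_modification_j(displacement_mm, joules_per_mm=0.25, cap_j=10.0):
    """Map displacement to an added shock energy: larger for states farther from baseline."""
    return min(displacement_mm * joules_per_mm, cap_j)

d = apex_displacement_mm((0.0, 0.0, 0.0), (5.0, 0.0, 30.0))   # e.g., upright posture, deep inhalation
print(round(d, 1), "mm ->", round(shock_energy_modification_j(d), 1), "J added")
```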
For example, processing circuitry102may store a modification of a tachyarrhythmia detection parameter and an anti-tachyarrhythmia shock magnitude in association with a particular heart position state, such as a heart position state in which heart18is caudal to a baseline position. In other examples, processing circuitry may store a single modification of one or more therapy or sensing parameters in memory112in association with a heart position state. FIG.5is a flow diagram illustrating an example technique for modifying at least one therapy or sensing parameter value according to a modification associated with a current heart position state of heart18based on determining that a heart position state of heart18has changed. In the example technique ofFIG.5, processing circuitry102may determine a current position state of heart18of patient8(190). For example, as discussed above with respect toFIGS.1A-2, processing circuitry102may determine a current position state of heart18based on one or more of a current posture or a current respiration state of patient8, which processing circuitry102may determine based on signals received from electrodes of system10(e.g., pace/sense electrode34or sensing electrodes of electrodes116) or from one or more sensors108, such as in the manner described above with respect toFIG.2. Next, processing circuitry102determines whether the heart position state of heart18has changed (192). In some examples, processing circuitry102may determine whether the heart position state of heart18has changed relative to the baseline heart position state described above (e.g., with respect toFIG.4). In other examples, processing circuitry102may determine whether the heart position state of heart18has changed relative to a previously-determined heart position state of heart18determined by processing circuitry102, such as a last-determined heart position state. In either example, if processing circuitry102determines that the heart position state of heart18has changed ("YES" at192), then processing circuitry102modifies one or more of a cardiac therapy parameter value, a sensing parameter, or an electrode vector according to the modification122associated with the current heart position state of heart18in memory112(194). If processing circuitry102determines that the heart position state of heart18has not changed ("NO" at192), then processing circuitry102returns to (190), and again determines a current heart position state of heart18(e.g., an updated current heart position state of heart18). In some examples, processing circuitry102may determine a current heart position state of heart18multiple times per respiratory cycle of patient8. For example, processing circuitry102may determine a current heart position of heart18during an inhalation phase and during an exhalation phase of a respiratory cycle. In other examples, processing circuitry102may determine a current heart position of heart18according to a different timing schedule, such as during a predetermined length of time at predetermined intervals. In some examples, processing circuitry102may determine a respiratory state of patient8more frequently than a posture of patient8. For example, a heart position state of heart18may change more frequently due to a respiration state of patient8than due to a posture state of patient8.
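As a rough illustration of the timing considerations just described, the sketch below polls respiration more often than posture and would re-derive the heart position state on each respiration update. The polling intervals are placeholders; the disclosure leaves the schedule to the clinician or the designer.

```python
RESPIRATION_CHECK_PERIOD_S = 2   # hypothetical: respiration is sampled every couple of seconds
POSTURE_CHECK_PERIOD_S = 30      # hypothetical: posture changes less often, so it is sampled less often

def polling_schedule(duration_s: int):
    """Yield (time in seconds, sensors to read); each respiration read re-derives the heart position state."""
    for t in range(duration_s):
        reads = []
        if t % RESPIRATION_CHECK_PERIOD_S == 0:
            reads.append("respiration")
        if t % POSTURE_CHECK_PERIOD_S == 0:
            reads.append("posture")
        if reads:
            yield t, reads

for t, reads in polling_schedule(7):
    print(t, reads)
```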
In such examples, processing circuitry102may determine a heart position state of heart18each time processing circuitry102determines a current respiratory state of patient8, regardless of whether processing circuitry102has substantially concurrently determined a new posture state of patient8. In any such examples, a clinician may program intervals at which processing circuitry102may determine one or more of a posture, respiration state, or heart position state into memory112when the clinician programs other data into memory112(e.g., as described above with respect toFIG.2). FIG.6is a flow diagram illustrating an example technique for modifying an electrode vector based on determining that a heart position state of heart18has changed. In the example technique ofFIG.6, processing circuitry102may determine a current position state of heart18of patient8(200), such as described above with respect toFIG.5. Next, processing circuitry102determines whether the heart position state of heart18corresponds to one or more of an upright posture, inhalation, deep breathing, or any other posture and/or respiration state associated with heart18being positioned relatively low in the thorax of patient8(202). If processing circuitry102determines that the heart position state of heart18corresponds to one or more of an upright posture, inhalation, deep breathing, or any other posture and/or respiration state associated with heart18being positioned relatively low in the thorax of patient8(“YES” at202), then processing circuitry102modifies an electrode vector (e.g., a sensing vector, a pacing vector, or an anti-tachyarrhythmia vector) according to the modification122associated with the current heart position state of heart18in memory112(204). If processing circuitry102determines that the heart position state of heart18does not correspond to one or more of an upright posture, inhalation, deep breathing, or any other posture and/or respiration state associated with heart18being positioned relatively low in the thorax of patient8(“NO” at202), then processing circuitry102returns to (200) and again determines a current heart position state of heart18(e.g., an updated current heart position state of heart18), such as in the manner described above with respect toFIG.5. In this manner, processing circuitry102may enable system10to more accurately sense one or more aspects of a cardiac electrical signal and/or deliver more efficacious and/or more efficient cardiac therapy to heart18. In any such examples, improving sensing accuracy or efficacy and/or efficiency of cardiac therapy delivery advantageously may result in one or more of an improved clinical outcome of patient8or an improved longevity of a power source of system10. Although processing circuitry102of IMD12is described above as being configured to perform one or more of the steps of the techniques described with respect toFIGS.1-6, any steps of the techniques described herein may be performed by processing circuitry of the other devices. For example, processing circuitry of a remote computer located with a clinician (e.g., computing device170A), or of any other suitable implantable or external device or server, may be configured to perform one or more of the steps described as being performed by processing circuitry102of IMD12. Such other implantable or external devices may include, for example, an implantable or external monitoring device, or any other suitable device. 
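The decision flow of FIGS. 5 and 6 can be condensed into a short monitoring loop, sketched below. The posture and respiration labels, the electrode names, and the rule of dropping the most superior (here, most distal) electrode when the heart sits relatively low in the thorax are assumptions chosen to be consistent with the description; they are not the device's actual state definitions or vectors.

```python
# Posture/respiration combinations for which the heart is expected to sit relatively low in the thorax
LOW_HEART_STATES = {("UPRIGHT", "INHALATION"), ("UPRIGHT", "DEEP_BREATHING")}

def current_heart_position_state(posture: str, respiration: str) -> tuple:
    """Combine posture and respiration into a heart position state (FIG. 5, step 190; FIG. 6, step 200)."""
    return (posture, respiration)

def select_shock_vector(state: tuple, full_vector=("CAN", "COIL_PROXIMAL", "COIL_DISTAL")):
    """Drop the most superior (here, most distal lead) electrode when the heart sits low (FIG. 6, step 204)."""
    if state in LOW_HEART_STATES:
        return full_vector[:-1]      # e.g., ("CAN", "COIL_PROXIMAL")
    return full_vector

previous_state = None
for posture, respiration in [("SUPINE", "EXHALATION"), ("UPRIGHT", "INHALATION")]:
    state = current_heart_position_state(posture, respiration)
    if state != previous_state:      # FIG. 5, step 192: the heart position state has changed
        vector = select_shock_vector(state)
        print(state, "->", vector)   # stand-in for reprogramming sensing/therapy delivery circuitry
        previous_state = state
```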
Various aspects of the techniques may be implemented within one or more processors, including one or more microprocessors, DSPs, ASICs, FPGAs, or any other equivalent integrated or discrete logic circuitry, as well as any combinations of such components, embodied in programmers, such as physician or patient programmers, electrical stimulators, or other devices. The term "processor" or "processing circuitry" may generally refer to any of the foregoing logic circuitry, alone or in combination with other logic circuitry or any other equivalent circuitry. In one or more examples, the functions described in this disclosure may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored, as one or more instructions or code, on a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media forming a tangible, non-transitory medium. Instructions may be executed by one or more processors, such as one or more DSPs, ASICs, FPGAs, general purpose microprocessors, or other equivalent integrated or discrete logic circuitry. Accordingly, the terms "processor" or "processing circuitry" as used herein may refer to one or more of any of the foregoing structures or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules. Depiction of different features as modules or units is intended to highlight different functional aspects and does not necessarily imply that such modules or units must be realized by separate hardware or software components. Rather, functionality associated with one or more modules or units may be performed by separate hardware or software components, or integrated within common or separate hardware or software components. Also, the techniques could be fully implemented in one or more circuits or logic elements. The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including an IMD, an external programmer, a combination of an IMD and external programmer, an integrated circuit (IC) or a set of ICs, and/or discrete electrical circuitry, residing in an IMD and/or external programmer. Experimental Results The following discussion forms part of this disclosure. The following discussion may provide many details and examples consistent with this disclosure. As described further below, one or more studies and experiments were carried out to evaluate one or more aspects of examples of the disclosure. However, the disclosure is not limited by the studies and experiments. For example, the details and examples of the following discussion may quantify variation of cardiac signal sensing by an EV-ICD based on respiration and posture. The modeling in such examples may similarly be used to quantify variation in pacing and/or anti-tachyarrhythmia shock parameters desired in response to varying posture and respiration states to achieve therapeutic benefit with efficient use of power resources. The following examples may also further illustrate movement of the heart with changing posture and respiration state, as well as the effect of such movement on the position of the heart relative to the electrical field generated using more superior/distal electrode(s) for pacing and/or shock therapy.
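The kind of per-posture tailoring this modeling is meant to inform can be sketched as follows: given modeled sensed amplitudes and defibrillation thresholds (DFTs) for each posture/respiration state, derive a sensing threshold and a shock energy for that state. All numeric values and scaling factors below are hypothetical assumptions; the study results themselves appear in the discussion that follows.

```python
def per_state_settings(modeled, baseline_state="SUP-INH", sense_fraction=0.5,
                       dft_margin_factor=1.5, instability_ratio=1.25):
    """Derive per-posture/respiration sensing thresholds and shock energies from modeled data.

    `modeled` maps a posture/respiration label to hypothetical modeled values:
    {"nsr_amp_mvpk": ..., "dft_j": ...}. The scaling constants are illustrative only.
    """
    baseline_dft = modeled[baseline_state]["dft_j"]
    settings = {}
    for state, values in modeled.items():
        threshold_mv = sense_fraction * values["nsr_amp_mvpk"]   # keep threshold below the sensed R-wave
        unstable = values["dft_j"] > instability_ratio * baseline_dft
        shock_j = dft_margin_factor * (values["dft_j"] if unstable else baseline_dft)
        settings[state] = {"sense_threshold_mv": round(threshold_mv, 2),
                           "shock_energy_j": round(shock_j, 1)}
    return settings

modeled = {  # hypothetical modeled outputs for one subject
    "SUP-INH": {"nsr_amp_mvpk": 2.0, "dft_j": 15.0},
    "SUP-EXH": {"nsr_amp_mvpk": 1.5, "dft_j": 16.0},
    "UP-INH":  {"nsr_amp_mvpk": 2.1, "dft_j": 22.0},
}
print(per_state_settings(modeled))
```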
In some examples, an EV ICD uses a defibrillation lead placed outside the heart in the anterior mediastinal space. In that location, the electrodes may have some freedom of movement relative to the heart with changing posture. The degree of influence of this motion on electrograms acquired via electrodes in this novel implant location has not been systematically characterized. Studies were carried out to quantify the variation in sensed signals due to changes in posture and respiration. A first modeling study used sets of MRI scans acquired in various postures and respiratory states to derive anthropometric data quantifying organ motion and shape relative to a supine, end inhalation posture representative of the implant condition. Detailed data for critical anatomy, such as the heart and epicardial fat, was obtained from high resolution ex vivo MRI scans and fused with the lower resolution MRIs to create anatomies with appropriate levels of detail for accurate simulation. Matched sets of computational meshes were created, representing a subject in various poses, and then the ICD was "implanted" multiple times in matched positions across these postures. Epicardial potentials were separately estimated from body surface recordings and mapped onto the myocardial surface. FIG.7is a diagram illustrating four MRI scans (top) and corresponding models (bottom) showing the location of a patient's heart within the patient (e.g., within the thoracic cavity) at inhalation (INH) of the patient in a supine (SUP), lying down on left side (LD), lying down on right side (RD), and upright (UP) posture. As shown, the relative position of the patient's heart was different for each of the different postures. In one patient, 20 mm cranial/caudal movement of the heart was observed during tidal breathing and 60 mm of movement was observed during deep breathing. The epicardial potential data was manually annotated with scoring windows identifying various types of beats such as normal sinus rhythm (NSR) or ventricular tachycardia (VT). An automated system computed the cardiac signals at the ICD's electrodes for more than 2000 datasets, scored them automatically and stored these results in a database for statistical analysis. From the motion analysis of the MRI image data it was found that the average cranial-caudal motion of the heart apex was 34 mm (range: 3 to 70 mm, N=9). An example of the predicted signal, with scoring windows, is shown inFIG.8for four combinations of the supine (SUP), upright (UP), inhalation (INH) and exhalation (EXH) denoted by SUP-INH, SUP-EXH, UP-INH and UP-EXH. In this single example, the baseline to peak amplitude, in millivolts (mVpk), for an NSR complex ranges from 1.46 to 2.09 mVpk while a VT complex ranges from 0.75 to 2.22 mVpk. For both complexes the minimum amplitude is associated with the supine, exhalation posture (SUP-EXH) and the maximum amplitude is for the upright, inhalation posture (UP-INH). Results from the modelling were useful to assess both signal amplitude and postural stability. The results were also used to test guidelines for device and lead implant locations to assure adequate signal levels in all postures for successful arrhythmia detection. The inclusion of postural variation is essential for assuring ambulatory performance of an EV ICD with electrodes outside the heart. FIGS.9A and9Bare conceptual illustrations of MRI images of a patient's heart102in a supine posture (FIG.9A) and an upright posture (FIG.9B) during inhalation along with an EV ICD104.
The shaded areas in bothFIGS.9A and9Bindicate areas of high current density from the delivery of defibrillation therapy using defibrillation vector D1, which includes a housing electrode in combination with at least one lead electrode. As shown, upon standing and during inhalation, heart102moved caudally as indicated by the arrow between the images. In some examples, this may result in the defibrillation vector D1becoming less effective during defibrillation. As such, for some cases, the defibrillation efficacy may be improved by sensing the upright posture and, in response to sensing the upright posture, disabling a vector (e.g., vector D1), increasing the defibrillation energy, or both. FIG.10is a graph illustrating DFT variation with posture and respiration for a variety of modelled patients. The stability graph compares defibrillation threshold (DFT) in the supine, inhalation case (X axis) with the value in all other postures (Y axis). For some patients (HB-F-004-A and HB-F-004-B) the DFT was stable versus posture while for others (HB-M-005-A and HB-M-005-B) it may vary widely. It was believed that patients with an unstable DFT may benefit from a device that increases shock energy when the patient is sensed to be in a posture that has been determined to be problematic. Various aspects of the disclosure have been described. These and other aspects are within the scope of the following claims and clauses. Clause 1. A method for controlling delivery of anti-tachyarrhythmia shock therapy by a medical device system comprising a plurality of electrodes for delivering the anti-tachyarrhythmia shock therapy, the method comprising: storing, in a memory of the medical device system, a respective value for each of a plurality of anti-tachyarrhythmia shock therapy parameters and, in association with each of a plurality of heart position states, a respective modification of at least one of the anti-tachyarrhythmia shock therapy parameters; and, by processing circuitry of the medical device system: determining a current one of the plurality of heart position states of a patient; modifying the at least one anti-tachyarrhythmia shock therapy parameter value according to the modification associated with the current heart position state; and controlling the delivery of the anti-tachyarrhythmia shock therapy according to the modified at least one anti-tachyarrhythmia shock therapy parameter value. Clause 2. The method of clause 1, further comprising delivering, by therapy delivery circuitry of the medical device system, the anti-tachyarrhythmia shock therapy according to the modified at least one anti-tachyarrhythmia shock therapy parameter value. Clause 3. The method of clause 1 or 2, wherein the medical device system comprises a medical electrical lead comprising a proximal end coupled to an implantable cardioverter-defibrillator, and a distal portion including the at least one of the plurality of electrodes and implanted substantially within an anterior mediastinum of the patient. Clause 4. The method of any of clauses 1 to 3, wherein the plurality of heart position states comprises a plurality of postures of the patient. Clause 5. The method of any of clauses 1 to 4, wherein determining the current heart position state of the patient comprises determining at least one of a respiratory phase, a respiratory rate, or a respiratory depth of the patient. Clause 6.
The method of any of clauses 1 to 5, wherein modifying the at least one anti-tachyarrhythmia shock therapy parameter value comprises modifying a tachyarrhythmia detection parameter. Clause 7. The method of clause 6, wherein modifying the tachyarrhythmia detection parameter comprises modifying a cardiac electrogram sensing amplitude threshold. Clause 8. The method of any of clauses 1 to 7, wherein modifying the at least one anti-tachyarrhythmia shock therapy parameter value comprises modifying an anti-tachyarrhythmia shock magnitude. Clause 9. The method of any of clauses 1 to 8, wherein the plurality of anti-tachyarrhythmia shock therapy parameters comprises a sensing vector comprising at least two of the plurality of electrodes and a shock vector comprising at least two of the plurality of electrodes, and wherein modifying the at least one anti-tachyarrhythmia shock therapy parameter value comprises modifying at least one of the sensing vector or the shock vector. Clause 10. The method of clause 9, wherein modifying at least one of the sensing vector or the shock vector comprises removing one of the at least two of the plurality of electrodes from the at least one of the sensing vector or the shock vector. Clause 11. The method of clause 10, wherein removing the one of the at least two of the plurality of electrodes comprises removing a most superior one of the at least two of the plurality of electrodes. Clause 12. The method of clause 11, wherein the distal portion of the medical electrical lead includes the at least two of the plurality of electrodes, and wherein removing a most superior one of the at least two of the plurality of electrodes comprises removing a most distal one of the at least two of the plurality of electrodes. Clause 13. The method of clause 11, wherein determining the heart position state of the patient comprises at least one of determining that the patient is in an upright posture; or determining that a respiratory state of the patient comprises at least one of an inhalation phase, a respiratory depth satisfying a respiratory depth threshold, or a respiratory rate satisfying a respiratory rate threshold. Clause 14. A medical device system for delivering anti-tachyarrhythmia shock therapy, the system comprising: a plurality of electrodes; a memory configured to store a respective value for each of a plurality of anti-tachyarrhythmia shock therapy parameters and, in association with each of a plurality of heart position states, a respective modification of at least one of the anti-tachyarrhythmia shock therapy parameters; and processing circuitry configured to: determine a current one of the plurality of heart position states of the patient; modify the at least one anti-tachyarrhythmia shock therapy parameter value according to the modification associated with the current heart position state; and control the delivery of the anti-tachyarrhythmia shock therapy via the electrodes according to the modified at least one anti-tachyarrhythmia shock therapy parameter value. Clause 15. The medical device system of clause 14, further comprising therapy delivery circuitry configured to deliver the anti-tachyarrhythmia shock therapy via the electrodes according to the modified at least one anti-tachyarrhythmia shock therapy parameter value. Clause 16. 
The medical device system of clause 14 or 15, further comprising: an implantable cardioverter-defibrillator; and a medical electrical lead comprising a proximal end coupled to the implantable cardioverter-defibrillator and a distal portion including the at least one of the plurality of electrodes configured for implantation substantially within an anterior mediastinum of the patient. Clause 17. The medical device system of any of clauses 14 to 16, wherein the plurality of heart position states comprises a plurality of postures. Clause 18. The medical device system of any of clauses 14 to 17, wherein the processing circuitry is configured to determine the current heart position state of the patient by at least determining at least one of a respiratory phase, a respiratory rate, or a respiratory depth of the patient. Clause 19. The medical device system of any of clauses 14 to 18, wherein the processing circuitry is configured to modify the at least one anti-tachyarrhythmia shock therapy parameter value by at least modifying a tachyarrhythmia detection parameter. Clause 20. The medical device system of clause 19, wherein the processing circuitry is configured to modify the tachyarrhythmia detection parameter by at least modifying a cardiac electrogram sensing amplitude threshold. Clause 21. The medical device system of any of clauses 14 to 20, wherein the processing circuitry is configured to modify the at least one anti-tachyarrhythmia shock therapy parameter value by at least modifying an anti-tachyarrhythmia shock magnitude. Clause 22. The medical device system of any of clauses 14 to 21, wherein the plurality of anti-tachyarrhythmia shock therapy parameters comprises a sensing vector comprising at least two of the plurality of electrodes and a shock vector comprising at least two of the plurality of electrodes, and wherein the processing circuitry is configured to modify the at least one anti-tachyarrhythmia shock therapy parameter value by at least modifying at least one of the sensing vector or the shock vector. Clause 23. The medical device system of clause 22, wherein the processing circuitry is configured to modify at least one of the sensing vector or the shock vector by at least removing one of the at least two of the plurality of electrodes from the at least one of the sensing vector or the shock vector. Clause 24. The medical device system of clause 23, wherein the processing circuitry is configured to remove the one of the at least two of the plurality of electrodes by at least removing a most superior one of the at least two of the plurality of electrodes. Clause 25. The medical device system of clause 24, wherein the distal portion of the medical electrical lead includes the at least two of the plurality of electrodes, and wherein the processing circuitry is configured to remove a most superior one of the at least two of the plurality of electrodes by at least removing a most distal one of the at least two of the plurality of electrodes. Clause 26. The medical device system of clause 24, wherein the processing circuitry is configured to determine the heart position state of the patient by at least one of: determining that the patient is in an upright posture; or determining that a respiratory state of the patient comprises at least one of an inhalation phase, a respiratory depth satisfying a respiratory depth threshold, or a respiratory rate satisfying a respiratory rate threshold. Clause 27. 
A method for controlling cardiac electrogram sensing or delivery of cardiac therapy by an implantable medical device system comprising a plurality of electrodes, the method comprising, by processing circuitry of the medical device system: determining a heart position state of the patient; modifying a vector comprising at least two of the plurality of electrodes based on the determined heart position state; and controlling the medical device system to at least sense a cardiac electrogram or deliver cardiac therapy via the modified vector comprising the at least two of the plurality of electrodes. Clause 28. The method of clause 27, further comprising delivering, by therapy delivery circuitry of the medical device system, the cardiac therapy via the modified vector comprising the at least two of the plurality of electrodes. Clause 29. The method of clause 27 or 28, wherein the medical device system comprises a medical electrical lead including a proximal end coupled to an implantable medical device, and a distal portion including the at least two of the plurality of electrodes and implanted substantially within an anterior mediastinum of the patient. Clause 30. The method of clause 29, wherein the implantable medical device comprises an implantable cardioverter-defibrillator and the plurality of electrodes comprises a plurality of electrodes for delivering anti-tachyarrhythmia shock therapy, wherein the cardiac therapy comprises the anti-tachyarrhythmia shock therapy, and wherein controlling the medical device system to deliver cardiac therapy via the modified vector comprising the at least two of the plurality of electrodes comprises controlling the medical device system to deliver the anti-tachyarrhythmia shock therapy. Clause 31. The method of clause 29, wherein the cardiac therapy comprises cardiac pacing, and wherein controlling the medical device system to deliver cardiac therapy via the modified vector comprising the at least two of the plurality of electrodes comprises controlling the medical device system to deliver the cardiac pacing. Clause 32. The method of any of clauses 27 to 31, wherein the plurality of heart position states comprises a plurality of postures of the patient. Clause 33. The method of any of clauses 27 to 32, wherein determining the current heart position state of the patient comprises determining at least one of a respiratory phase, a respiratory rate, or a respiratory depth of the patient. Clause 34. The method of any of clauses 27 to 32, wherein modifying the vector comprising at least two of the plurality of electrodes based on the determined heart position state comprises removing one of the at least two of the plurality of electrodes from the vector. Clause 35. The method of clause 34, wherein removing the one of the at least two of the plurality of electrodes from the vector comprises removing a most superior one of the at least two of the plurality of electrodes. Clause 36. The method of clause 35, wherein determining the heart position state of the patient comprises at least one of: determining that the patient is in an upright posture; or determining that a respiratory state of the patient comprises at least one of an inhalation phase, a respiratory depth satisfying a respiratory depth threshold, or a respiratory rate satisfying a respiratory rate threshold. Clause 37. 
The method of any of clauses 27-36, the method further comprising: storing, in a memory of the medical device system, a respective value for each of a plurality of cardiac therapy parameters and, in association with each of a plurality of heart position states, a respective modification of at least one of the cardiac therapy parameters; and modifying the at least one cardiac therapy parameter value according to the modification associated with the heart position state of the patient, wherein controlling the medical device system to at least deliver the cardiac therapy via the modified vector comprising the at least two of the plurality of electrodes comprises controlling the delivery of the cardiac therapy according to the modified at least one cardiac therapy parameter value. Clause 38. The method of clause 37, wherein modifying the at least one cardiac therapy parameter value comprises modifying a tachyarrhythmia detection parameter. Clause 39. The method of clause 38, wherein modifying the tachyarrhythmia detection parameter comprises modifying a cardiac electrogram sensing amplitude threshold. Clause 40. The method of clause 37, wherein modifying the at least one cardiac therapy parameter value comprises modifying an anti-tachyarrhythmia shock magnitude. Clause 41. The method of clause 37, wherein modifying the at least one cardiac therapy parameter value comprises modifying an anti-tachyarrhythmia pacing parameter. Clause 42. A medical device system for controlling cardiac electrogram sensing or delivery of cardiac therapy, the system comprising: a plurality of electrodes; and processing circuitry configured to: determine a heart position state of the patient; modify a vector comprising at least two of the plurality of electrodes based on the determined heart position state; and control the medical device system to at least sense a cardiac electrogram or deliver cardiac therapy via the modified vector comprising the at least two of the plurality of electrodes. Clause 43. The medical device system of clause 42, further comprising therapy delivery circuitry configured to deliver the cardiac therapy via the modified vector comprising the at least two of the plurality of electrodes. Clause 44. The medical device system of clause 42 or 43, wherein the medical device system comprises a medical electrical lead including a proximal end coupled to an implantable medical device, and a distal portion including the at least two of the plurality of electrodes and implanted substantially within an anterior mediastinum of the patient. Clause 45. The medical device system of clause 44, wherein the implantable medical device comprises an implantable cardioverter-defibrillator and the plurality of electrodes comprises a plurality of electrodes for delivering anti-tachyarrhythmia shock therapy, wherein the cardiac therapy comprises the anti-tachyarrhythmia shock therapy, and wherein controlling the medical device system to deliver cardiac therapy via the modified vector comprising the at least two of the plurality of electrodes comprises controlling the medical device system to deliver the anti-tachyarrhythmia shock therapy. Clause 46.
The medical device system of clause 44, wherein the plurality of electrodes comprises a plurality of electrodes for delivering cardiac pacing wherein the cardiac therapy comprises anti-arrhythmia pacing, and wherein controlling the medical device system to deliver cardiac therapy via the modified vector comprising the at least two of the plurality of electrodes comprises controlling the medical device system to deliver the anti-arrhythmia pacing. Clause 47. The medical device system of any of clauses 42 to 46, wherein the plurality of heart positions comprises a plurality of postures. Clause 48. The medical device system of any of clauses 42 to 47, wherein determining the current heart position state of the patient comprises determining at least one of a respiratory phase, a respiratory rate, or a respiratory depth of the patient. Clause 49. The medical device system of any of clauses 42 to 47, wherein modifying the vector comprising at least two of the plurality of electrodes based on the determined heart position state comprises removing one of the at least two of the plurality of electrodes from the vector. Clause 50. The medical device system of clause 49, wherein removing the one of the at least two of the plurality of electrodes from the vector comprises removing a most superior one of the at least two of the plurality of electrodes. Clause 51. The medical device system of clause 50, wherein determining the heart position state of the patient comprises at least one of: determining that the patient is in an upright posture; or determining that a respiratory state of the patient comprises at least one of an inhalation phase, a respiratory depth satisfying a respiratory depth threshold, or a respiratory rate satisfying a respiratory rate threshold. Clause 52. The medical device system of any of clauses 42-51, further comprising a memory, wherein the processing circuitry is further configured to: store, in the memory, a respective value for each of a plurality of cardiac therapy parameters and in association with each of a plurality of heart position states, a respective modification of at least one of the cardiac therapy parameters; and modify the at least one cardiac therapy parameter value according to the modification associated with the heart position state of the patient, wherein the processing circuitry is configured to control the implantable medical device to at least deliver the cardiac therapy via the modified vector comprising the at least two of the plurality of electrodes by at least controlling the delivery of the cardiac therapy according to the modified at least one cardiac therapy parameter value. Clause 53. The medical device system of clause 52, wherein the processing circuitry is configured to modify the at least one cardiac therapy parameter value by at least modifying a tachyarrhythmia detection parameter. Clause 54. The medical device system of clause 53, wherein the processing circuitry is configured to modify the tachyarrhythmia detection parameter by at least modifying a cardiac electrogram sensing amplitude threshold. Clause 55. The medical device system of clause 52, wherein the processing circuitry is configured to modify the at least one cardiac therapy parameter value by at least modifying an anti-tachyarrhythmia shock magnitude. Clause 56. The medical device system of clause 52, wherein the processing circuitry is configured to modify the at least one cardiac therapy parameter value by at least modifying an anti-tachyarrhythmia pacing parameter. 
Clause 57. A method comprising any method described herein, or any combination of the methods described herein. Clause 58. A method comprising any combination of the methods of clauses 1-13 and 27-41. Clause 59. A system comprising means for performing the method of any of clauses 1-13, 27-41, 57 or 58. Clause 60. A non-transitory computer-readable storage medium comprising instructions stored thereon that, when executed by processing circuitry cause the processing circuitry to perform the method of any of clauses 1-13, 27-41, 57 or 58.
114,348
11857799
DETAILED DESCRIPTION OF THE INVENTION A wearable selective biophoton reflector will now be described. In the following exemplary description, numerous specific details are set forth in order to provide a more thorough understanding of embodiments of the invention. It will be apparent, however, to an artisan of ordinary skill that the present invention may be practiced without incorporating all aspects of the specific details described herein. In other instances, specific features, quantities, or measurements well known to those of ordinary skill in the art have not been described in detail so as not to obscure the invention. Readers should note that although examples of the invention are set forth herein, the claims, and the full scope of any equivalents, are what define the metes and bounds of the invention. FIG.1Ashows an illustrative embodiment of the invention100, which in this example is worn by a human subject on the arm or wrist with a wristband or bracelet101to hold the device100against or near to the subject's skin102. As described below, the side of device100against the skin includes a transparent window that collects biophotons emitted from the skin under the device. In one or more embodiments, the device100may be of any shape and size, and it may be placed on any part or parts of the body. For example, without limitation, it may be worn as a pendant (as shown inFIG.1C) or collar, as an armband, as a band around the ankle or leg, as a headband, or it may be integrated into any article of clothing or accessory. Device100may not include or require any power source or connection to external power. It may be a passive device that collects, filters, and reflects biophotons emitted from the skin102of the subject's body. Benefits of the lack of power source or power connection include lighter weight, lower cost, higher reliability, and much longer longevity. FIG.1Bshows a high-level architectural diagram of device100illustrating the interaction of the device with the biophotons emitted from subject102. Subject102emits biophoton radiation of various wavelengths, including for example radiation111at one wavelength and radiation112at a different (shorter) wavelength. In this application, the device100is configured to reflect only a narrow band around wavelength112, to optimize for the health benefits of that particular wavelength. Radiation112emitted from the skin is therefore reflected as radiation113that is directed back towards the skin of subject102. Radiation111is not reflected. The specific wavelength or wavelengths selected for an embodiment of the invention may differ across applications. Illustrative wavelengths that may be selected for in one or more embodiments may include for example, without limitation, 550 nanometers, 630 nanometers, 632 nanometers, 660 nanometers, 694 nanometers, 810 nanometers, and 980 nanometers. The band of selected wavelengths around the desired wavelength may be for example in the range of 10 nanometers to 20 nanometers in one or more embodiments. As an example, a device configured for 630 nanometers and 810 nanometers may select wavelengths in the ranges of 620-640 and 800-820 nanometers; wavelengths outside these ranges may be blocked or substantially attenuated. One or more embodiments may select wavelengths with filters of any desired bandwidths around the desired center wavelengths. The reflected biophotons113may be absorbed by any of the cells114of the subject. 
For example, in one or more embodiments these biophotons may interact with mitochondria115to increase energy production in the cell, potentially providing health benefits. Radiation of wavelength 810 nanometers (and to some extent of 635 nanometers as well) may be absorbed by cytochrome c oxidase, which is a mitochondrial chromophore, as described in Gupta et al. (referenced above in the Description of the Related Art). FIGS.1C and1Dshow another illustrative embodiment of the invention that may be functionally similar to the embodiment ofFIGS.1A and1B, but is worn as a pendant instead of on the wrist. Device100ahangs from a necklace or band101aaround the neck of subject102a. The device may be of any size and shape.FIG.1Dshows a high-level architectural diagram of device100aillustrating the interaction of the device with the biophotons emitted from subject102a. The components of this architectural diagram are similar to those of the device shown inFIG.1B: the device selectively reflects biophoton radiation112aof a desired wavelength, resulting in reflected biophotons113athat are directed back towards the skin of the subject102a; other wavelengths such as biophotons111aare not reflected. As with the device100ofFIGS.1A and1B, in one or more embodiments of pendant100areflected biophotons may for example be absorbed by mitochondria115aof cells114aof the subject, increasing energy production or producing other health benefits. FIG.2shows an exploded view of illustrative components of embodiment100. The shape and size of these components may vary across embodiments. Some embodiments may have only a subset of these components.FIG.2also shows how illustrative biophoton waves111,112, and212interact with these components. Components to the right of the figure are closer to the skin of subject102when the device100is worn. Embodiment100has a housing that contains or holds the other components; in this embodiment the housing has a front portion201aand a back cap201bthat is attached to the front portion. (In this discussion, the front of the device is the side closest to the subject's skin when worn, and one component is behind another component if it is further from the subject's skin.) Housing parts201aand201bmay be for example plastic and may be 3D printed. A clear window202is at the front (closest to the skin of subject102); this window may protect the other components and may pass the biophoton wavelengths of interest with minimal attenuation. An illustrative material that may be used in one or more embodiments for the window202is Gorilla Glass® of thickness 1.1 mm, which transmits wavelengths between 350 nanometers and 2200 nanometers. Behind window202is a polarizing filter203; this polarizer may or may not be present in one or more embodiments. Polarizer203may be for example a polarizing film that is coupled to the front or back of clear window202, or to the front or back of filter204(described below). The polarizer, when present, selects for waves of a particular polarity. For example, waves111and112, which vibrate in the plane of the page of the figure, may be passed through polarizer203unchanged; wave212, which vibrates in a plane orthogonal to the plane of the page, may be blocked by polarizer203. In some applications selecting for biophoton waves of a particular polarity may enhance effectiveness of the device. Behind polarizer203is a filter204that may select for specific wavelengths or wavelength ranges.
(In one or more embodiments, the filter204may be in front of polarizer203instead of behind it as shown inFIG.2; in either case incoming light is both polarized and filtered.) In this example, filter204blocks wave111, but passes wave112through the filter. An illustrative filter that may be used in one or more embodiments is for example Edmund Optics filter #67-916, with Central Wavelength (CWL) of 810 nanometers, and a bandwidth (FWHM—full width at half maximum) of 10 nanometers. This filter may be appropriate when 810 nanometers is the desired wavelength to reflect; other applications may use different filters that select for other wavelengths. One or more embodiments may combine multiple filters to obtain a set of desired wavelengths. Behind filter204is a mirror205. This mirror reflects the waves that have passed through polarizer203and filter204back towards the subject's skin. In the example shown inFIG.2, wave112is reflected to wave113that returns to the subject through the other components. In one or more embodiments, mirror205may be for example a parabolic mirror that reflects incoming waves to a common direction parallel to the central axis of the device, ensuring that waves emitted at varying angles from the skin are reflected back into the body. In one or more embodiments the parabolic mirror may be only approximately parabolic; for example, it may be spherical. Mirror205may be a gold-coated parabolic mirror in one or more embodiments. One or more embodiments may use for example Edmund Optics protected gold spherical mirror #32-813. In one or more embodiments the mirror may reflect a broad spectrum of wavelengths that includes the wavelengths selected by the filter. For example, the Edmund Optics mirror described above reflects at least wavelengths in the range of 700 nanometers to 10,000 nanometers. One or more embodiments of the invention may use multiple filters to select multiple wavelengths of biophotons that are reflected towards the user's body. This approach may be valuable when the desired beneficial effects can be generated or enhanced with more than one band of wavelengths.FIG.3shows an illustrative embodiment of a device100bwith multiple filters. The device is shown in an exploded view similar to the view of device100inFIG.2. The window202, polarizer203(if used), and back cap201bmay be identical to or similar to the equivalent components in device100ofFIG.2. Instead of a single filter like filter204of device100, device100bhas three filters204a,204b, and204cthat select different associated wavelengths311a,311b, and311c, respectively. Illustrative wavelengths may be for example 550 nanometers for wavelength311a,694 nanometers for wavelength311b, and 632 nanometers for wavelength311c. Illustrative filters that may be used in one or more embodiments may include for example: for filter204a, Edmund Optics filter #65-644, CWL 550 nm, FWHM 10 nm, diameter 12.5 mm; for filter204b, Edmund Optics filter #65-660, CWL 694 nm, FWHM 10 nm, diameter 12.5 mm; and for filter204c, Edmund Optics filter #65-711, CWL 632 nm, FWHM 10 nm, diameter 25 mm. The front cap201chas three openings301a,301b, and301cthat correspond to filters204a,204b, and204c, respectively. Mirror205bmay be identical to or similar to mirror205ofFIG.2; alternatively, in one or more embodiments mirror205bmay be for example an aluminum coated concave mirror such as Edmund Optics mirror #43-471, which reflects wavelengths between 400 nanometers and 2000 nanometers.
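The pass/block behaviour of these band-pass filters reduces to an interval test around each filter's central wavelength. The sketch below assumes an idealized rectangular passband of width FWHM centred on the CWL, which is a simplification of a real interference filter's transmission curve; wavelengths outside every passband are treated as blocked, whereas in practice they would be strongly attenuated rather than perfectly removed.

```python
# Idealized filters: (central wavelength in nm, FWHM bandwidth in nm), per the examples in the text.
# A single-filter device such as device 100 might instead use [(810, 10)].
FILTERS_NM = [(550, 10), (694, 10), (632, 10)]

def is_reflected(wavelength_nm: float, filters=FILTERS_NM) -> bool:
    """True if a biophoton of this wavelength passes at least one filter and so reaches the mirror."""
    return any(abs(wavelength_nm - cwl) <= fwhm / 2 for cwl, fwhm in filters)

for wl in (550, 560, 632, 694, 810):
    print(wl, "nm ->", "reflected" if is_reflected(wl) else "blocked")
```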
The arrangement, shapes, sizes, and number of filters shown inFIG.3are illustrative; one or more embodiments may use any number of filters in any configuration to select for any desired combination of wavelengths. In one or more embodiments, it may be beneficial to use the wearable selective biophoton reflector with one or more oral supplements that elevate one or more of Glutathione and Nitric Oxide to further enhance mitochondrial function. However, the biophoton reflector may be used with or without oral supplements. While the invention herein disclosed has been described by means of specific embodiments and applications thereof, numerous modifications and variations could be made thereto by those skilled in the art without departing from the scope of the invention set forth in the claims.
11,243
11857800
DETAILED DESCRIPTION The detailed descriptions set forth below in connection with the appended drawings are intended as a description of embodiments of the invention, and are not intended to represent the only forms in which the present invention may be constructed and/or utilized. The descriptions set forth the structure and the sequence of steps for constructing and operating the invention in connection with the illustrated embodiments. It is to be understood, however, that the same or equivalent structures and steps may be accomplished by different embodiments that are also intended to be encompassed within the spirit and scope of the invention. The present system in one or more embodiments provides a photobiomodulation therapy garment. A photobiomodulation therapy garment disclosed herein comprises a garment configured to be donned by a user atop a skin surface, which integrates one or more photobiomodulation units that, in conjunction with a controller, are configured to administer a photobiomodulation therapy. A photobiomodulation unit includes one or more near-infrared light sources, one or more sensors, and optionally one or more stimulators in electrical connection with a connection terminal. The connection terminal is also configured to operationally receive the controller in a manner that establishes an electrical connection. Each of the one or more near-infrared light sources disclosed herein is configured to emit near-infrared light at a wavelength between 600 nm and 1600 nm and at a predetermined dosimetry and duration. A controller disclosed herein has a processor and memory and is configured to control the operational parameters of the near-infrared light source. During operation, a photobiomodulation unit is configured to emit near-infrared light to one or more regions of a skin surface of a user. In some embodiments, and as shown inFIGS.1-16, a photobiomodulation therapy garment20comprises a garment30, a photobiomodulation unit100, and a controller200. In some embodiments, and as shown inFIGS.1-7, photobiomodulation therapy garment20can be configured as a photobiomodulation therapy headband22for the purpose of treating specific regions of a head H of a person P using a transcranial photobiomodulation therapy. In this configuration, person P dons photobiomodulation therapy headband22by wrapping it snugly around head region H so that one or more near-infrared light sources disclosed herein integrated into photobiomodulation therapy headband22are positioned overtop and/or directed to a region of interest of skin surface S for delivering therapeutic levels of near-infrared light to a region of interest. In some embodiments, positioning of photobiomodulation therapy headband22as shown inFIG.1, situates the one or more near-infrared light sources disclosed herein atop a frontal bone region of the skull of person P, approximately and/or substantially centered on midsagittal plane320(e.g., centered on the nose) for the purpose of administering near-infrared light for treating a disorder through the skull to a brain region of person P. Further, in these embodiments, photobiomodulation therapy headband22is positioned such that one or more of the near-infrared light sources disclosed herein are approximately positioned above superciliary arch region322(i.e., the brow ridge above the eye sockets) and generally beneath the hairline (although the hairline varies somewhat depending on the individual).
At minimum, in one or more embodiments, at least one of the near-infrared light sources disclosed herein integrated within photobiomodulation therapy headband22should be positioned above superciliary arch region322or at minimum above the eye sockets, to minimize exposure of the eyes to the near-infrared light. In some embodiments, and as shown inFIG.1, photobiomodulation therapy headband22is positioned to cover defined regions of interest on skin surface S comprising one or more or all of a Fp1 site300, a Fpz site302, a Fp2 site304, a F3 site306, a Fz site308, and a F4 site310, when photobiomodulation therapy headband22is donned on head H and properly positioned. Although photobiomodulation therapy garment20is illustrated as photobiomodulation therapy headband22inFIGS.1-16, a photobiomodulation therapy garment disclosed herein can be constructed to be donned on a variety of body portions. In some embodiments, a photobiomodulation therapy garment disclosed herein can be configured to wrap about or conform to a wide variety of body parts, with the capability to be moved from one region of interest to another region of interest on the body. In some embodiments, a photobiomodulation therapy garment disclosed herein can be configured specifically to fit a particular body part, such as, e.g., a head covering, a visor, a neck wrap, a shoulder wrap, a wrist wrap, or an abdominal wrap. In some embodiments, a photobiomodulation therapy garment disclosed herein can be configured to be fitted to a wide variety of body parts of an individual, with the capability to be worn by the individual, such as, e.g., a hat, a shirt, pants, or an undergarment. A photobiomodulation therapy garment20comprises a garment. The garment can be made flexible, semirigid, or rigid and is constructed to be comfortable to the user's body and configured to behave much like an item of clothing or other donned fashion accessory. In some embodiments, a garment disclosed herein is a fabric material made through weaving, knitting, spreading, felting, stitching, crocheting or bonding. In some embodiments, the garment is composed of multiple layers of fabric material. For example, in some embodiments, a photobiomodulation therapy garment comprises an outer fabric sheet and an inner fabric sheet. The outer fabric sheet is sized and dimensioned to serve as a base for mounting one or more photobiomodulation units and a controller disclosed herein, whereas the inner fabric sheet is sized and dimensioned to at least cover the one or more photobiomodulation units. For example, in some embodiments, and referring toFIGS.2-4,7,8,15, &16, photobiomodulation therapy headband22comprises garment30including an outer fabric sheet40and an inner fabric sheet70and a photobiomodulation unit100sandwiched between outer fabric sheet40and inner fabric sheet70. Outer fabric sheet40can be made of a wide variety of natural or synthetic textiles, generally chosen for aesthetic and/or protective qualities. Inner fabric sheet70is configured to lie against skin surface S, and can be made of natural or synthetic textile or other material which is comfortable against skin surface S, such as space cotton or the like. As shown inFIGS.2-7, top and bottom portions of outer fabric sheet40and inner fabric sheet70are affixed to one another to form a top edge50and a bottom edge52of garment30in a manner that encloses photobiomodulation unit100therewithin.
As shown inFIGS.2,4-6,8,15, &16, outer fabric sheet40of garment30also comprises a right head strap64that extends longitudinally from the right portion54of garment30and a left head strap66that extends longitudinally from left portion56of garment30. Slide buckle60permits length adjustment of right head strap64and slide buckle62is connected to right head strap64and held to right head strap64via a loop created by slide buckle60. Slide buckle62is configured to receive the free end of left head strap66, where left head strap66can include hook and loop mating portions to complete the connection. This head strap arrangement permits easy adjustment of photobiomodulation therapy headband22and secure attachment to head H. As shown inFIG.8, outer fabric sheet40of garment30comprises a terminal rail mount opening58sized and dimensioned to receive a terminal rail of a connection terminal disclosed herein in a manner that enables proper engagement of controller200to the terminal rail. Referring toFIGS.2,4-6,8,15, &16, outer fabric sheet40of garment30also comprises a controller strap68that extends from left portion56of garment30. Controller strap68is configured to be wrapped about controller200once controller200is operationally engaged to photobiomodulation unit100in order to securely hold controller200against outer fabric sheet40. Controller strap68has a first end securely affixed to garment30and a second end opposite the first end that can loop around the attached controller200, thereby reversibly securing controller200to garment30using, e.g., a hook and loop fastener, a buckle, or snaps. Controller strap68can be composed of a non-elastic or elastic material. As best seen inFIGS.8,15&16, inner fabric sheet70of garment30comprises one or more near infrared light source openings76and one or more sensor openings78. Each of the one or more near infrared light source openings76is a cutout positioned on inner fabric sheet70so that when assembled each opening76is aligned with an infrared light source disclosed herein in a manner that permits light from the infrared light source to pass through the infrared light source opening76. Similarly, each of the one or more sensor openings78is a cutout positioned on inner fabric sheet70so that when assembled each opening78is aligned with a sensor disclosed herein in a manner that permits the sensor to properly function and collect information from a user through the sensor opening78. In some embodiments, and referring toFIGS.2-4&8, inner fabric sheet70can include one or more sensor covers79. Each sensor cover79is positioned over and protects each of the one or more sensors disclosed herein mounted on photobiomodulation unit100. In addition, each of the one or more sensor covers79is configured to be in contact with or in close proximity to skin surface S when photobiomodulation therapy garment20is donned. Each sensor cover79can be attached to inner fabric sheet70, or photobiomodulation unit100, and/or sandwiched therebetween. Each sensor cover79is constructed of a thin sheet of PVC in this example embodiment, which permits the one or more sensors located thereunder to interact with skin surface S to measure bodily functions, such as one or more of a temperature, a heart rate, a blood oxygen level, and other measurable functions. Further, each sensor cover79provides a visible reference to assist a user in properly orienting and donning photobiomodulation therapy garment20.
For example, when placing photobiomodulation therapy headband22about head H, the one or more sensor covers79can be manually aligned with the nose, placing sensor cover79substantially on top of sagittal plane320. A photobiomodulation therapy garment20also includes a photobiomodulation unit. A photobiomodulation unit includes a connection terminal, one or more near-infrared light sources, one or more sensors, and is configured to establish electronic communication with controller200. In some embodiments, and referring toFIGS.8,9&11, a photobiomodulation unit100comprises a flexible printed circuit board assembly110that provides a flexible substrate housing the electrical circuitry which establishes electronic communication between a connection terminal160and one or more near-infrared light sources170, such as, e.g., an infrared light, low-level laser, and/or light emitting diode (LED), one or more sensors180, and optionally one or more stimulators194. Connection terminal160(generally rigid or semirigid) includes an electronic circuitry connector162mounted thereon, where electronic circuitry connector162is configured to provide electronic communication between controller200and flexible printed circuit board assembly110. In these embodiments, flexible printed circuit board assembly110is configured to provide flexibility and comfort to the wearer. For example, since photobiomodulation therapy headband22must closely match the contours of the forehead, flexible printed circuit board assembly110is designed with strategic cutouts to permit maximum flexibility and comfort. In some embodiments, and as shown inFIGS.8&10, flexible printed circuit board assembly110comprises a heat-dissipating material102on the flexible substrate on the side opposite the electrical circuitry that dispels heat generated by flexible printed circuit board assembly110during operation of photobiomodulation therapy garment20. In some embodiments, and referring toFIGS.9&11, flexible printed circuit board assembly110is a thin, flat substrate that includes a first surface and a second surface opposite the first surface and is configured as a main strip112which extends from connection terminal160and trifurcates at a root113into a first strip114, a second strip116, and a sensor strip140extending from the middle. First strip114and second strip116are connected distally by a connecting portion117and all define the bounds of cutout118. First strip114and second strip116contain the electronic circuitry needed to establish electronic communication between each near-infrared light source170operationally mounted on first strip114or second strip116and connection terminal160. In some embodiments, first strip114and second strip116each include a series of tabs laterally extending outward therefrom, for mounting thereon near-infrared light sources disclosed herein. For example, as shown inFIGS.9&11, first strip114includes a first mounting portion120, a second mounting portion122, and a third mounting portion124, with first mounting portion120separated from second mounting portion122by a first cutout121therebetween, and third mounting portion124separated from second mounting portion122by a second cutout123therebetween. Similarly, second strip116includes a first mounting portion130, a second mounting portion132, and a third mounting portion134, with first mounting portion130separated from second mounting portion132by a first cutout131therebetween, and third mounting portion134separated from second mounting portion132by a second cutout133therebetween.
First, second, and third mounting portions120,122,124of first strip114and first, second, and third mounting portions130,132,134of second strip116act like gores to permit independent flexible bending of flexible printed circuit board assembly110. Such flexible bending enables flexible printed circuit board assembly110to easily conform to the contours of one or more regions of interest of skin region S and places each of the one or more near-infrared light sources disclosed herein in close proximity to skin surface S with minimal or no gap. In some embodiments, and referring toFIGS.9&11, sensor strip140includes a sensor mounting portion142and a free end144. Sensor strip140extends from root113into cutout118in a manner where cutout118provides clearance of sensor strip140from first strip114and second strip116such that sensor strip140is disconnected from first strip114and second strip116except at root113. Sensor strip140contains the electronic circuitry needed to establish electronic communication between each sensor180operationally mounted on sensor strip140and connection terminal160. Sensor strip140is relatively thin and elongated to permit bending and slight movement of sensor strip140relative to the remainder of flexible printed circuit board assembly110; this movement is additionally permitted by a sensor opening224provided by a double-sided tape220(seeFIGS.8,15, &16), which permits easy bending and fitting about head H with little or no kinks in flexible printed circuit board assembly110. In some embodiments, and referring toFIGS.5,8,9,11,15, &16, integrally mounted to one end of flexible printed circuit board assembly110is connection terminal160. Connection terminal160comprises electronic circuitry connector162and a terminal rail mount164. Electronic circuitry connector162of connection terminal160is located on the same surface of flexible printed circuit board assembly110where one or more infrared light sources170, one or more sensors180, and one or more stimulators194are mounted and contains the electrical circuitry used to establish electronic communication with one or more infrared light sources170, one or more sensors180, and one or more stimulators194. Terminal rail mount164of connection terminal160is located on flexible printed circuit board assembly110on the surface opposite electronic circuitry connector162. Terminal rail mount164includes a plurality of contacts166. Terminal rail mount164is configured to receive controller200and establish electronic communication between photobiomodulation unit100and controller200, which has corresponding contacts that mate with contacts166when connected. To permit quick connection and disconnection, controller200and terminal rail mount164include sliding joinery (e.g., dovetail or tongue-and-groove-like joinery) to capture controller200within terminal rail mount164and force electrical contact between contacts166of terminal rail mount164and corresponding contacts protruding through controller200. After sliding controller200into terminal rail mount164, controller strap68is wrapped over controller200and fastened to inside portion44of outer fabric sheet40by a releasable connection, such as hook and loop.
In some embodiments, and referring toFIGS.15&16, photobiomodulation unit100comprises a liquid wire circuit assembly150that provides electronic communication between connection terminal160and one or more near-infrared light sources170, such as, e.g., an infrared light, low-level laser, and/or light emitting diode (LED), one or more sensors180, and optionally one or more stimulators194. A liquid wire comprises a type of metal that remains in the liquid phase at room temperature and is enclosed in flexible tubing. Owing to its liquid phase nature, liquid metal can make good contact with objects of any shape and can maintain excellent electrical properties upon the deformation of the substrate or the covering film. Non-limiting examples of liquid metal include gallium and alloys such as eutectic gallium-indium. Liquid wire circuit assembly150includes connection terminal160with electronic circuitry connector162mounted thereon, where electronic circuitry connector162is configured to provide electrical communication between one or more liquid wire tubes of circuit assembly150and connection terminal160. In these embodiments, liquid wire circuit assembly150is configured to provide flexibility and comfort to the wearer. For example, since photobiomodulation therapy headband22must closely match the contours of the forehead, liquid wire circuit assembly150is designed with strategic cutouts to permit maximum flexibility and comfort. In some embodiments, and referring toFIGS.15&16, liquid wire circuit assembly150comprises a main liquid wire tube152which extends from connection terminal160and trifurcates at a root153into a first liquid wire tube154, a second liquid wire tube156, and a sensor liquid wire tube158extending from the middle, each affixed directly to an inner surface of outer fabric sheet40. First and second liquid wire tubes154,156contain the electronic circuitry needed to establish electrical connection between each near-infrared light source170operationally mounted to first or second liquid wire tubes154,156and connection terminal160, which, in turn, provides electrical connection to controller200. Sensor liquid wire tube158contains the electronic circuitry needed to establish electrical communication between each sensor180and/or each stimulator194operationally mounted to sensor liquid wire tube158and connection terminal160, which, in turn, provides electrical connection to controller200. The affixing of first and second liquid wire tubes154,156and sensor liquid wire tube158directly to an inner surface of outer fabric sheet40enables liquid wire circuit assembly150to easily conform to the contours of one or more regions of interest of skin region S and places each of the one or more near-infrared light sources disclosed herein in close proximity to skin surface S with minimal or no gap. Although not shown, liquid wire circuit assembly150can be configured in an arrangement similar to the arrangements shown for flexible printed circuit board assembly110ofFIGS.12-14. Referring toFIGS.9-15, photobiomodulation unit100also comprises one or more near-infrared light source170each being configured to emit near infrared light in a wavelength range of 700 nm to 1600 nm. In some embodiments, near-infrared light source170emits light having a wavelength of, e.g., about 700 nm, about 750 nm, about 800 nm, about 900 nm, about 1000 nm, about 1100 nm, about 1200 nm, about 1300 nm, about 1400 nm, or about 1500 nm.
In some embodiments, near-infrared light source170emits light having a wavelength of, e.g., at least 700 nm, at least 750 nm, at least 800 nm, at least 850 nm, at least 900 nm, at least 1000 nm, at least 1100 nm, at least 1200 nm, at least 1.300 nm, at least 1400 nm, or at least 1500 nm. In some embodiments, near-infrared light source170emits light having a wavelength of, e.g., at most 700 nm, at most 750 nm, at most 800 nm, at most 850 nm, at most 900 nm, at most 1000 nm, at most 1100 nm, at most 1200 nm, at most 1.300 nm, at most 1400 nm, or at most 1500 nm. In some embodiments, near-infrared light source170emits light having a wavelength of, e.g., about 700 nm to about 750 nm, about 700 nm to about 800 nm, about 700 nm to about 900 nm, about 700 nm to about 1000 nm, about 700 nm to about 1100 nm, about 700 nm to about 1200 nm, about 700 nm to about 1300 nm, about 700 nm to about 1400 nm, about 700 nm to about 1500 nm, about 750 nm to about 800 nm, about 750 nm to about 850 nm, about 750 nm to about 900 nm, about 750 nm to about 1000 nm, about 750 nm to about 1100 nm, about 750 nm to about 1200 nm, about 750 nm to about 1300 nm, about 750 nm to about 1400 nm, about 750 nm to about 1500 nm, about 800 nm to about 850 nm, about 800 nm to about 900 nm, about 800 nm to about 1000 nm, about 800 nm to about 1100 nm, about 800 nm to about 1200 nm, about 800 nm to about 1300 nm, about 800 nm to about 1400 nm, about 800 nm to about 1500 nm, about 850 nm to about 900 nm, about 850 nm to about 1000 nm, about 850 nm to about 1100 nm, about 850 nm to about 1200 nm, about 850 nm to about 1300 nm, about 850 nm to about 1400 nm, about 850 nm to about 1500 nm, about 900 nm to about 1000 nm, about 900 nm to about 1100 nm, about 900 nm to about 1200 nm, about 900 nm to about 1300 nm, about 900 nm to about 1400 nm, about 900 nm to about 1500 nm, about 1000 nm to about 1100 nm, about 1000 nm to about 1200 nm, about 1000 nm to about 1300 nm, about 1000 nm to about 1400 nm, about 1000 nm to about 1500 nm, about 1100 nm to about 1200 nm, about 1100 nm to about 1300 nm, about 1100 nm to about 1400 nm, about 1100 nm to about 1500 nm, about 1200 nm to about 1300 nm, about 1200 nm to about 1400 nm, about 1200 nm to about 1500 nm, about 1300 nm to about 1400 nm, about 1300 nm to about 1500 nm, or about 1400 nm to about 1500 nm. In some embodiments, one or more near-infrared light source170are each configured to emit near infrared light in a pulse wave (or frequency) range of about 1 Hz to about 100 Hz. In some embodiments, near-infrared light source170emits light having a pulse wave of, e.g., about 10 Hz, about 20 Hz, about 30 Hz, about 40 Hz, about 50 Hz, about 60 Hz, about 70 Hz, about 80 Hz, about 90 Hz, or about 100 Hz. In some embodiments, near-infrared light source170emits light having a pulse wave of, e.g., at least 10 Hz, at least 20 Hz, at least 30 Hz, at least 40 Hz, at least 50 Hz, at least 60 Hz, at least 70 Hz, at least 80 Hz, at least 90 Hz, or at least 100 Hz. In some embodiments, near-infrared light source170emits light having a pulse wave of, e.g., at most 10 Hz, at most 20 Hz, at most 30 Hz, at most 40 Hz, at most 50 Hz, at most 60 Hz, at most 70 Hz, at most 80 Hz, at most 90 Hz, or at most 100 Hz. 
In some embodiments, near-infrared light source170emits light having a pulse wave of, e.g., about 10 Hz to about 20 Hz, about 10 Hz to about 30 Hz, about 10 Hz to about 40 Hz, about 10 Hz to about 50 Hz, about 10 Hz to about 60 Hz, about 10 Hz to about 70 Hz, about 10 Hz to about 80 Hz, about 10 Hz to about 90 Hz, about 10 Hz to about 100 Hz, about 20 Hz to about 30 Hz, about 20 Hz to about 40 Hz, about 20 Hz to about 50 Hz, about 20 Hz to about 60 Hz, about 20 Hz to about 70 Hz, about 20 Hz to about 80 Hz, about 20 Hz to about 90 Hz, about 20 Hz to about 100 Hz, about 30 Hz to about 40 Hz, about 30 Hz to about 50 Hz, about 30 Hz to about 60 Hz, about 30 Hz to about 70 Hz, about 30 Hz to about 80 Hz, about 30 Hz to about 90 Hz, about 30 Hz to about 100 Hz, about 40 Hz to about 50 Hz, about 40 Hz to about 60 Hz, about 40 Hz to about 70 Hz, about 40 Hz to about 80 Hz, about 40 Hz to about 90 Hz, about 40 Hz to about 100 Hz, about 50 Hz to about 60 Hz, about 50 Hz to about 70 Hz, about 50 Hz to about 80 Hz, about 50 Hz to about 90 Hz, about 50 Hz to about 100 Hz, about 60 Hz to about 70 Hz, about 60 Hz to about 80 Hz, about 60 Hz to about 90 Hz, about 60 Hz to about 100 Hz, about 70 Hz to about 80 Hz, about 70 Hz to about 90 Hz, about 70 Hz to about 100 Hz, about 80 Hz to about 90 Hz, about 80 Hz to about 100 Hz, or about 90 Hz to about 100 Hz. In some embodiments, one or more near-infrared light source170are each configured to emit near infrared light in a pulse wave (or frequency) range of about 100 Hz to about 1000 Hz. In some embodiments, near-infrared light source170emits light having a pulse wave of, e.g., about 100 Hz, about 200 Hz, about 300 Hz, about 400 Hz, about 500 Hz, about 600 Hz, about 700 Hz, about 800 Hz, about 900 Hz, or about 1000 Hz. In some embodiments, near-infrared light source170emits light having a pulse wave of, e.g., at least 100 Hz, at least 200 Hz, at least 300 Hz, at least 400 Hz, at least 500 Hz, at least 600 Hz, at least 700 Hz, at least 800 Hz, at least 900 Hz, or at least 1000 Hz. In some embodiments, near-infrared light source170emits light having a pulse wave of, e.g., at most 100 Hz, at most 200 Hz, at most 300 Hz, at most 400 Hz, at most 500 Hz, at most 600 Hz, at most 700 Hz, at most 800 Hz, at most 900 Hz, or at most 1000 Hz. 
In some embodiments, near-infrared light source170emits light having a pulse wave of, e.g., about 100 Hz to about 200 Hz, about 100 Hz to about 300 Hz, about 100 Hz to about 400 Hz, about 100 Hz to about 500 Hz, about 100 Hz to about 600 Hz, about 100 Hz to about 700 Hz, about 100 Hz to about 800 Hz, about 100 Hz to about 900 Hz, about 100 Hz to about 1000 Hz, about 200 Hz to about 300 Hz, about 200 Hz to about 400 Hz, about 200 Hz to about 500 Hz, about 200 Hz to about 600 Hz, about 200 Hz to about 700 Hz, about 200 Hz to about 800 Hz, about 200 Hz to about 900 Hz, about 200 Hz to about 1000 Hz, about 300 Hz to about 400 Hz, about 300 Hz to about 500 Hz, about 300 Hz to about 600 Hz, about 300 Hz to about 700 Hz, about 300 Hz to about 800 Hz, about 300 Hz to about 900 Hz, about 300 Hz to about 1000 Hz, about 400 Hz to about 500 Hz, about 400 Hz to about 600 Hz, about 400 Hz to about 700 Hz, about 400 Hz to about 800 Hz, about 400 Hz to about 900 Hz, about 400 Hz to about 1000 Hz, about 500 Hz to about 600 Hz, about 500 Hz to about 700 Hz, about 500 Hz to about 800 Hz, about 500 Hz to about 900 Hz, about 500 Hz to about 1000 Hz, about 600 Hz to about 700 Hz, about 600 Hz to about 800 Hz, about 600 Hz to about 900 Hz, about 600 Hz to about 1000 Hz, about 700 Hz to about 800 Hz, about 700 Hz to about 900 Hz, about 700 Hz to about 1000 Hz, about 800 Hz to about 900 Hz, about 800 Hz to about 1000 Hz, or about 900 Hz to about 1000 Hz. In some embodiments, one or more near-infrared light source170are each configured to emit near infrared light in a pulse wave (or frequency) range of about 1000 Hz to about 5000 Hz. In some embodiments, near-infrared light source170emits light having a pulse wave of, e.g., about 1000 Hz, about 2000 Hz, about 3000 Hz, about 4000 Hz, or about 5000 Hz. In some embodiments, near-infrared light source170emits light having a pulse wave of, e.g., at least 1000 Hz, at least 2000 Hz, at least 3000 Hz, at least 4000 Hz, or at least 5000 Hz. In some embodiments, near-infrared light source170emits light having a pulse wave of, e.g., at most 1000 Hz, at most 2000 Hz, at most 3000 Hz, at most 4000 Hz, or at most 5000 Hz. In some embodiments, near-infrared light source170emits light having a pulse wave of, e.g., about 1000 Hz to about 2000 Hz, about 1000 Hz to about 3000 Hz, about 1000 Hz to about 4000 Hz, about 1000 Hz to about 5000 Hz, about 2000 Hz to about 3000 Hz, about 2000 Hz to about 4000 Hz, about 2000 Hz to about 5000 Hz, about 3000 Hz to about 4000 Hz, about 3000 Hz to about 5000 Hz, or about 4000 Hz to about 5000 Hz. In some embodiments, one or more near-infrared light source170are each configured to emit near infrared light in a radiant energy range of about 100 J to about 1100 J. In some embodiments, near-infrared light source170has a radiant energy of, e.g., about 100 J, about 200 J, about 300 J, about 400 J, about 500 J, about 600 J, about 700 J, about 800 J, about 900 J, about 1000 J, or about 1100 J. In some embodiments, near-infrared light source170has a radiant energy of, e.g., at least 100 J, at least 200 J, at least 300 J, at least 400 J, at least 500 J, at least 600 J, at least 700 J, at least 800 J, at least 900 J, at least 1000 J, or at least 1100 J. In some embodiments, near-infrared light source170has a radiant energy of, e.g., at most 100 J, at most 200 J, at most 300 J, at most 400 J, at most 500 J, at most 600 J, at most 700 J, at most 800 J, at most 900 J, at most 1000 J, or at most 1100 J. 
In some embodiments, near-infrared light source170has a radiant energy of, e.g., about 100 J to about 200 J, about 100 J to about 300 J, about 100 J to about 400 J, about 100 J to about 500 J, about 100 J to about 600 J, about 100 J to about 700 J, about 100 J to about 800 J, about 100 J to about 900 J, about 100 J to about 1000 J, about 100 J to about 1100 J, about 200 J to about 300 J, about 200 J to about 400 J, about 200 J to about 500 J, about 200 J to about 600 J, about 200 J to about 700 J, about 200 J to about 800 J, about 200 J to about 900 J, about 200 J to about 1000 J, about 200 J to about 1100 J, about 300 J to about 400 J, about 300 J to about 500 J, about 300 J to about 600 J, about 300 J to about 700 J, about 300 J to about 800 J, about 300 J to about 900 J, about 300 J to about 1000 J, about 300 J to about 1100 J, about 400 J to about 500 J, about 400 J to about 600 J, about 400 J to about 700 J, about 400 J to about 800 J, about 400 J to about 900 J, about 400 J to about 1000 J, about 400 J to about 1100 J, about 500 J to about 600 J, about 500 J to about 700 J, about 500 J to about 800 J, about 500 J to about 900 J, about 500 J to about 1000 J, about 500 J to about 1100 J, about 600 J to about 700 J, about 600 J to about 800 J, about 600 J to about 900 J, about 600 J to about 1000 J, about 600 J to about 1100 J, about 700 J to about 800 J, about 700 J to about 900 J, about 700 J to about 1000 J, about 700 J to about 1100 J, about 800 J to about 900 J, about 800 J to about 1000 J, about 800 J to about 1100 J, about 900 J to about 1000 J, about 900 J to about 1100 J, or about 1000 J to about 1100 J. In some embodiments, one or more near-infrared light source170are each configured to emit near infrared light in an irradiance (flux density) range of about 5 mW/cm2to about 100 mW/cm2. 
In some embodiments, near-infrared light source170has an irradiance (flux density) of, e.g., about 5 mW/cm2, about 10 mW/cm2, about 15 mW/cm2, about 20 mW/cm2, about 25 mW/cm2, about 30 mW/cm2, about 35 mW/cm2, about 40 mW/cm2, about 50 mW/cm2, about 60 mW/cm2, about 70 mW/cm2, about 80 mW/cm2, about 90 mW/cm2, or about 100 mW/cm2In some embodiments, near-infrared light source170has an irradiance (flux density) of, e.g., at least 5 mW/cm2, at least 10 mW/cm2, at least 15 mW/cm2, at least 20 mW/cm2, at least 25 mW/cm2, at least 30 mW/cm2, at least 35 mW/cm2, at least 40 mW/cm2, at least 50 mW/cm2, at least 60 mW/cm2, at least 70 mW/cm2, at least 80 mW/cm2, at least 90 mW/cm2, or at least 100 mW/cm2In some embodiments, near-infrared light source170has an irradiance (flux density) of, e.g., at most 5 mW/cm2, at most 10 mW/cm2, at most 15 mW/cm2, at most 20 mW/cm2, at most 25 mW/cm2, at most 30 mW/cm2, at most 35 mW/cm2, at most 40 mW/cm2, at most 50 mW/cm2, at most 60 mW/cm2, at most 70 mW/cm2, at most 80 mW/cm2, at most 90 mW/cm2, or at most 100 mW/cm2In some embodiments, near-infrared light source170has an irradiance (flux density) of, e.g., about 5 mW/cm2to about 10 mW/cm2, about 5 mW/cm2to about 15 mW/cm2, about 5 mW/cm2to about 20 mW/cm2, about 5 mW/cm2to about 25 mW/cm2, about 5 mW/cm2to about 30 mW/cm2, about 5 mW/cm2to about 35 mW/cm2, about 10 mW/cm2to about 15 mW/cm2, about 10 m W/cm2to about 20 mW/cm2, about 10 mW/cm2to about 25 mW/cm2, about 10 mW/cm2to about 30 mW/cm2, about 10 mW/cm2to about 35 mW/cm2, about 15 mW/cm2to about 20 mW/cm2, about 15 mW/cm2to about 25 mW/cm2, about 15 mW/cm2to about 30 mW/cm2, about 15 mW/cm2to about 35 mW/cm2, about 20 mW/cm2to about 25 mW/cm2, about 20 mW/cm2to about 30 mW/cm2, about 20 mW/cm2to about 35 mW/cm2, about 25 mW/cm2to about 30 mW/cm2, about 25 mW/cm2to about 35 mW/cm2, or about 30 mW/cm2to about 35 mW/cm2. In some embodiments, near-infrared light source170has an irradiance (flux density) of, e.g., about 20 mW/cm2to about 50 mW/cm2, about 20 mW/cm2to about 60 mW/cm2, about 20 mW/cm2to about 70 mW/cm2, about 20 mW/cm2to about 80 mW/cm2, about 20 mW/cm2to about 90 mW/cm2, about 20 mW/cm2to about 100 mW/cm2, about 30 mW/cm2to about 60 mW/cm2, about 30 mW/cm2to about 70 mW/cm2, about 30 mW/cm2to about 80 mW/cm2, about 30 mW/cm2to about 90 mW/cm2, about 30 mW/cm2to about 100 mW/cm2, about 40 mW/cm2to about 60 mW/cm2, about 40 mW/cm2to about 70 mW/cm2, about 40 mW/cm2to about 80 mW/cm2, about 40 mW/cm2to about 90 mW/cm2, about 40 mW/cm2to about 100 mW/cm2, about 50 mW/cm2to about 60 mW/cm2, about 50 mW/cm2to about 70 mW/cm2, about 50 mW/cm2to about 80 mW/cm2, about 50 mW/cm2to about 90 mW/cm2, about 50 mW/cm2to about 100 mW/cm2, about 60 mW/cm2to about 70 mW/cm2, about 60 mW/cm2to about 80 mW/cm2, about 60 mW/cm2to about 90 mW/cm2, about 60 mW/cm2to about 100 mW/cm2, about 70 mW/cm2to about 80 mW/cm2, about 70 mW/cm2to about 90 mW/cm2, about 70 mW/cm2to about 100 mW/cm2, about 80 mW/cm2to about 90 mW/cm2, about 80 mW/cm2to about 100 mW/cm2, or about 90 mW/cm2to about 100 mW/cm2. In some embodiments, one or more near-infrared light source170are each configured to emit near infrared light in an irradiance (flux density) range of about 100 mW/cm2to about 1000 mW/cm2. In some embodiments, near-infrared light source170has an irradiance (flux density) of, e.g., about 100 mW/cm2, about 200 mW/cm2, about 300 mW/cm2, about 400 mW/cm2, about 500 mW/cm2, about 600 mW/cm2, about 700 mW/cm2, about 800 mW/cm2, about 900 mW/cm2, or about 1000 mW/cm2. 
In some embodiments, near-infrared light source170has an irradiance (flux density) of, e.g., at least 100 mW/cm2, at least 200 mW/cm2, at least 300 mW/cm2, at least 400 mW/cm2, at least 500 mW/cm2, at least 600 mW/cm2, at least 700 mW/cm2, at least 800 mW/cm2, at least 900 mW/cm2, or at least 1000 mW/cm2. In some embodiments, near-infrared light source170has an irradiance (flux density) of, e.g., at most 100 mW/cm2, at most 200 mW/cm2, at most 300 mW/cm2, at most 400 mW/cm2, at most 500 mW/cm2, at most 600 mW/cm2, at most 700 mW/cm2, at most 800 mW/cm2, at most 900 mW/cm2, or at most 1000 mW/cm2. In some embodiments, near-infrared light source170has an irradiance (flux density) of, e.g., about 100 mW/cm2to about 200 mW/cm2, about 100 mW/cm2to about 300 mW/cm2, about 100 mW/cm2to about 400 mW/cm2, about 100 mW/cm2to about 500 mW/cm2, about 100 mW/cm2to about 600 mW/cm2, about 100 mW/cm2to about 700 mW/cm2, about 100 mW/cm2to about 800 mW/cm2, about 100 mW/cm2to about 900 mW/cm2, about 100 mW/cm2to about 1000 mW/cm2, about 200 mW/cm2to about 300 mW/cm2, about 200 mW/cm2to about 400 mW/cm2, about 200 mW/cm2to about 500 mW/cm2, about 200 mW/cm2to about 600 mW/cm2, about 200 mW/cm2to about 700 mW/cm2, about 200 mW/cm2to about 800 mW/cm2, about 200 mW/cm2to about 900 mW/cm2, about 200 mW/cm2to about 1000 mW/cm2, about 300 mW/cm2to about 400 mW/cm2, about 300 mW/cm2to about 500 mW/cm2, about 300 mW/cm2to about 600 mW/cm2, about 300 mW/cm2to about 700 mW/cm2, about 300 mW/cm2to about 800 mW/cm2, about 300 mW/cm2to about 900 mW/cm2, about 300 mW/cm2to about 1000 mW/cm2, about 400 mW/cm2to about 500 mW/cm2, about 400 mW/cm2to about 600 mW/cm2, about 400 mW/cm2to about 700 mW/cm2, about 400 mW/cm2to about 800 mW/cm2, about 400 mW/cm2to about 900 mW/cm2, about 400 mW/cm2to about 1000 mW/cm2, about 500 mW/cm2to about 600 mW/cm2, about 500 mW/cm2to about 700 mW/cm2, about 500 mW/cm2to about 800 mW/cm2, about 500 mW/cm2to about 900 mW/cm2, about 500 mW/cm2to about 1000 mW/cm2, about 600 mW/cm2to about 700 mW/cm2, about 600 mW/cm2to about 800 mW/cm2, about 600 mW/cm2to about 900 mW/cm2, about 600 mW/cm2to about 1000 mW/cm2, about 700 mW/cm2to about 800 mW/cm2, about 700 mW/cm2to about 900 mW/cm2, about 700 mW/cm2to about 1000 mW/cm2, about 800 mW/cm2to about 900 mW/cm2, about 800 mW/cm2to about 1000 mW/cm2, or about 900 mW/cm2to about 1000 mW/cm2. In some embodiments, one or more near-infrared light source170are each configured to emit near infrared light in a radiant exposure (fluence) range of about 5 J/cm2to about 100 J/cm2. In some embodiments, near-infrared light source170has a radiant exposure (fluence) of, e.g., about 5 J/cm2, about 10 J/cm2, about 15 J/cm2, about 20 J/cm2, about 30 J/cm2, about 40 J/cm2, about 50 J/cm2, about 70 J/cm2, about 70 J/cm2, about 75 J/cm2, about 80 J/cm2, about 90 J/cm2, or about 100 J/cm2In some embodiments, near-infrared light source170has a radiant exposure (fluence) of, e.g., at least 5 J/cm2, at least 10 J/cm2, at least 15 J/cm2, at least 20 J/cm2, at least 30 J/cm2, at least 40 J/cm2, at least 50 J/cm2, at least 70 J/cm2, at least 70 J/cm2, at least 75 J/cm2, at least 80 J/cm2, at least 90 J/cm2, or at least 100 J/cm2In some embodiments, near-infrared light source170has a radiant exposure (fluence) of, e.g., at most 5 J/cm2, at most 10 J/cm2, at most 15 J/cm2, at most 20 J/cm2, at most 30 J/cm2, at most 40 J/cm2, at most 50 J/cm2, at most 70 J/cm2, at most 70 J/cm2, at most 75 J/cm2, at most 80 J/cm2, at most 90 J/cm2, or at most 100 J/cm2. 
In some embodiments, near-infrared light source170has a radiant exposure (fluence) of, e.g., about 5 J/cm2to about 10 J/cm2, about 5 J/cm2to about 15 J/cm2, about 5 J/cm2to about 20 J/cm2, about 5 J/cm2to about 30 J/cm2, about 5 J/cm2to about 40 J/cm2, about 5 J/cm2to about 50 J/cm2, about 5 J/cm2to about 60 J/cm2, about 5 J/cm2to about 70 J/cm2, about 5 J/cm2to about 75 J/cm2, about 5 J/cm2to about 80 J/cm2, about 5 J/cm2to about 90 J/cm2, about 5 J/cm2to about 100 J/cm2, about 10 J/cm2to about 15 J/cm2, about 10 J/cm2to about 20 J/cm2, about 10 J/cm2to about 30 J/cm2, about 10 J/cm2to about 40 J/cm2, about 10 J/cm2to about 50 J/cm2, about 10 J/cm2to about 60 J/cm2, about 10 J/cm2to about 70 J/cm2, about 10 J/cm2to about 75 J/cm2, about 10 J/cm2to about 80 J/cm2, about 10 J/cm2to about 90 J/cm2, about 10 J/cm2to about 100 J/cm2, about 20 J/cm2to about 30 J/cm2, about 20 J/cm2to about 40 J/cm2, about 20 J/cm2to about 50 J/cm2, about 20 J/cm2to about 60 J/cm2, about 20 J/cm2to about 70 J/cm2, about 20 J/cm2to about 75 J/cm2, about 20 J/cm2to about 80 J/cm2, about 20 J/cm2to about 90 J/cm2, about 20 J/cm2to about 100 J/cm2, about 30 J/cm2to about 40 J/cm2, about 30 J/cm2to about 50 J/cm2, about 30 J/cm2to about 60 J/cm2, about 30 J/cm2to about 70 J/cm2, about 30 J/cm2to about 75 J/cm2, about 30 J/cm2to about 80 J/cm2, about 30 J/cm2to about 90 J/cm2, about 30 J/cm2to about 100 J/cm2, about 40 J/cm2to about 50 J/cm2, about 40 J/cm2to about 60 J/cm2, about 40 J/cm2to about 70 J/cm2, about 40 J/cm2to about 75 J/cm2, about 40 J/cm2to about 80 J/cm2, about 40 J/cm2to about 90 J/cm2, about 40 J/cm2to about 100 J/cm2, about 50 J/cm2to about 60 J/cm2, about 50 J/cm2to about 70 J/cm2, about 50 J/cm2to about 75 J/cm2, about 50 J/cm2to about 80 J/cm2, about 50 J/cm2to about 90 J/cm2, about 50 J/cm2to about 100 J/cm2, about 60 J/cm2to about 70 J/cm2, about 60 J/cm2to about 80 J/cm2, about 60 J/cm2to about 90 J/cm2, about 60 J/cm2to about 100 J/cm2, about 70 J/cm2to about 80 J/cm2, about 70 J/cm2to about 90 J/cm2, about 70 J/cm2to about 100 J/cm2, about 80 J/cm2to about 90 J/cm2, about 80 J/cm2to about 100 J/cm2, or about 90 J/cm2to about 100 J/cm2. In some embodiments, one or more near-infrared light source170are each configured to emit near infrared light in a radiant exposure (fluence) range of about 100 J/cm2to about 1000 J/cm2. In some embodiments, near-infrared light source170has a radiant exposure (fluence) of, e.g., about 100 J/cm2, about 200 J/cm2, about 300 J/cm2, about 400 J/cm2, about 500 J/cm2, about 600 J/cm2, about 700 J/cm2, about 800 J/cm2, about 900 J/cm2, or about 1000 J/cm2. In some embodiments, near-infrared light source170has a radiant exposure (fluence) of, e.g., at least 100 J/cm2, at least 200 J/cm2, at least 300 J/cm2, at least 400 J/cm2, at least 500 J/cm2, at least 600 J/cm2, at least 700 J/cm2, at least 800 J/cm2, at least 900 J/cm2, or at least 1000 J/cm2. In some embodiments, near-infrared light source170has a radiant exposure (fluence) of, e.g., at most 100 J/cm2, at most 200 J/cm2, at most 300 J/cm2, at most 400 J/cm2, at most 500 J/cm2, at most 600 J/cm2, at most 700 J/cm2, at most 800 J/cm2, at most 900 J/cm2, or at most 1000 J/cm2. 
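The irradiance (flux density) and radiant exposure (fluence) values enumerated above are related through the exposure time. The following is a minimal sketch in Python under a simple continuous-wave assumption (pulsed delivery would scale by a duty cycle, which is not specified here); the function names and the duty-cycle handling are assumptions for illustration, not part of the disclosed apparatus.

```python
# Illustrative sketch (not part of the patent): relates the dosimetry
# quantities enumerated above. For continuous-wave delivery,
#   fluence (J/cm^2) = irradiance (W/cm^2) x exposure time (s);
# pulsed delivery would scale by duty cycle, which is an assumption here.

def exposure_time_s(target_fluence_j_cm2: float,
                    irradiance_mw_cm2: float,
                    duty_cycle: float = 1.0) -> float:
    """Seconds of treatment needed to deliver the target fluence."""
    average_irradiance_w_cm2 = (irradiance_mw_cm2 / 1000.0) * duty_cycle
    return target_fluence_j_cm2 / average_irradiance_w_cm2


def in_disclosed_ranges(wavelength_nm: float,
                        pulse_hz: float,
                        irradiance_mw_cm2: float,
                        fluence_j_cm2: float) -> bool:
    """Rough check against the broadest ranges recited above."""
    return (700 <= wavelength_nm <= 1600
            and 1 <= pulse_hz <= 5000
            and 5 <= irradiance_mw_cm2 <= 1000
            and 5 <= fluence_j_cm2 <= 1000)


# Example: 25 mW/cm^2 at 810 nm and 40 Hz, targeting 15 J/cm^2.
print(in_disclosed_ranges(810, 40, 25, 15))   # True
print(exposure_time_s(15, 25))                # 600 s (10 minutes), continuous wave
```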
In some embodiments, near-infrared light source170has a radiant exposure (fluence) of, e.g., about 100 J/cm2to about 200 J/cm2, about 100 J/cm2to about 300 J/cm2, about 100 J/cm2to about 400 J/cm2, about 100 J/cm2to about 500 J/cm2, about 100 J/cm2to about 600 J/cm2, about 100 J/cm2to about 700 J/cm2, about 100 J/cm2to about 800 J/cm2, about 100 J/cm2to about 900 J/cm2, about 100 J/cm2to about 1000 J/cm2, about 200 J/cm2to about 300 J/cm2, about 200 J/cm2to about 400 J/cm2, about 200 J/cm2to about 500 J/cm2, about 200 J/cm2to about 600 J/cm2, about 200 J/cm2to about 700 J/cm2, about 200 J/cm2to about 800 J/cm2, about 200 J/cm2to about 900 J/cm2, about 200 J/cm2to about 1000 J/cm2, about 300 J/cm2to about 400 J/cm2, about 300 J/cm2to about 500 J/cm2, about 300 J/cm2to about 600 J/cm2, about 300 J/cm2to about 700 J/cm2, about 300 J/cm2to about 800 J/cm2, about 300 J/cm2to about 900 J/cm2, about 300 J/cm2to about 1000 J/cm2, about 400 J/cm2to about 500 J/cm2, about 400 J/cm2to about 600 J/cm2, about 400 J/cm2to about 700 J/cm2, about 400 J/cm2to about 800 J/cm2, about 400 J/cm2to about 900 J/cm2, about 400 J/cm2to about 1000 J/cm2, about 500 J/cm2to about 600 J/cm2, about 500 J/cm2to about 700 J/cm2, about 500 J/cm2to about 800 J/cm2, about 500 J/cm2to about 900 J/cm2, about 500 J/cm2to about 1000 J/cm2, about 600 J/cm2to about 700 J/cm2, about 600 J/cm2to about 800 J/cm2, about 600 J/cm2to about 900 J/cm2, about 600 J/cm2to about 1000 J/cm2, about 700 J/cm2to about 800 J/cm2, about 700 J/cm2to about 900 J/cm2, about 700 J/cm2to about 1000 J/cm2, about 800 J/cm2to about 900 J/cm2, about 800 J/cm2to about 1000 J/cm2, or about 900 J/cm2to about 1000 J/cm2. In some embodiments, near-infrared light source170is a high powered infrared light source. In some embodiments, a high powered near-infrared light source has a radiant flux (power) of, e.g., about 400 mW, about 425 mW, about 450 mW, about 500 mW, about 525 mW, about 550 mW, about 575 mW or about 600 mW. In some embodiments, a high powered near-infrared light source has a radiant flux (power) of, e.g., at least 400 mW, at least 425 mW, at least 450 mW, at least500 mW, at least 525 mW, at least 550 mW, at least 575 mW or at least 600 mW. In some embodiments, a high powered near-infrared light source has a radiant flux (power) of, e.g., at most400 mW, at most 425 mW, at most 450 mW, at most500 mW, at most 525 mW, at most 550 mW, at most 575 mW or at most600 mW. In some embodiments, a high powered near-infrared light source has a radiant flux (power) of, e.g., about 400 mW to about 450 mW, about 400 mW to about 500 mW, about 400 mW to about 550 mW, about 400 mW to about 600 mW, about 450 mW to about 500 mW, about 450 mW to about 550 mW, about 450 mW to about 600 mW, about 500 mW to about 550 mW, about 500 mW to about 600 mW, or about 550 mW to about 600 mW. In some embodiments, a high powered near-infrared light source has a radiant flux (power) of, e.g., about 100 mW, about 200 mW, about 300 mW, about 400 mW, about 500 mW, about 600 mW, about 700 mW, about 800 mW, about 900 mW, or about 1000 mW. In some embodiments, a high powered near-infrared light source has a radiant flux (power) of, e.g., at least 100 mW, at least 200 mW, at least 300 mW, at least 400 mW, at least 500 mW, at least 600 mW, at least 700 mW, at least 800 mW, at least 900 mW, or at least 1000 mW. 
In some embodiments, a high powered near-infrared light source has a radiant flux (power) of, e.g., at most 100 mW, at most 200 mW, at most 300 mW, at most 400 mW, at most 500 mW, at most 600 mW, at most 700 mW, at most 800 mW, at most 900 mW, or at most 1000 mW. In some embodiments, a high powered near-infrared light source has a radiant flux (power) of, e.g., about 100 mW to about 200 mW, about 100 mW to about 300 mW, about 100 mW to about 400 mW, about 100 mW to about 500 mW, about 100 mW to about 600 mW, about 100 mW to about 700 mW, about 100 mW to about 800 mW, about 100 mW to about 900 mW, about 100 mW to about 1000 mW, about 200 mW to about 300 mW, about 200 mW to about 400 mW, about 200 mW to about 500 mW, about 200 mW to about 600 mW, about 200 mW to about 700 mW, about 200 mW to about 800 mW, about 200 mW to about 900 mW, about 200 mW to about 1000 mW, about 300 mW to about 400 mW, about 300 mW to about 500 mW, about 300 mW to about 600 mW, about 300 mW to about 700 mW, about 300 mW to about 800 mW, about 300 mW to about 900 mW, about 300 mW to about 1000 mW, about 400 mW to about 500 mW, about 400 mW to about 600 mW, about 400 mW to about 700 mW, about 400 mW to about 800 mW, about 400 mW to about 900 mW, about 400 mW to about 1000 mW, about 500 mW to about 600 mW, about 500 mW to about 700 mW, about 500 mW to about 800 mW, about 500 mW to about 900 mW, about 500 mW to about 1000 mW, about 600 mW to about 700 mW, about 600 mW to about 800 mW, about 600 mW to about 900 mW, about 600 mW to about 1000 mW, about 700 mW to about 800 mW, about 700 mW to about 900 mW, about 700 mW to about 1000 mW, about 800 mW to about 900 mW, about 800 mW to about 1000 mW, or about 900 mW to about 1000 mW. In some embodiments, a high powered near-infrared light source has a radiant intensity (brightness) of, e.g., about 150 mW/sr, about 200 mW/sr, about 250 mW/sr, about 300 mW/sr, about 350 mW/sr, about 400 mW/sr, about 450 mW/sr, about 500 mW/sr, about 550 mW/sr, about 600 mW/sr, about 650 mW/sr, about 700 mW/sr, or about 750 mW/sr. In some embodiments, a high powered near-infrared light source has a radiant intensity (brightness) of, e.g., at least 150 mW/sr, at least 200 mW/sr, at least 250 mW/sr, at least 300 mW/sr, at least 350 mW/sr, at least 400 mW/sr, at least 450 mW/sr, at least 500 mW/sr, at least 550 mW/sr, at least 600 mW/sr, at least 650 mW/sr, at least 700 mW/sr, or at least 750 mW/sr. In some embodiments, a high powered near-infrared light source has a radiant intensity (brightness) of, e.g., at most 150 mW/sr, at most 200 mW/sr, at most 250 mW/sr, at most 300 mW/sr, at most 350 mW/sr, at most 400 mW/sr, at most 450 mW/sr, at most 500 mW/sr, at most 550 mW/sr, at most 600 mW/sr, at most 650 mW/sr, at most 700 mW/sr, or at most 750 mW/sr. 
In some embodiments, a high powered near-infrared light source has a brightness range (or radiant intensity) of, e.g., about 150 mW/sr to about 200 mW/sr, about 150 mW/sr to about 300 mW/sr, about 150 mW/sr to about 400 mW/sr, about 150 mW/sr to about 500 mW/sr, about 150 mW/sr to about 600 mW/sr, about 150 mW/sr to about 700 mW/sr, about 150 mW/sr to about 800 mW/sr, about 200 mW/sr to about 300 mW/sr, about 200 mW/sr to about 400 mW/sr, about 200 mW/sr to about 500 mW/sr, about 200 mW/sr to about 600 mW/sr, about 200 mW/sr to about 700 mW/sr, about 200 mW/sr to about 800 mW/sr, about 300 mW/sr to about 400 mW/sr, about 300 mW/sr to about 500 mW/sr, about 300 mW/sr to about 600 mW/sr, about 300 mW/sr to about 700 mW/sr, about 300 mW/sr to about 800 mW/sr, about 400 mW/sr to about 500 mW/sr, about 400 mW/sr to about 600 mW/sr, about 400 mW/sr to about 700 mW/sr, about 400 mW/sr to about 800 mW/sr, about 500 mW/sr to about 600 mW/sr, about 500 mW/sr to about 700 mW/sr, about 500 mW/sr to about 800 mW/sr, about 600 mW/sr to about 700 mW/sr, about 600 mW/sr to about 800 mW/sr, or about 700 mW/sr to about 800 mW/sr. In some embodiments, near-infrared light source170is a low powered infrared light source. In some embodiments, a low powered near-infrared light source has a radiant flux (power) of, e.g., about 30 mW, about 35 mW, about 40 mW, about 45 mW, about 50 mW, about 55 mW, about 60 mW, about 65 mW, about 70 mW, or about 75 mW. In some embodiments, a low powered near-infrared light source has a radiant flux (power) of, e.g., at least 30 mW, at least 35 mW, at least 40 mW, at least 45 mW, at least 50 mW, at least 55 mW, at least 60 mW, at least 65 mW, at least 70 mW, or at least 75 mW. In some embodiments, a low powered near-infrared light source has a radiant flux (power) of, e.g., at most 30 mW, at most 35 mW, at most 40 mW, at most 45 mW, at most 50 mW, at most 55 mW, at most 60 mW, at most 65 mW, at most 70 mW, or at most 75 mW. In some embodiments, a low powered near-infrared light source has a radiant flux (power) of, e.g., about 30 mW to about 40 mW, about 30 mW to about 50 mW, about 30 mW to about 60 mW, about 30 mW to about 70 mW, about 30 mW to about 75 mW, about 40 mW to about 50 mW, about 40 mW to about 60 mW, about 40 mW to about 70 mW, about 40 mW to about 75 mW, about 50 mW to about 60 mW, about 50 mW to about 70 mW, about 50 mW to about 75 mW, about 60 mW to about 70 mW, or about 60 mW to about 75 mW. In some embodiments, a low powered near-infrared light source is configured to have a radiant intensity (brightness) of, e.g., about 25 mW/sr, about 50 mW/sr, about 75 mW/sr, about 100 mW/sr, about 125 mW/sr, or about 150 mW/sr. In some embodiments, a low powered near-infrared light source has a brightness (or radiant intensity) of, e.g., at least 25 mW/sr, at least 50 mW/sr, at least 75 mW/sr, at least 100 mW/sr, at least 125 mW/sr, or at least 150 mW/sr. In some embodiments, near-infrared light source170has a radiant intensity (brightness) of, e.g., at most 25 mW/sr, at most 50 mW/sr, at most 75 mW/sr, at most 100 mW/sr, at most 125 mW/sr, or at most 150 mW/sr. 
In some embodiments, a low powered near-infrared light source has a radiant intensity (brightness) of, e.g., about 25 mW/sr to about 50 mW/sr, about 25 mW/sr to about 75 mW/sr, about 25 mW/sr to about 100 mW/sr, about 25 mW/sr to about 125 mW/sr, about 25 mW/sr to about 150 mW/sr, about 50 mW/sr to about 75 mW/sr, about 50 mW/sr to about 100 mW/sr, about 50 mW/sr to about 125 mW/sr, about 50 mW/sr to about 150 mW/sr, about 75 mW/sr to about 100 mW/sr, about 75 mW/sr to about 125 mW/sr, about 75 mW/sr to about 150 mW/sr, about 100 mW/sr to about 125 mW/sr, about 100 mW/sr to about 150 mW/sr, or about 125 mW/sr to about 150 mW/sr. Referring toFIGS.9-15, photobiomodulation unit100also includes one or more sensors180configured to detect and collect information on one or more parameters including operational information of a photobiomodulation therapy garment20, biometric information of the user, or other useful information to ensure proper use and efficacy. Operational information includes, without limitation, positional and safety information of photobiomodulation therapy garment20. Biometric information includes, without limitation, body measurements and calculations related to the user. Non-limiting examples of a biometric sensor includes a neuro-conductivity sensor, a galvanometric sensor, an oxygen level sensor, a carbon dioxide level sensor, a brain oxygen level sensor, a heart rate sensor, a cortical blood flow sensor, a temperature sensor, an electroencephalogram sensor, or any combination thereof. One or more sensors180can also measure, record and analyze and/or transmit the information to controller200where measurement, recording and analysis of the information can be performed. In one or more embodiments, as shown inFIGS.9,11,15, &16, sensor cover79and one or more sensors180thereunder or nearby are positioned between second near-infrared light source grouping172and fifth near-infrared light source grouping175. This position of sensor cover79places it and one or more sensors180thereunder substantially over sagittal plane320of the forehead area. In some embodiments, where second near-infrared light source grouping172and/or fifth near-infrared light sources grouping175is not present, sensor cover79and one or more sensors180thereunder or nearby can be positioned on photobiomodulation therapy headband22at a position configured to place sensor cover79substantially on top of sagittal plane320when properly donned. In some embodiments, sensor cover79and one or more sensors180thereunder or nearby are positioned between first near-infrared light source grouping171and fourth near-infrared light source grouping174. In some embodiments, sensor cover79and one or more sensors180thereunder or nearby are positioned between third near-infrared light source grouping173and sixth near-infrared light source grouping176. In some embodiments, where two sensors180require to be spaced apart for proper functioning, one sensor cover79and one or more sensors180thereunder or nearby are positioned between first near-infrared light source grouping171and fourth near-infrared light source grouping174(or outside such groups in the direction towards left portion56) and one sensor cover79and one or more sensors180thereunder or nearby are positioned between third near-infrared light source grouping173and sixth near-infrared light source grouping176(or outside such groups in the direction towards right portion54). In some embodiments, sensors180include a heart rate sensor and a temperature sensor. 
Referring toFIGS.9&11-14, the one or more sensors180include a cardiovascular sensor182, which detects blood pulse and oxygen levels, as well as other cardiovascular characteristics. Referring toFIG.10, which is a longitudinal cross-section of sensor mounting portion142, cardiovascular sensor182comprises LED light sources184,186and a photodetector188. Light from LED light sources184,186is shone on blood vessels just under skin surface S, and the portion of light reflected back is captured by photodetector188. The signal from cardiovascular sensor182is transmitted to controller200for determining the user's cardiovascular parameters (an illustrative heart-rate calculation from this reflected-light signal is sketched below). Similarly, referring toFIGS.9&11-14, the one or more sensors180include a temperature sensor192, which detects skin parameters, such as, e.g., skin temperature, skin density, and skin opaqueness (color). The signal from temperature sensor192is transmitted to controller200for determining the user's skin parameters. Photobiomodulation unit100can optionally include one or more stimulators194configured to administer a brain stimulatory or inhibitory signal. Non-limiting examples of a stimulator include a component that can generate a magnetic field useful for stimulating nerve cells in the brain, such as, e.g., a magnetic material or a material that can be magnetized using an electrical current (an electromagnet). Such a magnetic field generating component can be used to administer a transcranial magnetic stimulation therapy. In some embodiments, one or more stimulators194are operationally mounted to electronic circuitry connector162or sensor liquid wire tube158, which contains the electronic circuitry needed to establish electrical communication between each of the one or more stimulators194and connection terminal160. Referring toFIGS.1,6&7, photobiomodulation therapy garment20also includes controller200. In one or more embodiments, controller200includes a housing enclosing an input, a hardware processor, a memory, and an output, and may include one or more of each of these elements. In one or more example embodiments, controller200may include a single board computer, a system on a chip, or other similar and/or known computing devices or circuits. The inputs can include one or more USB connectors and/or a short-range wireless device (e.g., a BLUETOOTH module, a Wi-Fi module, or other wireless communications devices or systems) for communicating with an external computer, such as a smart phone, desktop, laptop, tablet, other wearable computing device, server, and the like. Controller200can operate autonomously or semi-autonomously, or may read executable software instructions, code, or other information from the memory or a computer-readable medium, or may receive information or instructions via the input from a user, from a healthcare provider, or any other source logically connected to a computer or device, such as another networked computer, server, or artificial intelligence (AI) or machine-learning system. In some embodiments, controller200can be remotely accessed and operated by a third-party individual, such as, for example, a healthcare worker, who can monitor usage of, change operational parameters for, and/or collect data from photobiomodulation therapy garment20, thereby providing a remote digital healthcare platform that assists a user in receiving the most effective photobiomodulation therapy.
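As referenced above, one simple way controller200could derive a pulse rate from the reflected-light signal captured by photodetector188of cardiovascular sensor182is sketched below in Python. The mean-crossing beat count, the sampling rate, and all names are illustrative assumptions, not the patented signal-processing method.

```python
# Illustrative sketch (not part of the patent): estimate heart rate from the
# reflected-light samples captured by photodetector 188. The threshold-crossing
# beat detection and all names here are assumptions for illustration.
import math
from typing import Sequence


def estimate_heart_rate_bpm(samples: Sequence[float], sample_rate_hz: float) -> float:
    """Count upward crossings of the signal mean and convert to beats per minute."""
    if len(samples) < 2:
        return 0.0
    mean = sum(samples) / len(samples)
    beats = sum(1 for prev, cur in zip(samples, samples[1:])
                if prev < mean <= cur)          # rising edge through the mean
    duration_min = len(samples) / sample_rate_hz / 60.0
    return beats / duration_min if duration_min else 0.0


# Synthetic 10 s pulse waveform at 1.2 Hz (72 beats per minute), sampled at 50 Hz.
fs = 50.0
signal = [math.sin(2 * math.pi * 1.2 * n / fs + 0.5) for n in range(int(10 * fs))]
print(round(estimate_heart_rate_bpm(signal, fs)))   # approximately 72
```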
In some embodiments, controller200can be a “virtual controller” where access and operation of photobiomodulation therapy garment20by controller200is via cloud computing elements by either a healthcare provider or any other source logically connected to a computer or device, such as another networked computer or server or AI or machine learning-based system. Controller200can be accessed and operated by pre-programmed instructions and/or parameters, real-time instructions and/or parameters, or both. Controller200is programmed to supply an electrical signal which powers each of the one or more near-infrared light sources170, each of the one or more sensors180, and each of the one or more stimulators194. In addition, controller200in one or more embodiments is a computing device which is programmed or configured to implement the methods and algorithms which can operationally control each of the one or more near-infrared light sources170, each of the one or more sensors180, and each of the one or more stimulators194. For example, in some embodiments, controller200operationally controls one or more of the operation times of the one or more near-infrared light sources170, the fluence level of the one or more near-infrared light sources170, the irradiance level of the one or more near-infrared light sources170, whether the one or more near-infrared light sources170are operated continuously or pulsed, which one or more of the one or more near-infrared light sources170are activated or deactivated, and predetermined dosimetry levels. In addition, controller200operationally controls each of the one or more sensors180and receives and analyzes information collected from each of the one or more sensors180. In some embodiments, controller200operationally controls one or more of the operation times of the one or more stimulators194, the power level of the one or more stimulators194, whether the one or more stimulators194are operated continuously or pulsed, which one or more of the one or more stimulators194are activated or deactivated, or any combination thereof. In some embodiments, controller200operationally instructs activating one or more infrared light sources170on the left side of midsagittal plane320and deactivating one or more infrared light sources170on the right side of midsagittal plane320, or vice versa. In some embodiments, controller200operationally instructs activating one or more infrared light sources170on the left side and right side of midsagittal plane320while activating one or more infrared light sources170on the left side of midsagittal plane320at a higher level of irradiance relative to one or more infrared light sources170on the right side of midsagittal plane320, or vice versa. In some embodiments, controller200dynamically adjusts operational parameters of photobiomodulation therapy garment20using information collected from each of the one or more sensors180, information provided by the user, or information remotely inputted by a third-party individual. Such information input is then processed by controller200relative to information stored in an operational database in one or more algorithms, and, based on the analysis performed by the one or more algorithms in comparing the collected, provided, or inputted information with the information stored in such a database, operational parameters of the one or more near-infrared light sources170, each of the one or more sensors180, and each of the one or more stimulators194are adjusted by executable instructions provided by controller200. 
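The per-grouping operating parameters described above (activation state, pulsed versus continuous operation, pulse rate, and irradiance), including the asymmetric activation of light sources on opposite sides of midsagittal plane320, can be summarized by a simple data model. The following Python sketch is illustrative only; the field names, the example left/right assignment of grouping numbers, and the 30% irradiance bias are assumptions and not values taken from this specification.

```python
# Illustrative sketch only: a minimal data model for per-grouping operating parameters
# and a left-biased irradiance adjustment. Field names and the bias factor are assumed.
from dataclasses import dataclass

@dataclass
class GroupingParams:
    grouping_id: int              # e.g., 171-176
    side: str                     # "left" or "right" of the midsagittal plane
    active: bool = True
    pulsed: bool = True
    pulse_hz: float = 40.0
    irradiance_mw_cm2: float = 250.0

def apply_left_biased_irradiance(groupings, bias=1.3):
    """Activate all groupings, driving left-side groupings at `bias` times the
    irradiance of right-side groupings (the mirror-image case is analogous)."""
    for g in groupings:
        g.active = True
        if g.side == "left":
            g.irradiance_mw_cm2 *= bias
    return groupings

# Example with a hypothetical six-grouping layout.
layout = [GroupingParams(171, "left"), GroupingParams(172, "left"),
          GroupingParams(173, "right"), GroupingParams(174, "left"),
          GroupingParams(175, "left"), GroupingParams(176, "right")]
for g in apply_left_biased_irradiance(layout):
    print(g.grouping_id, g.side, g.irradiance_mw_cm2)
```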
For example, cardiovascular sensor182obtains cardiovascular parameters from the user during operation of photobiomodulation therapy garment20and this input information is analyzed against cardiovascular parameters stored in an operational database in order to assess actual cardiovascular parameters and adjust operation of photobiomodulation therapy garment20based on the therapy selected by the user or third-party individual. In some embodiments, a detected decrease in heart rate variability by cardiovascular sensor182and sent to controller200would result in controller200providing executable instructions to optimize the pulse wave by increasing the frequency of light emitted from the one or more near-infrared light source170in situations where a user or third-party individual has selected an alertness therapy. As an illustration, the initial pulse wave of photobiomodulation therapy garment20could be set at 40 Hz and based upon the detected heart rate variability controller200would increase the frequency of light emitted from the one or more near-infrared light source170to 50 Hz. Continuous monitoring and analysis of heart rate variability by cardiovascular sensor182and controller200could result in the 50 Hz pulse wave setting being maintained, or increased to 60 Hz or 70 Hz or more in order to establish the proper pulse wave for an alertness therapy being emitted from the one or more near-infrared light source170. Such dynamic monitoring of heart rate variability by cardiovascular sensor182and controller200would result in continuous adjustments to the pulse wave in order to achieve optimum pulse wave of the selected alertness therapy. In some embodiments, a detected increase in heart rate variability by cardiovascular sensor182and sent to controller200would result in controller200providing executable instructions to optimize the pulse wave by decreasing the frequency of light emitted from the one or more near-infrared light source170in situations where a user or third-party individual has selected a calmness or relaxation therapy. As an illustration, the initial pulse wave of photobiomodulation therapy garment20could be set at 40 Hz and based upon the detected heart rate variability controller200would decrease the frequency of light emitted from the one or more near-infrared light source170to 30 Hz. Continuous monitoring and analysis of heart rate variability by cardiovascular sensor182and controller200could result in the 30 Hz pulse wave setting being maintained, or decreased to 10 Hz or 1 Hz in order to establish the proper pulse wave for a calmness or relaxation therapy being emitted from the one or more near-infrared light source170. Such dynamic monitoring of heart rate variability by cardiovascular sensor182and controller200would result in continuous adjustments to the pulse wave in order to achieve optimum pulse wave of the selected calmness or relaxation therapy. As another example, skin sensor192obtains information on skin parameters from a user during operation of photobiomodulation therapy garment20and this input information is analyzed against skin information stored in an operational database in order to assess actual skin parameters and adjust operation of photobiomodulation therapy garment20based on the therapy selected by the user or third-party individual. 
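The closed-loop pulse-frequency adjustment described in the preceding paragraph can be expressed compactly in code. The Python sketch below is illustrative only: the 10 Hz step size and the 1 Hz to 70 Hz clamp are assumptions chosen to reproduce the 40 Hz to 50/60/70 Hz and 40 Hz to 30/10/1 Hz progressions given above, and the function name is hypothetical.

```python
# Illustrative sketch only: pulse-frequency feedback based on heart rate variability.
# Step size and clamp values are assumptions chosen to match the examples in the text.

def adjust_pulse_frequency_hz(current_hz, hrv_trend, therapy, step_hz=10.0,
                              min_hz=1.0, max_hz=70.0):
    """Return the next pulse-wave frequency. `hrv_trend` is negative when heart rate
    variability is decreasing and positive when it is increasing; `therapy` is
    "alertness" or "calmness"."""
    if therapy == "alertness" and hrv_trend < 0:
        current_hz += step_hz          # drive toward a more alert pulse wave
    elif therapy == "calmness" and hrv_trend > 0:
        current_hz -= step_hz          # drive toward a calmer pulse wave
    return max(min_hz, min(max_hz, current_hz))

# Example: alertness therapy starting at 40 Hz with falling HRV on two successive reads.
freq = 40.0
for trend in (-0.05, -0.02):
    freq = adjust_pulse_frequency_hz(freq, trend, "alertness")
print(freq)  # 60.0
```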
In some embodiments, a detected decrease in skin temperature by skin sensor192and sent to controller200would result in controller200providing executable instructions to optimize skin temperature by increasing the irradiance of light emitted from the one or more near-infrared light source170in situations where a user or third-party individual has selected an alertness therapy. As an illustration, the initial irradiance of photobiomodulation therapy garment20could be set at 250 mW/cm2and based upon the detected skin temperature controller200would increase the irradiance of light emitted from the one or more near-infrared light source170to 500 mW/cm2. Continuous monitoring and analysis of skin temperature by skin sensor192and controller200could result in the 500 mW/cm2irradiance setting being maintained, or increased to 750 mW/cm2or 1000 mW/cm2or more in order to establish the proper skin temperature for an alertness therapy. Such dynamic monitoring of skin temperature by skin sensor192and controller200would result in continuous adjustments to the irradiance in order to achieve optimum skin temperature of the selected alertness therapy. In some embodiments, a detected increase in skin temperature by skin sensor192and sent to controller200would result in controller200providing executable instructions to optimize skin temperature by decreasing the irradiance of light emitted from the one or more near-infrared light source170in situations where a user or third-party individual has selected a calmness or relaxation therapy. As an illustration, the initial irradiance of photobiomodulation therapy garment20could be set at 250 mW/cm2and based upon the detected skin temperature controller200would decrease the irradiance of light emitted from the one or more near-infrared light source170to 100 mW/cm2. Continuous monitoring and analysis of skin temperature by skin sensor192and controller200could result in the 100 mW/cm2irradiance setting being maintained, or decreased to 75 mW/cm2or 25 mW/cm2or less in order to establish the proper skin temperature for a calmness or relaxation therapy. Such dynamic monitoring of skin temperature by skin sensor192and controller200would result in continuous adjustments to the irradiance in order to achieve optimum skin temperature of the selected calmness or relaxation therapy. In some embodiments, a detected decrease in skin temperature by skin sensor192and sent to controller200would result in controller200providing executable instructions to optimize skin temperature by increasing the duty cycle of light emitted from the one or more near-infrared light source170in situations where a user or third-party individual has selected an alertness therapy. As an illustration, the initial duty cycle of photobiomodulation therapy garment20could be set at 50% and based upon the detected skin temperature controller200would increase the duty cycle of light emitted from the one or more near-infrared light source170to 60%. Continuous monitoring and analysis of skin temperature by skin sensor192and controller200could result in the 60% duty cycle setting being maintained, or increased to 75% or more in order to establish the proper skin temperature for an alertness therapy. Such dynamic monitoring of skin temperature by skin sensor192and controller200would result in continuous adjustments to the duty cycle in order to achieve optimum skin temperature of the selected alertness therapy. 
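A duty-cycle counterpart of the same feedback loop is sketched below in Python for illustration. The 10-percentage-point step and the 10% to 90% clamp are assumptions chosen to be consistent with the 50% to 60% to 75% progression described above; they are not values taken from this specification.

```python
# Illustrative sketch only: duty-cycle feedback based on skin temperature readings from
# skin sensor 192. Step size and clamp values are assumptions for illustration.

def adjust_duty_cycle_pct(current_pct, skin_temp_trend, therapy,
                          step_pct=10.0, min_pct=10.0, max_pct=90.0):
    """`skin_temp_trend` is negative for falling skin temperature, positive for rising;
    `therapy` is "alertness" or "calmness"."""
    if therapy == "alertness" and skin_temp_trend < 0:
        current_pct += step_pct
    elif therapy == "calmness" and skin_temp_trend > 0:
        current_pct -= step_pct
    return max(min_pct, min(max_pct, current_pct))

print(adjust_duty_cycle_pct(50.0, -0.3, "alertness"))  # 60.0
```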
In some embodiments, a detected increase in skin temperature by skin sensor192and sent to controller200would result in controller200providing executable instructions to optimize skin temperature by decreasing the duty cycle of light emitted from the one or more near-infrared light source170in situations where a user or third-party individual has selected a calmness or relaxation therapy. As an illustration, the initial duty cycle of photobiomodulation therapy garment20could be set at 50% and based upon the detected skin temperature controller200would decrease the duty cycle of light emitted from the one or more near-infrared light source170to 40%. Continuous monitoring and analysis of skin temperature by skin sensor192and controller200could result in the 40% duty cycle setting being maintained, or decreased to 25% or less in order to establish the proper skin temperature for a calmness or relaxation therapy. Such dynamic monitoring of skin temperature by skin sensor192and controller200would result in continuous adjustments to the duty cycle in order to achieve optimum skin temperature of the selected calmness or relaxation therapy. In some embodiments, a detected higher skin opacity, indicative of skin with higher melanin content, by skin sensor192and sent to controller200would result in controller200providing executable instructions to optimize skin penetration by adjusting the wavelength of light emitted from the one or more near-infrared light source170, or a combination of wavelengths, in order to provide optimal light penetration for the selected therapy. As an illustration, the initial wavelength of photobiomodulation therapy garment20could be set to 900 nm and based upon the detected skin opacity controller200would increase the wavelength of light emitted from the one or more near-infrared light source170to about 970 nm. Continuous monitoring and analysis of skin opacity by skin sensor192and controller200could result in the wavelength setting being maintained, or increased to 1000 nm or more in order to establish the proper wavelength penetration into the skin for the selected therapy. Such dynamic monitoring of skin opacity by skin sensor192and controller200would result in continuous adjustments to the wavelength in order to achieve optimum skin penetration of the selected therapy. In some embodiments, a detected lower skin opacity, indicative of skin with lower melanin content, by skin sensor192and sent to controller200would result in controller200providing executable instructions to optimize skin penetration by adjusting the wavelength of light emitted from the one or more near-infrared light source170, or a combination of wavelengths, in order to provide optimal light penetration for the selected therapy. As an illustration, the initial wavelength of photobiomodulation therapy garment20could be set to 900 nm and based upon the detected skin opacity controller200would decrease the wavelength of light emitted from the one or more near-infrared light source170to about 810 nm. Continuous monitoring and analysis of skin opacity by skin sensor192and controller200could result in the wavelength setting being maintained, or decreased to 790 nm or less in order to establish the proper wavelength penetration into the skin for the selected therapy. Such dynamic monitoring of skin opacity by skin sensor192and controller200would result in continuous adjustments to the wavelength in order to achieve optimum skin penetration of the selected therapy. 
In some embodiments, a detected higher skin density, indicative of skin with higher fat content, by skin sensor192and sent to controller200would result in controller200providing executable instructions to optimize skin penetration by adjusting the wavelength of light emitted from the one or more near-infrared light source170in order to provide optimal light penetration for the selected therapy. As an illustration, the initial wavelength of photobiomodulation therapy garment20could be set to 900 nm and based upon the detected skin density controller200would increase the wavelength of light emitted from the one or more near-infrared light source170to about 970 nm. Continuous monitoring and analysis of skin density by skin sensor192and controller200could result in the wavelength setting being maintained, or increased to 1000 nm or more in order to establish the proper wavelength penetration into the skin for the selected therapy. Such dynamic monitoring of skin density by skin sensor192and controller200would result in continuous adjustments to the wavelength in order to achieve optimum skin penetration of the selected therapy. In some embodiments, a detected lower skin density, indicative of skin with lower fat content, by skin sensor192and sent to controller200would result in controller200providing executable instructions to optimize skin penetration by adjusting the wavelength of light emitted from the one or more near-infrared light source170in order to provide optimal light penetration for the selected therapy. As an illustration, the initial wavelength of photobiomodulation therapy garment20could be set to 900 nm and based upon the detected skin density controller200would decrease the wavelength of light emitted from the one or more near-infrared light source170to about 810 nm. Continuous monitoring and analysis of skin density by skin sensor192and controller200could result in the wavelength setting being maintained, or decreased to 790 nm or less in order to establish the proper wavelength penetration into the skin for the selected therapy. Such dynamic monitoring of skin density by skin sensor192and controller200would result in continuous adjustments to the wavelength in order to achieve optimum skin penetration of the selected therapy. As another example, information can be provided by a user or a third-party individual during operation of photobiomodulation therapy garment20and this input information is either used directly in order to adjust operation of photobiomodulation therapy garment20based on the therapy selected by the user, or analyzed against user-defined or third-party individual defined information stored in an operational database in order to adjust operation of photobiomodulation therapy garment20based on the selected therapy. As an illustration, the initial therapy of photobiomodulation therapy garment20could be set to an alertness therapy and based upon user input (such as, e.g., “still tired” or “feel good”) or individual third-party input (based upon, e.g., monitoring of physiological or vital signs of user) controller200would adjust the characteristics of the light being emitted from the one or more near-infrared light source170. Continuous user or individual third-party input into controller200would establish the proper light characteristics for the selected alertness therapy. 
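One way to express the wavelength-selection behavior described in the two preceding paragraphs is as a simple mapping from normalized skin-parameter readings to an emission wavelength, as in the Python sketch below. The normalized 0-to-1 inputs, the equal weighting of opacity and density, and the 790 nm to 1000 nm clamp are assumptions for illustration only; the sketch simply shifts toward longer wavelengths for higher readings and shorter wavelengths for lower readings, consistent with the examples above.

```python
# Illustrative sketch only: mapping skin opacity and density readings to a wavelength.
# Normalization, weighting, and clamp values are assumptions for illustration.

def select_wavelength_nm(opacity, density, base_nm=900.0,
                         min_nm=790.0, max_nm=1000.0, span_nm=100.0):
    """`opacity` and `density` are normalized to [0, 1], with 0.5 treated as nominal;
    higher readings shift the emission toward longer wavelengths."""
    offset = ((opacity - 0.5) + (density - 0.5)) * span_nm
    return max(min_nm, min(max_nm, base_nm + offset))

print(select_wavelength_nm(0.8, 0.7))  # 950.0 (higher melanin/fat -> longer wavelength)
print(select_wavelength_nm(0.3, 0.4))  # 870.0 (lower melanin/fat -> shorter wavelength)
```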
Such dynamic monitoring of user or individual third-party input into controller200would result in continuous adjustments to the light characteristics in order to achieve optimum effect of the selected alertness therapy. As another example, sensor180obtains information on mitochondrial functionality from a user during operation of photobiomodulation therapy garment20and this input information is analyzed against mitochondrial functionality information stored in an operational database in order to assess actual mitochondrial functionality and adjust operation of photobiomodulation therapy garment20based on the therapy selected by the user or third-party individual. As an illustration, the initial therapy of photobiomodulation therapy garment20could be set to an alertness therapy and based upon detected mitochondrial functionality (such as, e.g., NAD+or NADH levels) controller200would adjust the characteristics of the light being emitted from the one or more near-infrared light source170. Continuous monitoring and analysis of mitochondrial functionality by sensor180and controller200would establish the proper light characteristics for the selected alertness therapy being emitted from the one or more near-infrared light source170. Such dynamic monitoring of mitochondrial functionality by sensor180and controller200would result in continuous adjustments to the light characteristics in order to achieve optimum effect of the selected alertness therapy. The adjustments described in the examples in the paragraphs above made by the controller200, and the processing performed therein on the various types of information, may be performed in conjunction with a machine learning-based framework that applies elements of artificial intelligence (AI) to analyze the information provided as input within models trained on historical or known data, such as that stored in the operational database(s) referenced above, to improve such adjustments to operational parameters of the one or more near-infrared light sources170, each of the one or more sensors180, and each of the one or more stimulators194. The present invention therefore may include such a machine learning-based framework, which may be comprised of multiple elements that perform, either together or as separately-instantiated models, several of the processing aspects performed by the controller200. The modeling performed within the machine learning-based framework may comprise many different types of machine learning, and apply many different mathematical approaches to analyzing information and generating outputs that improve outcomes in the continuous adjustments to the operational parameters of the one or more near-infrared light sources170, each of the one or more sensors180, and each of the one or more stimulators194that are described herein. For example, in some embodiments of the present invention, the machine learning-based framework may be comprised of algorithms that apply techniques of supervised learning, reinforcement learning, and other approaches of machine learning and artificial intelligence to further evaluate inputs into the controller200. The machine learning-based framework may be comprised of any of several different mathematical approaches. These may include statistical analyses, which are non-deterministic mathematical approaches that enable calculation of probabilities that events will or will not occur. 
Regression analyses are types of statistical analyses where models are used for estimating the relationships between variables of interest, such as for example a dependent variable and one or more independent variables (often called ‘predictors’). This type of machine learning is used to infer causal relationships between the independent and dependent variables, and for prediction and forecasting of outcomes where such causal relationships are impactful on future states for application of the overall modeling being performed. There are many types of regression analyses, such as linear and non-linear regression, and specific approaches such as logistic regression, that enable the use of derived parameters to interpret the importance of maximum values in form of the log-odds when calculating probability values. For example, other types of logistic functions, and other types of regression analyses, may also be utilized to calculate probabilities in the present invention, and are within the scope of the present invention. Other approaches that may be utilized include, but are not limited to, decision trees, random forest classifiers, support vector machines, and probit. It is therefore to be further understood that the present invention, and the present specification, are not to be limited to any one type of mathematical model or statistical process mentioned herein, particularly as to its application in the one or more layers of machine learning. Modeling within the machine learning-based framework may also include applications of neural networks. Neural networks generally are comprised of nodes, which are computational units having one or more biased input/output connections. Such biased connections act as transfer (or activation) functions that combine inputs and outputs in some way. Nodes are organized into multiple layers that form the neural network. There are many types of neural networks, which are computing systems that “learn” to perform tasks, without being programmed with task-specific rules, based on examples. Neural networks generally are based on arrays of connected, aggregated nodes (or, “neurons”) that transmit signals to each other in the multiple layers over the biased input/output connections. Connections, as noted above, are activation or transfer functions which “fire” these nodes and combine inputs according to mathematical equations or formulas. Different types of neural networks generally have different configurations of these layers of connected, aggregated nodes, but they can generally be described as an input layer, a middle or ‘hidden’ layer, and an output layer. These layers may perform different transformations on their various inputs, using different mathematical calculations or functions. Signals are transmitted between nodes over connections, and the output of each node is calculated in a non-linear function that sums all of the inputs to that node. Weight matrices and biases are typically applied to each node, and each connection, and these weights and biases are adjusted as the neural network processes inputs and transmits them across the nodes and connections. These weights represent increases or decreases in the strength of a signal at a particular connection. Additionally, nodes may have a threshold, such that a signal is sent only if the aggregated output at that node crosses that threshold. 
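As a concrete, non-limiting illustration of the regression-based modeling discussed above, the Python sketch below computes a logistic-regression output that could be interpreted as the probability that an operational parameter should be increased. The feature names and hand-set weights are assumptions for illustration; in practice the weights would be fit to historical data such as that stored in the operational database referenced above.

```python
# Illustrative sketch only: a logistic-regression prediction step. Feature names and
# weight values are assumptions; fitted weights would come from training on stored data.
import math

def predict_increase_probability(features, weights, bias):
    """Logistic regression: probability = sigmoid(w . x + b)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features: [HRV trend, skin-temperature trend, hours of therapy elapsed].
weights = [-4.0, -2.0, 0.5]
bias = 0.1
print(round(predict_increase_probability([-0.2, -0.1, 0.25], weights, bias), 3))  # about 0.773
```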
Weights generally represent how long an activation function takes, while biases represent when, in time, such a function starts; together, they help gradients minimize over time. At least in the case of weights, they can be initialized and change (i.e., decay) over time, as a system learns what weights should be, and how they should be adjusted. In other words, neural networks evolve as they learn, and the mathematical formulas and functions that comprise neural networks design can change over time as a system improves itself. The application of neural networks within the machine learning-based framework may include instantiations of different networks for different purposes. These include both “production” neural network(s), configured to refine the algorithms performed within the overall modeling framework to generate output data (for example, as adjusted operational parameters of the one or more near-infrared light sources170, each of the one or more sensors180, and each of the one or more stimulators194), and “training” neural network(s), configured to train the production network(s) using improvements on the reasons for prior, historical outcomes that have been learned. Recurrent neural networks are a name given to types of neural networks in which connections between nodes follow a directed temporal sequence, allowing the neural network to model temporal dynamic behavior and process sequences of inputs of variable length. These types of neural networks are deployed where there is a need for recognizing, and/or acting on, such sequences. As with neural networks generally, there are many types of recurrent neural networks. Neural networks having a recurrent architecture may also have stored, or controlled, internal states which permit storage under direct control of the neural network, making them more suitable for inputs having a temporal nature. This storage may be in the form of connections or gates which act as time delays or feedback loops that permit a node or connection to retain data that is prior in time for modeling such temporal dynamic behavior. Such controlled internal states are referred to as gated states or gated memory, and are part of long short-term memory networks (LSTMs) and gated recurrent units (GRUs), which are names of different types of recurrent neural network architectures. This type of neural network design is utilized where desired outputs of a system are motivated by the need for memory, as storage, and as noted above, where the system is designed for processing inputs that are comprised of timed data sequences. Examples of such timed data sequences include video, speech recognition, and handwriting—where processing requires an analysis of data that changes temporally. In the present invention, where output data is in the form of operational parameters of the one or more near-infrared light sources170, each of the one or more sensors180, and each of the one or more stimulators194, an understanding of the influence of various events on a state over a period of time lead to more highly accurate and reliable operational parameters that may at least impact an amount of time that stimulation is provided. Many other types of recurrent neural networks exist. These include, for example, fully recurrent neural networks, Hopfield networks, bi-directional associative memory networks, echo state networks, neural Turing machines, and many others, all of which exhibit the ability to model temporal dynamic behavior. 
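As a non-limiting illustration of why a recurrent architecture is suited to timed sensor sequences, the Python sketch below folds a sequence of readings through a single scalar hidden state. The hand-set weights and the tanh update are assumptions for illustration; a production model would use a trained LSTM or GRU with vector-valued states, as discussed above.

```python
# Illustrative sketch only: a toy recurrent update in which a hidden state carries
# information forward across a timed sequence of sensor readings. Weights are assumed.
import math

def run_recurrent(sequence, w_in=0.8, w_rec=0.5, bias=0.0):
    """Return the final hidden state after folding the input sequence through
    h_t = tanh(w_in * x_t + w_rec * h_{t-1} + bias)."""
    h = 0.0
    for x in sequence:
        h = math.tanh(w_in * x + w_rec * h + bias)
    return h

# Two sequences ending in the same reading but with different histories yield different
# states, which is the property that motivates recurrent models for timed sensor data.
print(round(run_recurrent([0.1, 0.1, 0.9]), 3))
print(round(run_recurrent([0.9, 0.9, 0.9]), 3))
```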
Any instantiation of such neural networks in the present invention may include one or more of these types, and it is to be understood that neural networks applied within the machine learning-based framework may include different ones of such types. Therefore, the present invention contemplates that many types of neural networks may be implemented, depending at least on the type of problem being analyzed. Controller200reversibly connects to photobiomodulation therapy garment20by operationally engaging terminal rail mount164. Controller200may optionally include a rechargeable battery positioned within the housing. Controller200can be detached from terminal rail mount164for charging the rechargeable battery therein, by using a charging connector, such as USB-C, micro-USB, or the like. Further, the charging connector can provide wired data communication with a remote computer, such as a smart phone, laptop, desktop, or other computer device. In one or more embodiments, this enables tracking of usage, and/or updating or changing operational parameters such as desired dosimetry, duration, pulsed operation, etc., and/or updating the controller firmware, and/or changing the type of photobiomodulation therapy garment20to which controller200will be attached. Controller200can be a universal controller, such that controller200can be connected to multiple embodiments of photobiomodulation therapy garment20, such as photobiomodulation therapy headband22, a neck region garment, a posterior cervical region garment, a carpal region garment, an abdominal region garment, and the like, each being configured to cover their respective regions when donned. One or more near-infrared light sources170of photobiomodulation unit100can be positioned into one or more separate near-infrared light source groupings arranged in a number of intergroup patterns relative to each other and on the one or more regions of interest of skin region S to be treated by the photobiomodulation therapy. For example, there can be, e.g., one near-infrared light source grouping, two near-infrared light source groupings, three near-infrared light source groupings, four near-infrared light source groupings, five near-infrared light source groupings, six near-infrared light source groupings, seven near-infrared light source groupings, eight near-infrared light source groupings, nine near-infrared light source groupings, or ten near-infrared light source groupings. Each near-infrared light source grouping is spaced apart from the neighboring groupings, where the intergroup spacing can be the same between each near-infrared light source grouping or can vary according to the desired dosimetry and vary according to the relative locations of the desired regions of interest. The relative pattern of near-infrared light source groupings is configured to position each grouping on photobiomodulation therapy garment20to at least partially cover their respective regions of interest on skin surface S, which may appear to be a random pattern to the casual observer. The spacing of an intergroup pattern between each near-infrared light source grouping can be defined by a column distance d1and by a row distance d2. The intergroup distance can be measured from the centers of the light sources. In one or more embodiments, column distance d1and row distance d2are at least 5 mm, or at least 10 mm, or at least 15 mm, or at least 20 mm, or at least 25 mm, or at least 30 mm, or at least 35 mm, or at least 40 mm. 
In a rectangular array, row distance d2can be the same distance as or differ from column distance d1. In some embodiments, photobiomodulation unit100of photobiomodulation therapy garment20comprises one or more near-infrared light source groupings. Each of the one or more near-infrared light source groupings is positioned in a pattern that is configured to direct each light source to a particular region of interest, when photobiomodulation therapy garment20is correctly positioned atop forehead of person P. In some embodiments, photobiomodulation unit100comprises one or more near-infrared light source groupings positioned so that when photobiomodulation therapy garment20is properly donned, each of the one or more near-infrared light source groupings is positioned in a manner that at least partially overlays or is substantially centered on a primary acupuncture meridian, a major extraordinary vessel, a minor extraordinary vessel, or any combination thereof. A primary acupuncture meridian includes, without limitation, a heart meridian, a pericardium meridian, a lung meridian, a spleen meridian, a liver meridian, a kidney meridian, a small intestine meridian, a large intestine meridian, a triple energizer meridian, a stomach meridian, a gallbladder meridian, and a bladder meridian. A major extraordinary vessel includes, without limitation, a conception vessel and a governing vessel. A minor extraordinary vessel includes, without limitation, a penetrating vessel, a girdling vessel, a yin linking vessel, a yin motility vessel, a yang linking vessel, and a yang motility vessel. In some embodiments, and as shown inFIGS.3,9,11,12,15, &16, photobiomodulation unit100comprises six near-infrared light source groupings, namely first near-infrared light source grouping171, second near-infrared light source grouping172, third near-infrared light source grouping173, fourth near-infrared light source grouping174, fifth near-infrared light source grouping175, and sixth near-infrared light source grouping176. In some embodiments, and referring toFIGS.3,15,16, but alsoFIGS.9&11, near-infrared light source groupings171,172,173,174,175,176of near-infrared light sources170present on photobiomodulation unit100are arranged in a rectangular array pattern, with three columns and two rows, with each near-infrared light source grouping separated by column distance d1and row distance d2. In these embodiments, near-infrared light source groupings171,172,173,174,175,176are positioned in a pattern that is configured to direct each light source to a particular region of interest, when photobiomodulation therapy headband22is correctly positioned atop forehead of person P. For example, in some embodiments, near-infrared light source groupings171,172,173,174,175,176are configured into photobiomodulation therapy headband22so that when donned atop forehead of person P, photobiomodulation therapy headband22is substantially positioned above superciliary arch region322in a manner that positions first, second, third, fourth, fifth and sixth near-infrared light source groupings171,172,173,174,175,176at least above eye sockets of person P. 
In some embodiments, each of near-infrared light source groupings171,172,173,174,175,176are positioned so that when photobiomodulation therapy headband22is properly donned, first near-infrared light source grouping171of near-infrared light source170is located in a first position that at least partially overlays or is substantially centered on Fp1 site300, second near-infrared light source grouping172of near-infrared light source170is located in a second position that at least partially overlays or is substantially centered on Fpz site302, third near-infrared light source grouping173of near-infrared light source170is located in a third position that at least partially overlays or is substantially centered on Fp2 site304, fourth near-infrared light source grouping174of near-infrared light source170is located in a fourth position that at least partially overlays or is substantially centered on F3 site306, fifth near-infrared light source grouping175of near-infrared light source170is located in a fifth position that at least partially overlays or is substantially centered on Fz site308, and sixth near-infrared light source grouping176of near-infrared light source170is located in a sixth position that at least partially overlays or is substantially centered on F4 site310. In some embodiments, and as shown inFIG.12, photobiomodulation unit100comprises six near-infrared light source groupings of near-infrared light source170, namely first near-infrared light source grouping171, second near-infrared light source grouping172, third near-infrared light source grouping173, fourth near-infrared light source grouping174, fifth near-infrared light source grouping175, and sixth near-infrared light source grouping176. The six near-infrared light source groupings are organized into two inverse triangles with first, second, third, and fourth near-infrared light source groupings171,172,173,174aligned in an upper row, and fifth and sixth near-infrared light source groupings175,176aligned in a lower row. First and second near-infrared light source groupings171,172are positioned to cover a region containing sites F3306and Fz308of head H and third and fourth light groupings173,174are positioned to cover a region containing site Fz308and F4310of head H. Fifth near-infrared light source grouping175is positioned to cover a region containing sites Fp1300and sixth near-infrared light source grouping176is positioned to cover a region containing sites Fp2304. In these embodiments, one or more sensors180positioned in the lower row in between fifth and sixth near-infrared light source groupings175,176. In some embodiments, and as shown inFIG.13, photobiomodulation unit100comprises five near-infrared light source groupings of near-infrared light source170, namely first near-infrared light source grouping171, second near-infrared light source grouping172, third near-infrared light source grouping173, fourth near-infrared light source grouping174, and fifth near-infrared light source grouping175. First, second, and third near-infrared light source groupings171,172,173, are aligned in an upper row, and fourth and fifth near-infrared light source groupings174,175aligned in a lower row, with fourth near-infrared light source grouping174located below first near-infrared light source grouping171and fifth near-infrared light source grouping175located below third near-infrared light source grouping173. 
First, second and third near-infrared light source groupings171,172,173are positioned to cover a region containing sites F3306, Fz308and F4310of head H. Fourth near-infrared light source grouping174is positioned to cover a region containing site Fp1300and fifth near-infrared light source grouping175is positioned to cover a region containing site Fp2304. In these embodiments, one or more sensors180are located in the lower row and are positioned below second near-infrared light source grouping172and in between fourth near-infrared light source grouping174and fifth near-infrared light source grouping175. In some embodiments, and as shown inFIG.14, photobiomodulation unit100comprises three near-infrared light source groupings of near-infrared light source170, namely first near-infrared light source grouping171, second near-infrared light source grouping172, and third near-infrared light source grouping173. First, second, and third near-infrared light source groupings171,172,173, are aligned in a row and are positioned to cover a region containing sites F3306, Fz308and F4310of head H. In these embodiments, one or more sensors180are positioned below second near-infrared light source grouping172. In some embodiments, each of the near-infrared light source groupings comprises a single near-infrared light source170. For example, and as shown inFIGS.11-14&16, each of near-infrared light source groupings171,172,173,174,175,176of photobiomodulation unit100comprises a single near-infrared light source170. In embodiments where only a single near-infrared light source170is present in a near-infrared light source grouping, such near-infrared light source170is preferably a high-powered near-infrared light source having a radiant intensity (brightness) range of about 150 mW/sr or more, and more preferably about 250 mW/sr or more. In some embodiments, each of the near-infrared light source groupings comprises a plurality of near-infrared light sources170. For example, and as shown inFIGS.9&15, each of near-infrared light source groupings171,172,173,174,175,176of photobiomodulation unit100comprises nine near-infrared light sources170. In embodiments where a plurality of near-infrared light sources170is present in a near-infrared light source grouping, such near-infrared light sources170can all be low-powered near-infrared light sources having a radiant intensity (brightness) range of 125 mW/sr or less. In other embodiments where a plurality of near-infrared light sources170is present in a near-infrared light source grouping, such near-infrared light sources170can be a combination of both high-powered near-infrared light sources having a radiant intensity (brightness) range of about 150 mW/sr or more, and more preferably about 250 mW/sr or more, and low-powered near-infrared light sources having a radiant intensity (brightness) range of 125 mW/sr or less. Additionally, in embodiments where a near-infrared light source grouping comprises a plurality of near-infrared light sources170, there is an intragroup spacing between each individual near-infrared light source170and the neighboring near-infrared light sources170within the same group. The intragroup spacing of each near-infrared light source170of a near-infrared light intragrouping can be the same between each individual near-infrared light source170or can vary according to the desired dosimetry and vary according to the relative locations of the desired regions of interest. 
In some embodiments, each individual near-infrared light source170of each near-infrared light source grouping is arranged in a pattern configured for a desired therapeutic effect, with each individual near-infrared light source170being randomly positioned relative to the other individual near-infrared light sources170within the same near-infrared light source group, and/or in a pattern determined by a combination of factors including a desired therapeutic effect, cost, manufacturing capabilities, and so on. The relative pattern of near-infrared light source groupings is configured to position each near-infrared light source170on photobiomodulation therapy garment20to at least partially cover their respective regions of interest on skin surface S, which may appear to be a random pattern to the casual observer. Each individual near-infrared light source170within a near-infrared light intragrouping is spaced from all other individual near-infrared light sources170within the same intragrouping by an intragroup light source spacing. Each near-infrared light source170in a near-infrared light intragrouping can be arranged in a pattern that matches the location of multiple regions of interest on skin surface S. Thus, the resulting near-infrared light intragrouping can seemingly be arranged in irregular patterns that correspond to the location of multiple regions of interest on skin surface S, where each can be simultaneously at least partially covered by a corresponding grouping. As a result, the intergroup spacing and relative positioning of each near-infrared light source170in a near-infrared light intragrouping can vary according to the locations of the regions of interest on skin surface S. In some embodiments, for example in a rectangular arrangement as shown inFIG.3, the spacing between each near-infrared light source170of a near-infrared light intragrouping can be defined by a column spacing d3and by a row spacing d4. The intragroup spacing can be measured from the centers of the near-infrared light source170. In some embodiments, column spacing d3and row spacing d4are at least 2 mm, at least 3 mm, at least 4 mm, at least 5 mm, at least 6 mm, at least 7 mm, at least 8 mm, at least 9 mm, at least 10 mm, at least 12 mm, or at least 15 mm. In a rectangular array, column spacing d3can be the same distance as or differ from row spacing d4. In such rectangular arrangement, each near-infrared light source170of a near-infrared light intragrouping is arranged in an n1×n2 array, where n1 and n2 each represent the number of individual near-infrared light sources170in a row and column, respectively. For example, an infrared light intragrouping can be a 2×2 array, a 2×3 array, a 3×2 array, a 3×3 array, a 3×4 array, a 4×3 array, a 4×4 array, a 2×5 array, a 5×2 array, a 3×5 array, a 5×3 array, a 4×5 array, a 5×4 array, a 5×5 array, and so on. In some embodiments, each near-infrared light source170of a near-infrared light intragrouping can be configured as a radial or circular array about a single circle or multiple concentric circles. In these embodiments, an intragroup spacing can be measured from the centers of the light sources. In some embodiments, each of near-infrared light source of photobiomodulation therapy garment20is configured as near-infrared light intragrouping comprising a plurality of near-infrared light sources170arranged in an intragroup array. 
In some embodiments, near-infrared light source groups171,172,173,174,175,176of photobiomodulation therapy garment20are each configured as a near-infrared light intragrouping comprising a plurality of near-infrared light sources170arranged in an intragroup array, with first near-infrared light source group171comprising a plurality of near-infrared light sources170arranged in a first near-infrared light intragrouping, second near-infrared light source group172comprising a plurality of near-infrared light sources170arranged in a second near-infrared light intragrouping, third near-infrared light source group173comprising a plurality of near-infrared light sources170arranged in a third near-infrared light intragrouping, fourth near-infrared light source group174comprising a plurality of near-infrared light sources170arranged in a fourth near-infrared light intragrouping, fifth near-infrared light source group175comprising a plurality of near-infrared light sources170arranged in a fifth near-infrared light intragrouping, and sixth near-infrared light source group176comprising a plurality of near-infrared light sources170arranged in a sixth near-infrared light intragrouping. In some embodiments, and referring toFIGS.3,9, &12, near-infrared light source groupings171,172,173,174,175,176of photobiomodulation therapy headband22are each configured as a near-infrared light intragrouping comprising nine near-infrared light sources170arranged in a 3×3 array of three columns and three rows. In this example, d4is greater than d3, which can create therapeutic benefits due to the combined and overlapping light patterns incident on skin surface S surface, as well as strategic gaps or areas of lesser overlap of light patterns. In the illustrated example embodiment, d3=6 mm to 7 mm and d4=9 mm to 10 mm. The overlapping pattern of incident light creates regions of varying power levels incident on skin surface S within and around each array or grouping, with areas of maximum irradiance and fluence immediately beneath each individual near-infrared light source170, areas of lesser irradiance and fluence between closely situated individual near-infrared light sources170, and areas of least irradiance and fluence between individual near-infrared light sources170situated furthest from one another. Additionally, although each near-infrared light intragrouping of near-infrared light source groupings171,172,173,174,175,176is illustrated as having the same intragroup pattern, each intragroup pattern can be configured with differing patterns and numbers of individual near-infrared light sources170, which can be determined based on the desired form of therapy and the dosimetry required for each region of interest. In one or more embodiments, in operation, groupings of near-infrared light sources can all be activated by controller200using the same operational parameters (e.g., all groupings simultaneously activated, all in pulsed mode, and all with the same power settings). In one or more embodiments, in operation, groupings of near-infrared light sources can each be activated by controller200with differing operational parameters, where one or more selected groupings may be activated, while other groupings remain off. Further, in one or more embodiments, controller200has capabilities to control the power level and/or pulsed/continuous operation for each grouping of near-infrared light sources independent of other groupings of near-infrared light sources on photobiomodulation therapy garment20. 
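The intragroup and intergroup geometry described above can be made concrete with a short calculation of light-source center coordinates, as in the Python sketch below. The origin convention, the 30 mm intergroup offset, and the use of midpoint values of 6.5 mm and 9.5 mm for column spacing d3and row spacing d4are assumptions for illustration only.

```python
# Illustrative sketch only: center coordinates (in mm) for a rectangular intragroup array,
# given column spacing d3, row spacing d4, and a grouping origin set by the intergroup
# distances. The origin convention and example offsets are assumptions.

def grouping_coordinates(origin_x_mm, origin_y_mm, n_cols, n_rows, d3_mm, d4_mm):
    """Return a list of (x, y) centers for an n_cols x n_rows intragroup array,
    measured from light-source centers, with the grouping's first source at the origin."""
    return [(origin_x_mm + c * d3_mm, origin_y_mm + r * d4_mm)
            for r in range(n_rows) for c in range(n_cols)]

# A hypothetical first grouping at (0, 0) and a neighboring grouping offset by 30 mm.
first = grouping_coordinates(0.0, 0.0, n_cols=3, n_rows=3, d3_mm=6.5, d4_mm=9.5)
second = grouping_coordinates(30.0, 0.0, n_cols=3, n_rows=3, d3_mm=6.5, d4_mm=9.5)
print(first[:3])   # first row of the first grouping
print(second[:3])  # first row of the neighboring grouping
```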
There is a great deal of flexibility in operational parameters available. Not only can each individual grouping of near-infrared lights be individually operated, but each individual near-infrared light source170in each grouping of near-infrared lights can be individually addressed and controlled using individual operating parameters. In this way, each individual near-infrared light source170can be individually addressable as a unit such that each can be activated/turned on or deactivated/turned off independent of all other individual near-infrared light sources170. Further, in one or more embodiments, each individual near-infrared light source170can be actuated in a pulsed or continuous mode independent of all other individual near-infrared light sources170. Additionally, in one or more embodiments, each individual near-infrared light source170can be actuated using a power profile independent of all other individual light sources. In this way, a number of predefined patterns can be initiated via executable instructions from controller200, where the patterns of activated light sources can change according to the desired therapeutic effect and location of regions of interests. Looking now atFIGS.8,15&16, in some embodiments, photobiomodulation therapy garment20is assembled by sandwiching photobiomodulation unit100between outer fabric sheet40and inner fabric sheet70. In some embodiments, and as shown inFIG.8, a hot melt adhesive film210, sized and shaped to cover a substantial portion or all of flexible printed circuit board assembly110, but not to cover terminal rail mount164of connection terminal160, is positioned between outer fabric sheet40and photobiomodulation unit100. In some embodiments, and as shown inFIGS.15&16, liquid wire circuit assembly150is affixed directly to outer fabric sheet40, e.g., by using an adhesive or weaving into outer fabric sheet40. In embodiments where photobiomodulation unit100comprises flexible printed circuit board assembly110, photobiomodulation unit100is aligned to outer fabric sheet40in a manner that allows terminal rail mount164of connection terminal160to be inserted through terminal rail mount opening58. In embodiments where photobiomodulation unit100comprises liquid wire circuit assembly150, connection terminal160is affixed to outer fabric sheet40during construction of liquid wire circuit assembly150onto outer fabric sheet40. Still referring toFIGS.8,15&16, once photobiomodulation unit100is positioned on outer fabric sheet40, a layer of double-sided tape220, sized and shaped to cover a substantial portion or all of photobiomodulation unit100, is positioned between photobiomodulation unit100and inner fabric sheet70. Double-sided tape220includes one or more near infrared light source openings222and one or more sensor openings224, each being cutouts configured to provide clearance for their respective components, such that double-sided tape220does not interfere with the operation of the one or more near-infrared light sources170and one or more sensors180. If present, sensor cover79is properly positioned over its corresponding sensor180. Inner fabric sheet70is then aligned with outer fabric sheet40and photobiomodulation unit100and positioned so that each of the one or more near-infrared light sources170and each of the one or more sensors180is properly positioned with their corresponding near infrared light source opening76and sensor opening78thereby permitting proper functioning of these components. 
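The individual addressability described above can be illustrated with a minimal software register that overlays a predefined pattern onto per-source settings, as in the Python sketch below. The dictionary-based register, the example grouping numbers, and the "activate one grouping continuously at reduced power" pattern are assumptions for illustration only.

```python
# Illustrative sketch only: individually addressing each near-infrared light source so
# that predefined patterns can switch selected sources on or off and set pulsed or
# continuous operation per source. The register layout and pattern are assumptions.

def apply_pattern(states, pattern):
    """`states` maps a (grouping_id, index) address to a per-source settings dict;
    `pattern` maps the same addresses to partial settings to overlay."""
    for address, overrides in pattern.items():
        states[address].update(overrides)
    return states

# Hypothetical register: two groupings of three sources each, all initially off.
states = {(g, i): {"on": False, "pulsed": True, "power_pct": 100}
          for g in (171, 174) for i in range(3)}

# Pattern: activate only grouping 171, continuous mode, at reduced power.
pattern = {(171, i): {"on": True, "pulsed": False, "power_pct": 60} for i in range(3)}
apply_pattern(states, pattern)
print(states[(171, 0)], states[(174, 0)])
```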
Inner fabric sheet70can then be secured to outer fabric sheet40by sewing the edges of inner fabric sheet70to outer fabric sheet40. A photobiomodulation therapy garment disclosed herein is useful in providing a photobiomodulation therapy, including a transcranial photobiomodulation therapy. Such non-invasive light-based neuromodulation treatment requires no medication and provides long-lasting benefits by changing how a user's brain works from the neuron-level up by providing a variety of positive photochemical reactions. For example, a photobiomodulation therapy can increase neuronal mitochondria energy and adenosine triphosphate (ATP) production resulting in increased production of cellular energy. In addition, transfer of light energy can also trigger reactive oxygen species (ROS) production, which can regulate cellular and tissue-level inflammation and improve cellular repair and healing, and nitric oxide (NO) production which is critical for good blood vessel health and optimal blood flow, nutrient delivery and waste removal. This is important as inadequate cerebral blood flow and circulation can make the brain experience fuzzy memory, forgetfulness, poor concentration and even dementia. Enhanced cellular energy and increased cerebral blood flow result in increased neurogenesis and neuronal plasticity, increased neuroprotection, enhanced neural repair, and reduced inflammation. In addition, such photobiomodulation therapy provides both calming and relaxation benefits as well as improved focus and performance resulting in enhanced mental productivity, mental wellbeing and overall cognitive function. In some embodiments, a photobiomodulation therapy garment disclosed herein is used as the sole therapeutic device. In some embodiments, a photobiomodulation therapy garment disclosed herein is used in conjunction with another therapy. In some embodiments, a photobiomodulation therapy garment disclosed herein is used in conjunction with another cognitive behavioral therapy. In some embodiments, a photobiomodulation therapy garment disclosed herein is used in conjunction with another photobiomodulation therapy, such as, e.g., a high-power irradiance photobiomodulation therapy. In some embodiments, an individual undergoes a high-power transcranial photobiomodulation therapy using a stationary device capable of administering an irradiance of about 250 mW/cm2or more in conjunction with a low-power transcranial photobiomodulation therapy using a photobiomodulation therapy garment disclosed herein capable of administering an irradiance of about 55 mW/cm2or less. In some embodiments, a high-power photobiomodulation therapy is conducted in a clinical or other healthcare facility setting while a low-power photobiomodulation therapy is conducted in a non-clinical setting, such as, e.g., at home, in a park, or when traveling in a vehicle. In some embodiments, a low-power transcranial photobiomodulation therapy is used to augment the effectiveness of a high-power transcranial photobiomodulation therapy and improve the treatment of depression and depressive symptoms in the individual. In some embodiments, a circadian-based timing administration disclosed herein would be used to time the administration of a high-power transcranial photobiomodulation therapy, a low-power transcranial photobiomodulation therapy, or both. In some embodiments, a photobiomodulation therapy garment disclosed herein is used in conjunction with transcranial magnetic stimulation (TMS). 
In some embodiments, an individual undergoes a TMS in conjunction with a low-power transcranial photobiomodulation therapy using a photobiomodulation therapy garment disclosed herein capable of administering an irradiance of about 20 mW/cm2to about 500 mW/cm2. In some embodiments, a TMS is conducted in a clinical or other healthcare facility setting while a low-power photobiomodulation therapy is conducted in a non-clinical setting, such as, e.g., at home, in a park, or when traveling in a vehicle. In some embodiments, a low-power transcranial photobiomodulation therapy is used to augment the effectiveness of a TMS and improve the treatment depression and depressive symptoms in the individual. In some embodiments, a circadian-based timing administration disclosed herein would be used to time the administration of a TMS, a low-power transcranial photobiomodulation therapy, or both. In some embodiments, a photobiomodulation therapy garment disclosed herein is used in conjunction with an evidence-based mental health practice. In some embodiments, an individual undergoes an evidence-based mental health practice in conjunction with a low-power transcranial photobiomodulation therapy using a photobiomodulation therapy garment disclosed herein capable of administering an irradiance of about 55 mW/cm2or less. An evidence-based mental health practice includes, without limitation, Evidence Based Psychotherapy (EBT), Cognitive Behavioral Therapy (CBT), Dialectical Behavioral Therapy (DBT), Exposure Therapy, Functional Family Therapy (FFT), Assertive Community Treatment (ACT), Acceptance and Commitment Therapy (ACT), Prolonged Exposure Therapy (PE), Cognitive Training and Rehab, and Motivational Interviewing (MI). In some embodiments, an evidence-based mental health practice is conducted by a therapist in a clinical or other healthcare facility setting while the low-power photobiomodulation therapy is conducted in a non-clinical setting, such as, e.g., at home, in a park, or when traveling in a vehicle. In some embodiments, an evidence-based mental health practice is conducted by a therapist in a virtual setting while the low-power photobiomodulation therapy is conducted in a non-clinical setting, such as, e.g., at home, in a park, or when traveling in a vehicle. In some embodiments, an evidence-based mental health practice is a digital-based Artificial Intelligence (AI) therapy while the low-power photobiomodulation therapy is conducted in a non-clinical setting, such as, e.g., at home, in a park, or when traveling in a vehicle. In some embodiments, a low-power transcranial photobiomodulation therapy is used to augment the effectiveness of an evidence-based mental health practice and improve the treatment depression and depressive symptoms in the individual. In some embodiments, a circadian-based timing administration disclosed herein would be used to time the administration of an evidence-based mental health practice, a low-power transcranial photobiomodulation therapy, or both. In some embodiments, a photobiomodulation therapy garment disclosed herein is used in conjunction with an ocular light therapy, such as, e.g., a bright light therapy or blue light therapy. In some embodiments, an individual undergoes an ocular light therapy in conjunction with a transcranial photobiomodulation therapy using a photobiomodulation therapy garment disclosed herein capable of administering an irradiance of about 20 mW/cm2to about 500 mW/cm2. 
In some embodiments, a transcranial photobiomodulation therapy can be administered on a daily basis during an ocular light therapy and/or between each of two or more ocular light therapies. In some embodiments, the transcranial photobiomodulation therapy is used to augment the effectiveness of an ocular light therapy by enhancing relaxation, calmness, and well-being. In some embodiments, a circadian-based timing administration disclosed herein would be used to time the administration of an ocular light therapy, a transcranial photobiomodulation therapy, or both. In some embodiments, a photobiomodulation therapy garment disclosed herein is used in conjunction with a mindfulness therapy. In some embodiments, an individual practices a mindfulness therapy in conjunction with a transcranial photobiomodulation therapy using a photobiomodulation therapy garment disclosed herein capable of administering an irradiance of about 20 mW/cm2 to about 300 mW/cm2. In some embodiments, a transcranial photobiomodulation therapy can be administered on a daily basis during a mindfulness therapy and/or between each of two or more mindfulness therapies. In some embodiments, a transcranial photobiomodulation therapy is used to augment the effectiveness of a mindfulness therapy by enhancing relaxation, calmness, and well-being. In some embodiments, a circadian-based timing administration disclosed herein would be used to time the administration of a mindfulness therapy, a transcranial photobiomodulation therapy, or both. In some embodiments, a photobiomodulation therapy garment disclosed herein is used in conjunction with a meditative therapy. In some embodiments, an individual practices a meditative therapy in conjunction with a transcranial photobiomodulation therapy using a photobiomodulation therapy garment disclosed herein capable of administering an irradiance of about 20 mW/cm2 to about 300 mW/cm2. In some embodiments, a transcranial photobiomodulation therapy can be administered on a daily basis during a meditative therapy and/or between each of two or more meditative therapies. In some embodiments, a transcranial photobiomodulation therapy is used to augment the effectiveness of a meditative therapy by enhancing relaxation, calmness, and well-being. In some embodiments, a circadian-based timing administration disclosed herein would be used to time the administration of a meditative therapy, a transcranial photobiomodulation therapy, or both. In some embodiments, a photobiomodulation therapy using a photobiomodulation therapy garment disclosed herein, whether alone or in conjunction with another therapy, is administered based on a circadian rhythm of an individual. In some embodiments, an individual undergoes a transcranial photobiomodulation therapy using a photobiomodulation therapy garment disclosed herein in the morning hours, such as, e.g., between 6:00 am and 10:00 am. In some embodiments, an individual undergoes a transcranial photobiomodulation therapy using a photobiomodulation therapy garment disclosed herein in the afternoon/early evening hours, such as, e.g., between 3:00 pm and 7:00 pm. A photobiomodulation therapy garment disclosed herein capable of administering an irradiance of about 20 mW/cm2 to about 500 mW/cm2 would be used in such a circadian-based timing administration. In some embodiments, a circadian-based timing administration would be useful for the treatment of depression and depressive symptoms in the individual. 
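The circadian-based timing administration described above can be reduced to a simple scheduling check on the controller or a companion application: a session is permitted only when the local time falls within a configured morning window (e.g., between 6:00 am and 10:00 am) or an afternoon/early evening window (e.g., between 3:00 pm and 7:00 pm). The following is a minimal illustrative sketch of such a check; the function and variable names are hypothetical and are not part of the disclosed controller firmware.

    # Illustrative circadian scheduling check; window boundaries follow the
    # example morning (6:00-10:00 am) and afternoon/early evening (3:00-7:00 pm)
    # hours described above. Names are hypothetical.
    from datetime import datetime, time
    from typing import List, Optional, Tuple

    TREATMENT_WINDOWS: List[Tuple[time, time]] = [
        (time(6, 0), time(10, 0)),   # morning window
        (time(15, 0), time(19, 0)),  # afternoon/early evening window
    ]

    def session_permitted(now: Optional[datetime] = None) -> bool:
        """Return True if the current local time falls inside a treatment window."""
        current = (now or datetime.now()).time()
        return any(start <= current <= end for start, end in TREATMENT_WINDOWS)

    if __name__ == "__main__":
        # Example: gate the start of a low-power tPBM session on the circadian schedule.
        print("Session allowed now:", session_permitted())

In practice the window boundaries would be configurable per individual, since the circadian-based timing administration disclosed herein is intended to track the circadian rhythm of the individual.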
Aspects of the present specification may also be described by the following embodiments: 1. A photobiomodulation therapy garment worn atop a skin surface having a region of interest, the photobiomodulation therapy garment comprising a flexible outer sheet; a flexible inner sheet having a portion to permit passage of near-infrared light therethrough, the inner sheet being configured to face the skin surface; a flexible circuit board positioned between the outer sheet and the inner sheet; a near-infrared light source mounted on the flexible circuit board and aligned with the portion of the flexible inner sheet, the near-infrared light source configured to emit near-infrared light at a wavelength between 600 nm and 1600 nm and at a predetermined dosimetry directed at the region of interest on the skin surface during a photobiomodulation treatment; and a controller having a processor and a memory, the controller being in electrical communication with the near-infrared light source through the flexible circuit board, the processor and the memory configured with executable instructions for controlling one or more of a light source operation time, a light source fluence level, a light source irradiance level, a light source pulsed operation, and a light source continuous operation. 2. The photobiomodulation therapy garment of embodiment 1 wherein the near-infrared light source is part of a grouping of near-infrared light sources arranged on the flexible circuit board and configured to be directed at the region of interest on the skin surface during the photobiomodulation treatment. 3. The photobiomodulation therapy garment of embodiments 1 or 2 wherein the grouping of near-infrared light sources is arranged with an intragroup light source spacing of at least 2 mm therebetween, or at least 3 mm therebetween, or at least 4 mm therebetween, or at least 5 mm therebetween, or at least 6 mm therebetween, or at least 7 mm therebetween, or at least 8 mm therebetween, or at least 9 mm therebetween, or at least 10 mm therebetween. 4. The photobiomodulation therapy garment of any one of embodiments 1-3 wherein the region of interest is one of an Fp1 site, an Fpz site, an Fp2 site, an F3 site, an Fz site, an F4 site, a posterior cervical site, a carpal site, and an abdominal site, and the grouping of the near-infrared light sources is configured to at least partially overlay the region of interest. 5. The photobiomodulation therapy garment of any one of embodiments 1-4 wherein a sensor is configured to detect one or more parameters indicative of a position and thereafter transmit a position signal to the controller so that the position on the skin surface can be determined. 6. The photobiomodulation therapy garment of any one of embodiments 1-5 wherein the grouping of near-infrared light sources is configured to at least partially overlay an Fp1 site, a second grouping of near-infrared light sources is configured to at least partially overlay an Fpz site, a third grouping of near-infrared light sources is configured to at least partially overlay an Fp2 site, a fourth grouping of near-infrared light sources is configured to at least partially overlay an F3 site, a fifth grouping of near-infrared light sources is configured to at least partially overlay an Fz site, and a sixth grouping of near-infrared light sources is configured to at least partially overlay an F4 site. 7. 
The photobiomodulation therapy garment of any one of embodiments 1-6 wherein a sensor is configured to detect one or more parameters indicative of a position and thereafter transmit a position signal to the controller so that the position on the skin surface can be determined, the sensor is positioned between the second grouping of near-infrared light sources and the fifth grouping of near-infrared light sources. 8. The photobiomodulation therapy garment of any one of embodiments 1-7 wherein each of the grouping of near-infrared light sources, the second grouping of near-infrared light sources, the third grouping of near-infrared light sources, the fourth grouping of near-infrared light sources, the fifth grouping of near-infrared light sources, and the sixth grouping of near-infrared light sources are minimally separated from one another by an intergroup light source spacing that is greater than 5 mm, or that is greater than 10 mm, or that is greater than 15 mm, or that is greater than 20 mm, or that is greater than 25 mm, or that is greater than 30 mm. 9. The photobiomodulation therapy garment of any one of embodiments 1-8 wherein the grouping of near-infrared light sources is arranged in a first 3×3 array. 10. The photobiomodulation therapy garment of any one of embodiments 1-9 further comprising a second grouping of near-infrared light sources arranged in a second 3×3 array, a third grouping of near-infrared light sources arranged in a third 3×3 array, a fourth grouping of near-infrared light sources arranged in a fourth 3×3 array, a fifth grouping of near-infrared light sources arranged in a fifth 3×3 array, and a sixth grouping of near-infrared light sources arranged in a sixth 3×3 array. 11. The photobiomodulation therapy garment of any one of embodiments 1-10 wherein each of the first 3×3 array, the second 3×3 array, the third 3×3 array, the fourth 3×3 array, the fifth 3×3 array, and the sixth 3×3 array are minimally separated from one another by an intergroup light source spacing that is greater than 5 mm, or that is greater than 10 mm, or that is greater than 15 mm, or that is greater than 20 mm, or that is greater than 25 mm, or that is greater than 30 mm. 12. The photobiomodulation therapy garment of any one of embodiments 1-11 wherein each of the first 3×3 array, the second 3×3 array, the third 3×3 array, the fourth 3×3 array, the fifth 3×3 array, and the sixth 3×3 array are minimally separated from one another by an intergroup light source spacing sufficient to prevent substantial light bleed therebetween. 13. The photobiomodulation therapy garment of any one of embodiments 1-12 wherein the region of interest is one or more of an Fp1 site, an Fpz site, an Fp2 site, an F3 site, an Fz site, an F4 site, a posterior cervical site, a carpal site, and an abdominal site on the skin surface. 14. 
The photobiomodulation therapy garment of any one of embodiments 1-13 wherein the near-infrared light source is configured to emit near-infrared light directed to an Fp1 site, a second near-infrared light source is configured to emit near-infrared light directed to an Fpz site, a third near-infrared light source is configured to emit near-infrared light directed to an Fp2 site, a fourth near-infrared light source is configured to emit near-infrared light directed to an F3 site, a fifth near-infrared light source is configured to emit near-infrared light directed to an Fz site, and a sixth near-infrared light source is configured to emit near-infrared light directed to an F4 site, wherein the Fp1 site is the region of interest, the Fpz site is a second region of interest, the Fp2 site is a third region of interest, the F3 site is a fourth region of interest, the Fz site is a fifth region of interest, and the F4 site is a sixth region of interest. 15. The photobiomodulation therapy garment of any one of embodiments 1-14 wherein a sensor is configured to detect one or more parameters indicative of a position and thereafter transmit a position signal to the controller so that the position can be determined on the skin surface, wherein the sensor is positioned between the second near-infrared light source and the fifth near-infrared light source. 16. The photobiomodulation therapy garment of any one of embodiments 1-15 wherein the sensor is one or both of a heart rate sensor and a temperature sensor. 17. The photobiomodulation therapy garment of any one of embodiments 1-16 wherein each of the near-infrared light source, the second near-infrared light source, the third near-infrared light source, the fourth near-infrared light source, the fifth near-infrared light source, and the sixth near-infrared light source are separated from one another by a light source spacing that is greater than 5 mm, or that is greater than 10 mm, or that is greater than 15 mm, or that is greater than 20 mm, or that is greater than 25 mm, or that is greater than 30 mm. 18. The photobiomodulation therapy garment of any one of embodiments 1-9 wherein the near-infrared light source is a first grouping of near-infrared light sources, the second near-infrared light source is a second grouping of near-infrared light sources, the third near-infrared light source is a third grouping of near-infrared light sources, the fourth near-infrared light source is a fourth grouping of near-infrared light sources, the fifth near-infrared light source is a fifth grouping of near-infrared light sources, and the sixth near-infrared light source is a sixth grouping of near-infrared light sources. 19. The photobiomodulation therapy garment of any one of embodiments 1-18 wherein the grouping of near-infrared light sources is arranged in a first 3×3 array, the second grouping of near-infrared light sources is arranged in a second 3×3 array, the third grouping of near-infrared light sources is arranged in a third 3×3 array, the fourth grouping of near-infrared light sources is arranged in a fourth 3×3 array, the fifth grouping of near-infrared light sources is arranged in a fifth 3×3 array, and the sixth grouping of near-infrared light sources is arranged in a sixth 3×3 array. 20. 
The photobiomodulation therapy garment of any one of embodiments 1-19 wherein each of the first 3×3 array, the second 3×3 array, the third 3×3 array, the fourth 3×3 array, the fifth 3×3 array, and the sixth 3×3 array are minimally separated from one another by an intergroup light source spacing that is greater than 5 mm, or that is greater than 10 mm, or that is greater than 15 mm, or that is greater than 20 mm, or that is greater than 25 mm, or that is greater than 30 mm. 21. The photobiomodulation therapy garment of any one of embodiments 1-20 wherein each of the first 3×3 array, the second 3×3 array, the third 3×3 array, the fourth 3×3 array, the fifth 3×3 array, and the sixth 3×3 array are separated from one another by an intergroup light source spacing sufficient to prevent substantial light bleed therebetween. 22. The photobiomodulation therapy garment of any one of embodiments 1-21 further comprising one or more stimulators. 23. The photobiomodulation therapy garment of embodiment 22, wherein the one or more stimulators include a component that can generate a magnetic field. 24. A photobiomodulation therapy garment comprising a garment structure configured to be donned by a user atop a skin surface; a first near-infrared light source integrated with the garment structure; a second near-infrared light source integrated with the garment structure and spaced apart from the first near-infrared light source, the first near-infrared light source and the second near-infrared light source configured to emit near-infrared light at a wavelength between 600 nm to 1600 nm and at a predetermined dosimetry, the first near-infrared light source configured to be directed toward a first region of interest on the skin surface and the second near-infrared light source configured to be directed toward a second region of interest on the skin surface when donned during a photobiomodulation treatment; and a controller having a processor and a memory, the controller being in electrical communication with the first near-infrared light source and the second near-infrared light source, and configured with executable instructions for independently controlling the operation of each of the first near-infrared light source and the second near-infrared light source. 25. The photobiomodulation therapy garment of embodiment 24 wherein the executable instructions are configured for controlling one or more of a light source operation time, a light source fluence level, a light source irradiance level, a light source pulsed operation, and a light source continuous operation. 26. The photobiomodulation therapy garment of embodiments 24 or 25 wherein a sensor is integrated with the garment structure and configured to detect one or more parameters indicative of a reference position on the skin surface, such that when the sensor is positioned atop a reference position on the skin surface the first near-infrared light source will be positioned atop the first region of interest of the skin surface and the second near-infrared light source will be positioned atop the second region of interest of the skin surface. 27. The photobiomodulation therapy garment of any one of embodiments 24-26 wherein the sensor is one or both of a heart rate sensor and a temperature sensor. 28. 
The photobiomodulation therapy garment of any one of embodiments 24-27 wherein the first near-infrared light source is part of a first grouping of near-infrared light sources and the second near-infrared light source is part of a second grouping of near-infrared light sources. 29. The photobiomodulation therapy garment of any one of embodiments 24-28 wherein each of the first grouping of near-infrared light sources and the second grouping of near-infrared light sources are arranged with an intragroup light source spacing of at least 2 mm therebetween, or at least 3 mm therebetween, or at least 4 mm therebetween, or at least 5 mm therebetween, or at least 6 mm therebetween, or at least 7 mm therebetween, or at least 8 mm therebetween, or at least 9 mm therebetween, or at least 10 mm therebetween. 30. The photobiomodulation therapy garment of any one of embodiments 24-29 wherein the first grouping of near-infrared light sources and the second grouping of near-infrared light sources are minimally separated from one another by an intergroup light source spacing that is greater than 5 mm, or that is greater than 10 mm, or that is greater than 15 mm, or that is greater than 20 mm, or that is greater than 25 mm, or that is greater than 30 mm. 31. The photobiomodulation therapy garment of any one of embodiments 24-30 wherein the first grouping of near-infrared light sources is arranged in a first 3×3 array and the second grouping of near-infrared light sources is arranged in a second 3×3 array. 32. The photobiomodulation therapy garment of any one of embodiments 24-31 further comprising a third grouping of near-infrared light sources arranged in a third 3×3 array, a fourth grouping of near-infrared light sources arranged in a fourth 3×3 array, a fifth grouping of near-infrared light sources arranged in a fifth 3×3 array, and a sixth grouping of near-infrared light sources arranged in a sixth 3×3 array. 33. The photobiomodulation therapy garment of any one of embodiments 24-32 wherein a Fp1 site is the first region of interest, a Fpz site is the second region of interest, a Fp2 site is a third region of interest, a F3 site is a fourth region of interest, a Fz site is a fifth region of interest, and a F4 site is a sixth region of interest; and the first 3×3 array is configured to emit near-infrared light directed to the Fp1 site, the second 3×3 array is configured to emit near-infrared light directed to the Fpz site, the third 3×3 array is configured to emit near-infrared light directed to the Fp2 site, the fourth 3×3 array is configured to emit near-infrared light directed to the F3 site, the fifth 3×3 array is configured to emit near-infrared light directed to the Fz site, the sixth 3×3 array is configured to emit near-infrared light directed to the F4 site. 34. The photobiomodulation therapy garment of any one of embodiments 24-33 wherein the first region of interest is one of an Fp1 site, an Fpz site, an Fp2 site, an F3 site, an Fz site, and an F4 site, a posterior cervical site, a carpal site, and an abdominal site. 35. The photobiomodulation therapy garment of any one of embodiments 24-34 wherein the second region of interest is one of an Fp1 site, an Fpz site, an Fp2 site, an F3 site, an Fz site, and an F4 site, a posterior cervical site, a carpal site, and an abdominal site. 36. 
The photobiomodulation therapy garment of any one of embodiments 24-35 further comprising a third near-infrared light source, a fourth near-infrared light source, a fifth near-infrared light source, and a sixth near-infrared light source. 37. The photobiomodulation therapy garment of any one of embodiments 24-36 wherein a Fp1 site is the first region of interest, a Fpz site is the second region of interest, a Fp2 site is a third region of interest, a F3 site is a fourth region of interest, a Fz site is a fifth region of interest, and a F4 site is a sixth region of interest; and the first near-infrared light source is configured to emit near-infrared light directed to the Fp1 site, the second near-infrared light source is configured to emit near-infrared light directed to the Fpz site, the third near-infrared light source is configured to emit near-infrared light directed to the Fp2 site, the fourth near-infrared light source is configured to emit near-infrared light directed to the F3 site, the fifth near-infrared light source is configured to emit near-infrared light directed to the Fz site, and the sixth near-infrared light source is configured to emit near-infrared light directed to the F4 site. 38. The photobiomodulation therapy garment of any one of embodiments 24-37 further comprising one or more stimulators. 39. The photobiomodulation therapy garment of embodiment 38, wherein the one or more stimulators include a component that can generate a magnetic field. Aspects of the present specification may also be described by the following embodiments: 1. A photobiomodulation therapy garment comprising: a garment configured to be donned by a user atop a skin surface, the garment comprising a first surface and a second surface opposite the first surface, the first surface being configured to face the skin surface once the garment is donned, and a photobiomodulation unit integrated within the garment, the photobiomodulation unit comprising a connection terminal, one or more near-infrared light sources, and one or more sensors, the connection terminal in electronic communication with the one or more near-infrared light sources and the one or more sensors, wherein the one or more near-infrared light sources are each configured to emit near-infrared light at a wavelength between 600 nm and 1600 nm and at a predetermined dosimetry, a controller, the controller including a processor and a memory, the controller configured to operationally engage a terminal rail of the connection terminal in a manner that establishes electronic communication between the controller and the connection terminal; wherein the first surface of the garment includes a first portion comprising one or more light openings, with each of the one or more near-infrared light sources being in operational alignment with the one or more light openings to permit proper passage of near-infrared light from the one or more near-infrared light sources therethrough, wherein the first surface of the garment includes a second portion comprising one or more sensor openings with each of the one or more sensors being in operational alignment with the one or more sensor openings to permit proper functionality of the one or more sensors therethrough, and wherein the processor and the memory are configured with executable instructions for independently controlling each of the one or more near-infrared light sources and each of the one or more sensors. 2. 
The photobiomodulation therapy garment of embodiment 1, wherein the garment is configured to wrap about or conform to a body part region, with the capability to be moved from one body part region to another body part region on the body. 3. The photobiomodulation therapy garment of embodiment 2, wherein the body part region is a head region, a neck region, a shoulder region, a torso region, a hand region, a wrist region, an arm region, a foot region, or a leg region, or any combination thereof. 4. The photobiomodulation therapy garment of embodiment 2, wherein the garment is a band, a wrap, a scarf, a shawl, a cloak, a robe, or a blanket. 5. The photobiomodulation therapy garment of embodiment 1, wherein the garment is sized and dimensioned to specifically fit a particular body part. 6. The photobiomodulation therapy garment of embodiment 5, wherein the particular body part is a head region, a neck region, a shoulder region, a torso region, a hand region, a wrist region, an arm region, a foot region, or a leg region, or any combination thereof. 7. The photobiomodulation therapy garment of embodiment 4, wherein the garment is a hat, a visor, a shirt, a pants, a sock, a glove, or an undergarment. 8. The photobiomodulation therapy garment of any one of embodiments 1-7, wherein each of the one or more near-infrared light sources is a near-infrared light emitting diode. 9. The photobiomodulation therapy garment of any one of embodiments 1-8, wherein each of the one or more sensors is configured to detect and collect information on one or more parameters of the garment, the photobiomodulation unit and components therein, the controller and components therein, and the user, and thereafter transmit the information to the controller. 10. The photobiomodulation therapy garment of embodiment 9, wherein the one or more parameters includes operational information of the garment, the photobiomodulation unit and components therein, and the controller and components therein, biometric information on the user, or any combination thereof. 11. The photobiomodulation therapy garment of any one of embodiments 1-10, wherein the executable instructions independently control each of the one or more near-infrared light sources. 12. The photobiomodulation therapy garment of embodiment 11, wherein the executable instructions control activation, duration of activation, deactivation, duration of deactivation, a pattern and timing of activation, a pattern and timing of deactivation, a fluence level, an irradiance level, a dosimetry level, a pulsed operation, a continuous operation, an operation time, a cycle duration, or any combination thereof for each of the one or more near-infrared light sources. 13. The photobiomodulation therapy garment of any one of embodiments 1-12, wherein the executable instructions independently control each of the one or more sensors. 14. The photobiomodulation therapy garment of embodiment 13, wherein the executable instructions control collection and analysis of the information of each of the one or more sensors. 15. The photobiomodulation therapy garment of any one of embodiments 1-14, wherein the one or more near-infrared light sources are a plurality of spaced apart near-infrared light sources. 16. The photobiomodulation therapy garment of embodiment 15, wherein the plurality of spaced apart near-infrared light source is between 3 and 6 near-infrared light sources. 17. 
The photobiomodulation therapy garment of embodiment 15 or 16, wherein the plurality of spaced apart near-infrared light sources is arranged in a single row. 18. The photobiomodulation therapy garment of embodiment 15 or 16, wherein the plurality of spaced apart near-infrared light sources is arranged in a plurality of rows. 19. The photobiomodulation therapy garment of embodiment 18, wherein the plurality of rows is between 2 and 6. 20. The photobiomodulation therapy garment of any one of embodiments 15-19, wherein the plurality of spaced apart near-infrared light sources is arranged in a plurality of columns. 21. The photobiomodulation therapy garment of embodiment 20, wherein the plurality of columns is between 2 and 8. 22. The photobiomodulation therapy garment of any one of embodiments 15-17, 20, or 21, wherein the plurality of near-infrared light sources are arranged in a 1×2 array, a 1×3 array, a 1×4 array, a 1×5 array, a 1×6 array, a 1×7 array, a 1×8 array of row to columns. 23. The photobiomodulation therapy garment of any one of embodiments 15-17, 20, or 21, wherein the plurality of near-infrared light sources comprise three near-infrared light sources arranged in a 1×3 array of row to columns. 24. The photobiomodulation therapy garment of any one of embodiments 17, or 20-23, wherein spacing between each of the plurality of near-infrared light sources contained in the single row is between 0.5 cm to 4 cm and the spacing between each of the plurality of near-infrared light sources contained in each of the plurality of columns is between 0.5 cm to 4 cm. 25. The photobiomodulation therapy garment of any one of embodiments 15, 16, 18-21, wherein the plurality of near-infrared light sources are arranged in a 2×2 array, a 2×3 array, a 2×4 array, a 2×5 array, a 2×6 array, a 2×7 array, a 2×8 array, 3×2 array, a 3×3 array, a 3×4 array, a 3×5 array, a 3×6 array, a 3×7 array, or a 3×8 array of rows to columns. 26. The photobiomodulation therapy garment of any one of embodiments 15, 16, 18-21, wherein the plurality of near-infrared light sources comprise six near-infrared light sources arranged in a 2×3 array of rows to columns. 27. The photobiomodulation therapy garment of any one of embodiments 15, 16, 18-21, wherein the plurality of near-infrared light sources comprise six near-infrared light sources arranged with four near-infrared light sources located in a top row and two near-infrared light sources located in a bottom row. 28. The photobiomodulation therapy garment of any one of embodiments 18-21, or 25-27, wherein spacing between each of the plurality of near-infrared light sources contained in each of the plurality of rows is between 0.5 cm to 4 cm and the spacing between each of the plurality of near-infrared light sources contained in each of the plurality of columns is between 0.5 cm to 4 cm. 29. The photobiomodulation therapy garment of any one of embodiments 15-28, wherein the plurality of near-infrared light sources are arranged in a plurality of spaced apart near-infrared light source groups, each of the plurality of near-infrared light source groups comprising a plurality of near-infrared light sources. 30. The photobiomodulation therapy garment of embodiment 29, wherein the plurality of spaced apart near-infrared light source groups is arranged in a single row. 31. The photobiomodulation therapy garment of embodiment 29, wherein the plurality of spaced apart near-infrared light source groups is arranged in a plurality of rows. 32. 
The photobiomodulation therapy garment of embodiment 31, wherein the plurality of rows is between 2 and 6. 33. The photobiomodulation therapy garment of any one of embodiments 29-32, wherein the plurality of spaced apart near-infrared light source groups is arranged in a plurality of columns. 34. The photobiomodulation therapy garment of embodiment 33, wherein the plurality of columns is between 2 and 8. 35. The photobiomodulation therapy garment of any one of embodiments 29, 30, 33, or 34, wherein the plurality of near-infrared light source groups are arranged in a 1×2 array, a 1×3 array, a 1×4 array, a 1×5 array, a 1×6 array, a 1×7 array, a 1×8 array of row to columns. 36. The photobiomodulation therapy garment of any one of embodiments 30, or 33-35, wherein spacing between each of the plurality of near-infrared light source groups contained in the single row is between 0.5 cm to 4 cm and the spacing between each of the plurality of near-infrared light source groups contained in each of the plurality of columns is between 0.5 cm to 4 cm. 37. The photobiomodulation therapy garment of any one of embodiments 29, 31-34, wherein the plurality of near-infrared light source groups are arranged in a 2×2 array, a 2×3 array, a 2×4 array, a 2×5 array, a 2×6 array, a 2×7 array, a 2×8 array, 3×2 array, a 3×3 array, a 3×4 array, a 3×5 array, a 3×6 array, a 3×7 array, or a 3×8 array of rows to columns. 38. The photobiomodulation therapy garment of any one of embodiments 31-34, or 37, wherein spacing between each of the plurality of near-infrared light source groups contained in each of the plurality of rows is between 0.5 cm to 4 cm and the spacing between each of the plurality of near-infrared light source groups contained in each of the plurality of columns is between 0.5 cm to 4 cm. 39. The photobiomodulation therapy garment of any one of embodiments 29-38, wherein the plurality of spaced apart near-infrared light sources is arranged in a single row. 40. The photobiomodulation therapy garment of any one of embodiments 29-38, wherein the plurality of spaced apart near-infrared light sources is arranged in a plurality of rows. 41. The photobiomodulation therapy garment of embodiment 40, wherein the plurality of rows is between 2 and 6. 42. The photobiomodulation therapy garment of any one of embodiments 39-41, wherein the plurality of spaced apart near-infrared light sources is arranged in a plurality of columns. 43. The photobiomodulation therapy garment of embodiment 42, wherein the plurality of columns is between 2 and 8. 44. The photobiomodulation therapy garment of any one of embodiments 39, 42, or 43, wherein the plurality of near-infrared light sources are arranged in a 1×2 array, a 1×3 array, a 1×4 array, a 1×5 array, a 1×6 array, a 1×7 array, a 1×8 array of row to columns. 45. The photobiomodulation therapy garment of any one of embodiments 39, or 42-44, wherein spacing between each of the plurality of near-infrared light sources contained in the single row is between 1 mm to 4 mm and the spacing between each of the plurality of near-infrared light sources contained in each of the plurality of columns is between 1 mm to 4 mm. 46. The photobiomodulation therapy garment of any one of embodiments 40-43, wherein the plurality of near-infrared light sources are arranged in a 2×2 array, a 2×3 array, a 2×4 array, a 2×5 array, a 2×6 array, a 2×7 array, a 2×8 array, 3×2 array, a 3×3 array, a 3×4 array, a 3×5 array, a 3×6 array, a 3×7 array, or a 3×8 array of rows to columns. 47. 
The photobiomodulation therapy garment of any one of embodiments 40-43, wherein the plurality of near-infrared light sources comprise nine near-infrared light sources arranged in a 3×3 array of rows to columns. 48. The photobiomodulation therapy garment of any one of embodiments 40-43, 46, or 47, wherein spacing between each of the plurality of near-infrared light sources contained in each of the plurality of rows is between 1 mm to 4 mm and the spacing between each of the plurality of near-infrared light sources contained in each of the plurality of columns is between 1 mm to 4 mm. 49. The photobiomodulation therapy garment of any one of embodiments 1-48 further comprising one or more stimulators. 50. The photobiomodulation therapy garment of embodiment 49, wherein the one or more stimulators include a component that can generate a magnetic field. 51. The photobiomodulation therapy garment of any one of embodiments 1-50, wherein the skin surface comprises a forehead site, a posterior cervical site, a carpal site, an abdominal site, or any combination thereof. 52. The photobiomodulation therapy garment of embodiment 51, wherein the forehead site comprises a dorsolateral prefrontal cortex region, a frontal eye fields region, or both. 53. The photobiomodulation therapy garment of embodiment 51, wherein the forehead site comprises an Fp1 site, an Fpz site, an Fp2 site, an F3 site, an Fz site, an F4 site, or any combination thereof. EXAMPLES The following non-limiting examples are provided for illustrative purposes only in order to facilitate a more complete understanding of representative embodiments now contemplated. These examples should not be construed to limit any of the embodiments described in the present specification, including those pertaining to a photobiomodulation therapy garment, or methods and uses disclosed herein. Example 1 Photobiomodulation Therapy Garment In one example arrangement, transcranial photobiomodulation therapy garment 20, specifically photobiomodulation therapy headband 22, has six infrared light source intergroups arranged in two rows with three intergroups in each row. The infrared light source intergroups are configured on photobiomodulation therapy headband 22 in a manner where each intergroup at least partially overlays or is substantially centered over sites Fp1 300, Fpz 302, Fp2 304, F3 306, Fz 308, and F4 310. The estimated total area of skin surface S and tissue beneath exposed to the near-infrared light is about 5.3 cm2 to about 5.7 cm2 and provides a photobiomodulation therapy to the dorsolateral prefrontal cortex (dlPFC) and frontal eye fields (FEF). Each infrared light source intergroup has nine LEDs in a 3×3 rectangular array. Each LED has about 55 mW of power, with peak optical output being about 99 mW, and emits infrared light having an average wavelength of 800 nm to about 850 nm and a pulse wave of 40 Hz. The average irradiance over the treatment area is about 16 mW/cm2 to about 20 mW/cm2, with areas of maximum irradiance potentially up to about 240 mW/cm2 to about 365 mW/cm2. The average fluence over the treatment area is about 40 J/cm2 to about 45 J/cm2, with areas of maximum fluence potentially up to about 665 J/cm2 to about 998 J/cm2. The total energy incident during the treatment session is about 2.0 kJ to about 2.5 kJ. Controller 200 operates the LEDs continuously (not pulsed) for 10 minutes to 25 minutes. 
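The dosimetry values recited in this and the following examples are related by the standard photometric relationships: fluence (J/cm2) equals irradiance (W/cm2) multiplied by exposure time (seconds), and total incident energy equals fluence multiplied by the treated area. A minimal sketch of that arithmetic follows, using for illustration the continuous-mode parameters recited in the treatment studies below (approximately 18 mW/cm2 average irradiance, a 40-minute session, and a 55 cm2 treatment window); the function names are hypothetical, and the sketch assumes the average irradiance applies uniformly over the treatment window.

    # Minimal dosimetry sketch using standard photometric relationships only.
    # Illustrative inputs follow the treatment studies described below
    # (approximately 18 mW/cm2, 40 minutes, 55 cm2); function names are hypothetical.

    def fluence_j_per_cm2(irradiance_mw_per_cm2: float, minutes: float) -> float:
        """Fluence (J/cm2) = irradiance (W/cm2) x exposure time (s)."""
        return (irradiance_mw_per_cm2 / 1000.0) * (minutes * 60.0)

    def total_energy_kj(irradiance_mw_per_cm2: float, minutes: float, area_cm2: float) -> float:
        """Total incident energy (kJ) = fluence (J/cm2) x treated area (cm2) / 1000."""
        return fluence_j_per_cm2(irradiance_mw_per_cm2, minutes) * area_cm2 / 1000.0

    if __name__ == "__main__":
        # 18 mW/cm2 for 40 minutes gives about 43 J/cm2; over 55 cm2, about 2.4 kJ.
        print(round(fluence_j_per_cm2(18, 40), 1))    # 43.2
        print(round(total_energy_kj(18, 40, 55), 2))  # 2.38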
In an alternative configuration, one or more of the six infrared light source intergroups of photobiomodulation therapy headband 22 has a combination of both high-powered and low-powered infrared light sources 170. For example, the upper left and upper right infrared light source intergroups can have the center infrared light source 170 of the 3×3 array be a high-powered infrared light source and the remaining infrared light sources 170 being low-powered infrared light sources. In an alternative configuration, photobiomodulation therapy headband 22 exhibits an average irradiance over the treatment area of about 31 mW/cm2 to about 35 mW/cm2, with areas of maximum irradiance potentially up to about 445 mW/cm2 to about 670 mW/cm2. In addition, the average fluence over the treatment area is about 38 J/cm2 to about 60 J/cm2, with areas of maximum fluence potentially up to about 665 J/cm2 to about 1,005 J/cm2. The total energy incident during the treatment session is about 2.0 kJ to about 5.0 kJ. Example 2 Photobiomodulation Therapy Garment In another example arrangement, transcranial photobiomodulation therapy garment 20, specifically photobiomodulation therapy headband 22, has three infrared light source intergroups arranged in one row. The infrared light source intergroups are configured on photobiomodulation therapy headband 22 in a manner where each intergroup at least partially overlays or is substantially centered over sites Fp1 300, Fpz 302, and Fp2 304. The estimated total area of skin surface S and tissue beneath exposed to the near-infrared light is about 2.8 cm2 to about 3.3 cm2 and provides a photobiomodulation therapy to the frontal eye fields (FEF). Each infrared light source intergroup has nine low-powered LEDs in a 3×3 rectangular array. Each LED has about 55 mW of power, with peak optical output being about 99 mW, and emits infrared light having an average wavelength of 800 nm to about 850 nm and a pulse wave of 40 Hz (range 0 Hz to 100 Hz). The average irradiance over the treatment area is about 31 mW/cm2 to about 35 mW/cm2, with areas of maximum irradiance potentially up to about 80 mW/cm2 to about 105 mW/cm2. The average fluence over the treatment area is about 58 J/cm2 to about 63 J/cm2, with areas of maximum fluence potentially up to about 145 J/cm2 to about 185 J/cm2. The total energy incident during the treatment session is about 1.2 kJ to about 3.0 kJ. Controller 200 operates the LEDs continuously (not pulsed) for 10 minutes to 40 minutes. In an alternative configuration, controller 200 operates the LEDs in a pulsed operation at 40 Hz and a 50% duty cycle (variable range being 5% to 100%) for 30 minutes to about 40 minutes. Average irradiance, average areas of maximum irradiance, and average fluence are as described above, with peak irradiance being about 66 mW/cm2 to about 67 mW/cm2, peak areas of maximum irradiance potentially up to about 160 mW/cm2 to about 205 mW/cm2, and maximum fluence over the treatment area being potentially up to about 145 J/cm2 to about 185 J/cm2. In an alternative configuration, controller 200 operates the LEDs in a pulsed operation at 40 Hz and a 33% duty cycle (variable range being 5% to 100%) for 30 minutes to about 40 minutes. 
Average irradiance, average areas of maximum irradiance, and average fluence are as described above, with peak irradiance being about 99 mW/cm2 to about 101 mW/cm2, peak areas of maximum irradiance potentially up to about 240 mW/cm2 to about 310 mW/cm2, and maximum fluence over the treatment area being potentially up to about 145 J/cm2 to about 185 J/cm2. The total energy incident during the treatment session is approximately 2.3 kJ. In an alternative configuration, controller 200 operates the LEDs in a pulsed operation at 10 Hz or 40 Hz and a 20% duty cycle (variable range being 5% to 100%) for 30 minutes to about 40 minutes. Average irradiance, average areas of maximum irradiance, and average fluence are as described above, with peak irradiance being about 165 mW/cm2 to about 167 mW/cm2, peak areas of maximum irradiance potentially up to about 405 mW/cm2 to about 510 mW/cm2, and maximum fluence over the treatment area being potentially up to about 145 J/cm2 to about 185 J/cm2. The total energy incident during the treatment session is approximately 2.3 kJ. Example 3 Photobiomodulation Therapy Garment In another example arrangement, transcranial photobiomodulation therapy garment 20, specifically photobiomodulation therapy headband 22, has six infrared light source intergroups arranged in two rows, with four intergroups in the top row and two intergroups in the bottom row, organized as two inverse triangles. The infrared light source intergroups are configured on photobiomodulation therapy headband 22 in a manner where one inverse triangle arrangement at least partially overlays or is substantially centered over sites F3 306, Fz 308, and Fp1 300 and the other inverse triangle arrangement at least partially overlays or is substantially centered over sites Fz 308, F4 310, and Fp2 304. The estimated total area of skin surface S and tissue beneath exposed to the near-infrared light is about 7.5 cm2 to about 9 cm2 (each inverse triangle arrangement covering about 3.75 cm2 to about 4.5 cm2) and provides a photobiomodulation therapy to the dorsolateral prefrontal cortex (dlPFC). Each infrared light source intergroup has one high-powered LED. Each LED has 500 mW of power, with peak optical output being 500 mW to 1,000 mW, and emits infrared light having an average wavelength of 800 nm to about 850 nm and a pulse wave of between 10 Hz and about 40 Hz (range of 0 Hz to 5,000 Hz). The average irradiance over the treatment area is about 50 mW/cm2 to about 300 mW/cm2, with areas of maximum irradiance potentially up to about 500 mW/cm2 to about 1,000 mW/cm2. The average fluence over the treatment area is about 40 J/cm2 to about 120 J/cm2, with areas of maximum fluence potentially up to about 450 J/cm2 to about 1,025 J/cm2. The total energy incident during the treatment session is about 0.4 kJ to about 2.1 kJ. Controller 200 operates the LEDs in a pulsed operation at between about 10 Hz and about 40 Hz and a 20% duty cycle (variable range being 5% to 100%) for 10 minutes to about 40 minutes. In an alternative configuration, each LED has 500 mW of power and emits infrared light having an average wavelength of 960 nm to about 1,100 nm and a pulse wave of between 0 Hz and about 100 Hz and potentially up to 5,000 Hz. 
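For the pulsed configurations described in Examples 2 and 3, the peak (on-phase) irradiance scales inversely with the duty cycle while the time-averaged irradiance stays fixed; for example, an average irradiance of about 33 mW/cm2 corresponds to a peak of about 66 mW/cm2 at a 50% duty cycle, about 100 mW/cm2 at 33%, and about 165 mW/cm2 at 20%, consistent with the values recited above. The following is a minimal sketch of that relationship; the function name is hypothetical.

    # Peak versus average irradiance for a pulsed light source at a given duty cycle.
    # peak = average / duty_cycle; illustrative values mirror the pulsed
    # configurations of Example 2 (50%, 33%, and 20% duty cycles).

    def peak_irradiance_mw_per_cm2(average_mw_per_cm2: float, duty_cycle: float) -> float:
        """Peak (on-phase) irradiance for a pulsed source; duty_cycle is in (0, 1]."""
        if not 0.0 < duty_cycle <= 1.0:
            raise ValueError("duty cycle must be in (0, 1]")
        return average_mw_per_cm2 / duty_cycle

    if __name__ == "__main__":
        for duty in (0.50, 0.33, 0.20):
            print(duty, round(peak_irradiance_mw_per_cm2(33, duty), 1))  # 66.0, 100.0, 165.0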
Example 4 Photobiomodulation Therapy Garment In another example arrangement, transcranial photobiomodulation therapy garment 20, specifically photobiomodulation therapy headband 22, has five infrared light source intergroups arranged in two rows, with three intergroups in the top row and two intergroups in the bottom row, organized in a manner where each intergroup in the bottom row is located below one of the outside intergroups of the top row. The infrared light source intergroups are configured on photobiomodulation therapy headband 22 in a manner where the infrared light source intergroups in the top row at least partially overlay or are substantially centered over sites F3 306, Fz 308, and F4 310, one of the intergroups in the bottom row at least partially overlays or is substantially centered over site Fp1 300, and the other intergroup in the bottom row at least partially overlays or is substantially centered over site Fp2 304. The estimated total area of skin surface S and tissue beneath exposed to the near-infrared light is about 7.5 cm2 to about 8 cm2 and provides a photobiomodulation therapy to the dorsolateral prefrontal cortex (dlPFC) and the frontal eye fields (FEF). Each infrared light source intergroup has one high-powered LED. Each LED has 500 mW of power, with peak optical output being 500 mW to 1,000 mW, and emits infrared light having an average wavelength of 800 nm to about 850 nm and a pulse wave of between 10 Hz and about 40 Hz (having an adjustable range of 0 Hz to 5,000 Hz). The average irradiance over the treatment area is about 50 mW/cm2 to about 300 mW/cm2, with areas of maximum irradiance potentially up to about 500 mW/cm2 to about 1,000 mW/cm2. The average fluence over the treatment area is about 40 J/cm2 to about 120 J/cm2, with areas of maximum fluence potentially up to about 450 J/cm2 to about 1,025 J/cm2. The total energy incident during the treatment session is about 0.4 kJ to about 2.1 kJ. Controller 200 operates the LEDs in a pulsed operation at between about 10 Hz and about 40 Hz and a 20% duty cycle (variable range being 5% to 100%) for 10 minutes to about 40 minutes. In an alternative configuration, each LED has 500 mW of power and emits infrared light having an average wavelength of 960 nm to about 1,100 nm and a pulse wave of between 0 Hz and about 100 Hz and potentially up to 5,000 Hz. Example 5 Photobiomodulation Therapy Garment In another example arrangement, transcranial photobiomodulation therapy garment 20, specifically photobiomodulation therapy headband 22, has three infrared light source intergroups arranged in one row. The infrared light source intergroups are configured on photobiomodulation therapy headband 22 in a manner where each intergroup at least partially overlays or is substantially centered over sites F3 306, Fz 308, and F4 310. The estimated total area of skin surface S and tissue beneath exposed to the near-infrared light is about 2.8 cm2 to about 3.3 cm2 and provides a photobiomodulation therapy to the dorsolateral prefrontal cortex (dlPFC). Each infrared light source intergroup has one high-powered LED. Each LED has 500 mW of power, with peak optical output being 500 mW to 1,000 mW, and emits infrared light having an average wavelength of 800 nm to about 850 nm and a pulse wave of between 10 Hz and about 40 Hz. The average irradiance over the treatment area is about 50 mW/cm2 to about 300 mW/cm2, with areas of maximum irradiance potentially up to about 500 mW/cm2 to about 1,000 mW/cm2. 
The average fluence over the treatment area is about 6 J/cm2 to about 12 J/cm2, with areas of maximum fluence potentially up to about 450 J/cm2 to about 1,025 J/cm2. The total energy incident during the treatment session is about 0.15 kJ to about 1.8 kJ. Controller 200 operates the LEDs in a pulsed operation at between about 10 Hz and about 40 Hz and a 20% duty cycle (variable range being 5% to 100%) for 10 minutes to about 40 minutes. In an alternative configuration, each LED has 500 mW of power and emits infrared light having an average wavelength of 860 nm to about 1,100 nm and a pulse wave of between 0 Hz and about 100 Hz and potentially up to 5,000 Hz. Example 6 tPBM Treatment Increases Functional Connectivity of Neurons A research study was conducted to assess the neuronal connectivity effects of a transcranial photobiomodulation (tPBM) treatment using a photobiomodulation therapy garment disclosed herein. Each participant underwent an EEG analysis for 8 minutes before a tPBM treatment in order to establish a baseline. Each participant was then administered tPBM treatments using a photobiomodulation therapy garment disclosed herein. Each tPBM treatment was bilateral and applied to the frontal areas with two application sites on the left side, two on the right side, and two on the midline [left, right, and center forehead on the frontal EEG sites F3, Fp1, F4, Fp2 and Fz, Fpz]. Once accurate placement was ensured, a tPBM treatment was initiated by a button press on a specific phone application to activate the probes delivering the LED light. The duration of irradiation was 40 min per treatment. The tPBM treatment followed these specifications: the energy was administered with a radiation wavelength of 850 nm, the irradiance (IR) was 18 mW/cm2; the fluence was up to 43 Joules/cm2; the energy delivered per session was up to 2.4 kJ; and each treatment window area was 55 cm2. After completion of the tPBM treatment, each participant underwent a second EEG analysis for 8 minutes. The results of this research study showed that participants exhibited increased functional connectivity of their neurons (as measured with EEG activity) compared to sham. For example, FIGS. 17A-C show the results of five (5) participants. Scans of the EEG analysis conducted after the tPBM treatment exhibit focused points of light (FIG. 17B) as compared to scans taken before the tPBM treatment (FIG. 17A). These differences are further underscored by FIG. 17C, which illustrates the focused light of the before and after scans. These findings are indicative of improved connections between neurons. Increased functional connectivity allows neurons to transmit information faster and more accurately. The results were reproducible and not evident with the sham. Example 7 tPBM Treatment Increases Brain Activity A research study was conducted to assess the brain activity effects of a transcranial photobiomodulation (tPBM) treatment using a photobiomodulation therapy garment disclosed herein. Each participant underwent an EEG analysis for 8 minutes before a tPBM treatment in order to establish a baseline. Each participant was then administered tPBM treatments using a photobiomodulation therapy garment disclosed herein. Each tPBM treatment was bilateral and applied to the frontal areas with two application sites on the left side, two on the right side, and two on the midline [left, right, and center forehead on the frontal EEG sites F3, Fp1, F4, Fp2 and Fz, Fpz]. 
Once accurate placement was ensured, a tPBM treatment was initiated by a button press on a specific phone application to activate the probes delivering the LED light. The duration of irradiation was 40 min per treatment. The tPBM treatment followed these specifications: the energy was administered with a radiation wavelength of 850 nm, the irradiance (IR) was 18 mW/cm2; the fluence was up to 43 Joules/cm2; the energy delivered per session was up to 2.4 kJ; and each treatment window area was 55 cm2. After completion of the tPBM treatment, each participant underwent a second EEG analysis for 8 minutes. The results of this research study showed that participants exhibited increased brain gamma oscillations during a 40 Hz pulse wave therapy compared to sham. For example, FIGS. 18A-18B show a representative result from one participant. As shown in FIG. 18A by the shaded block, there was a significant increase of over 35% in brain gamma oscillations at a frequency of 40 Hz. In addition, as shown in FIG. 18B, there is a significant peak of gamma power between about 250 seconds and about 350 seconds, at which point gamma power levels decline but are maintained at a higher level as compared to baseline gamma power levels. These findings are indicative of stimulation of brain gamma waves, which underlie many cognitive operations, including perception. Increased brain gamma wave stimulation promotes brain activity to transmit information faster and more accurately. The results were reproducible and not evident with the sham. Example 8 tPBM Treatment for Depression in Adults An 8-week open-label pilot clinical study was conducted to assess the safety and efficacy of a tPBM treatment using a photobiomodulation therapy garment disclosed herein in adults with active depressive symptoms. The study enrolled 19 participants clinically diagnosed with moderate to severe depressive symptoms according to the Beck's Depression Inventory (BDI, baseline score of 25). Participants were administered tPBM treatments twice daily at home for 8 weeks using a photobiomodulation therapy garment disclosed herein. Each tPBM treatment was bilateral and applied to the frontal areas with two application sites on the left side, two on the right side, and two on the midline [left, right, and center forehead on the frontal EEG sites F3, Fp1, F4, Fp2 and Fz, Fpz]. Once accurate placement was ensured, a tPBM treatment was initiated by a button press on a specific phone application to activate the probes delivering the LED light. The duration of irradiation was 40 min per treatment. The tPBM treatment followed these specifications: the energy was administered with a radiation wavelength of 850 nm, the irradiance (IR) was 18 mW/cm2; the fluence was up to 43 Joules/cm2; the energy delivered per session was up to 2.4 kJ; and each treatment window area was 55 cm2. At the end of the 8-week clinical study of tPBM treatment of depression using a photobiomodulation therapy garment disclosed herein, investigators detected a significant reduction in depressive symptoms among participants. For example, participants experienced a 43% decrease in depressive symptoms at week 8 as assessed by the Beck's Depression Inventory. The finding was a statistically significant change from baseline (significance p=0.001). Interestingly, the improvement was maintained for at least 4 weeks after stopping the tPBM treatment. 
In fact, at week 12 the investigators still detected an average decrease of 48% in depressive symptoms, compared to baseline, as assessed by the Beck's Depression Inventory. The finding was also a significant change from baseline (significance p<0.0001). Subsequent analyses revealed that the improvements in depression were at least partially explained by improvement in sleep quality. Example 9 tPBM Treatment for Pediatric Depression An 8-week open-label pilot clinical study will be conducted to assess the safety and efficacy of a tPBM treatment using a photobiomodulation therapy garment disclosed herein in children with active depressive symptoms as assessed through the Child Behavior Checklist (CBCL). The study will enroll 20-30 participants, ages 6 to 17 years, who currently experience a CBCL T score of 60 or higher on the Anxious/Depressed scale. Each participant will be clinically assessed by completing a series of clinical intake questionnaires and scales, including 1) the CBCL, a parent-report questionnaire that evaluates maladaptive behavioral and emotional problems, both internalizing and externalizing, in children ages 6-18; 2) the Pediatric Quality Of Life Enjoyment and Satisfaction Questionnaire (PQ-LES-Q), a 15-question parent-report form designed to help assess the degree of enjoyment and satisfaction the child is experiencing during the past week; 3) the Behavior Rating Inventory of Executive Function-Parent Report (BRIEF-P), a 78-item rating scale to assess level of executive function deficits; and 4) the Social Responsiveness Scale (SRS), a 65-item rating scale completed by the parent used to measure social deficits as they occur in natural settings. Participants will be administered daily tPBM treatments for 8 weeks. The tPBM treatment, using a photobiomodulation therapy garment disclosed herein, will be bilateral and applied to the frontal areas with two application sites on the left side, two on the right side, and two on the midline [left, right, and center forehead on the frontal EEG sites F3, Fp1, F4, Fp2 and Fz, Fpz]. Once accurate placement is ensured, a tPBM treatment will be initiated by a button press on a specific phone application to activate the probes delivering the LED light. The duration of irradiation will start at 10 min per treatment for the first week (days 1-7), increase to 20 min per treatment during the second week of treatment (days 7-14) and to 30 min per treatment at week 3 (days 14-21) of treatment. If side effects prevent an increase (or if a treatment response has already occurred), a lower dose will be kept in order to ensure good tolerability and treatment adherence. At day 21, the clinician will recommend 40 min of daily treatment if no improvement has occurred in the context of good tolerability. The tPBM treatment will follow these specifications: the energy will be administered with a radiation wavelength of 850 nm, the irradiance (IR) will be 18 mW/cm2; the fluence will be up to 43 Joules/cm2; the energy delivered per session will be up to 2.4 kJ; and each treatment window area will be 55 cm2. Subjects will be evaluated at weekly intervals for the first four weeks, and biweekly thereafter. At each visit, measures of safety and efficacy will be obtained using assessments of psychiatric symptoms and functioning and measures of adverse effects. At the midpoint (end of week 4) and final study visits (week 8 or Endpoint), additional clinician- and subject-rated assessments will be completed. 
Response to treatment will be assessed by the following assessment measures: 1) a clinician-completed Depression Specific Clinical Global Impression (CGI-Depression), including the Clinical Global Severity (CGI-S), Clinical Global Improvement (CGI-I), and CGI-Efficacy Index (CGI-EI) Scale, will be completed by the physician at every visit; 2) an Affective Reactivity Index-Parent Report (ARI-P), a concise 7-question parent-report form assessing irritability and temper, will be completed by the parent at week 0 (baseline), week 4 and week 8; 3) a Childhood Anxiety Sensitivity Index (CASI-Anx), a 38-item scale that assesses symptoms of anxiety, will be completed by the parent at week 0 (baseline), week 4 and week 8; and 4) a Children's Depression Inventory (CDI), a 27-item scale that assesses symptoms of depression, will be completed by the parent at week 0 (baseline), week 4 and week 8. The results are expected to show that a tPBM treatment will be safe and effective in reducing symptoms of pediatric depression. Example 10 tPBM Treatment of Autistic Traits in Children with Attention Deficit Hyperactivity Disorder (ADHD) A 10-week open-label pilot clinical study will be conducted to assess the tolerability, safety, and efficacy of a tPBM treatment using a photobiomodulation therapy garment disclosed herein in children diagnosed with ADHD who also present with at least a moderate level of autistic traits. The study will enroll 90-100 participants, ages 9 to 17 years, who fulfill the DSM-5 diagnostic criteria for ADHD and present with moderately severe autistic spectrum disorder symptoms as established by a Social Responsiveness Scale, 2nd Edition (SRS-2) raw score of 75 or higher or a Clinical Global Impressions—Autistic Traits (CGI-AT) severity score of 4 or higher. Each participant will be clinically assessed by a board-certified clinician for ADHD and autism traits, and each participant's parent/guardian will be administered an assessment battery including a brief demographic interview and the Autism Trait Specific Clinical Global Impression (CGI-AT), including the Clinical Global Severity (CGI-S), Clinical Global Improvement (CGI-I), and CGI-Efficacy Index (CGI-EI) Scale, the Behavior Rating Inventory of Executive Function-Parent Version (BRIEF-P), the Child Behavior Checklist (CBCL), the Clinician-Rated Treatment Emergent Adverse Events Log (CTAE), the Global Assessment of Functioning Scale (GAF), the Massachusetts General Hospital Social-Emotional Competence Scale (MGH-SECS) questionnaires, including the MGH-SECS-Informant Rated (MGH-SECS-I) and MGH-SECS Clinician Rated (MGH-SECS-C), the MGH Autism Spectrum Disorder DSM-5 Diagnostic Symptom Checklist (MGH-ASD-SCL), and the SRS-2 questionnaires. Participants will be administered daily tPBM treatments for 8 weeks and a post-study follow-up will occur at week 10. The tPBM treatment, using a photobiomodulation therapy garment disclosed herein, will be bilateral and applied to the frontal areas with two application sites on the left side, two on the right side and two on the midline [left, right and center forehead on the frontal EEG sites on F3, Fp1, F4, Fp2 and Fz, Fpz]. Once accurate placement is ensured, a tPBM treatment will be initiated by a button press on a specific phone application to activate the probes delivering the LED light.
The duration of irradiation will start at 10 min per treatment for the first week (days 1-7), increase to 20 min per treatment during the second week of treatment (days 7-14) and to 30 min per treatment at week 3 (days 14-21) of treatment. If side effects prevent this increase (or if a treatment response has already occurred), the lower dose will be kept in order to ensure good tolerability and treatment adherence. At day 21, the clinician will recommend 40 min of daily treatment if no improvement has occurred in the context of good tolerability. The tPBM treatment will follow these specifications: the energy will be administered with a radiation wavelength of 850 nm; the irradiance (IR) will be 18 mW/cm2; the fluence will be up to 43 Joules/cm2; the energy delivered per session will be up to 2.4 kJ; and each treatment window area will be 55 cm2. Subjects will be evaluated at weekly intervals for the first four weeks, and biweekly thereafter. At each visit, measures of safety and efficacy will be obtained using assessments of psychiatric symptoms and functioning and measures of adverse effects. At the midpoint (end of week 4) and final study visits (week 8 or Endpoint), additional clinician- and subject-rated assessments will be completed. Response to treatment will be assessed by the following assessment measures: 1) a CGI-AT, including the CGI-S, CGI-I, and CGI-EI Scale, will be completed by the physician at weeks 0 (baseline), 1, 2, 3, 4, 6, and 8; 2) a GAF and CTAE will be completed by the physician at weeks 0 (baseline), 1, 2, 3, 4, 6, and 8; 3) an Attention Deficit Hyperactivity Disorder Symptom Checklist (ADHD-SC) will be completed by the physician at weeks 0 (baseline), 4, and 8; 4) a tPBM Self-Report Questionnaire (TSRQ) will be completed by the parent/guardian at weeks 1, 2, 3, 4, 6, and 8; 5) an SRS-2 and CBCL will be completed by the physician at weeks 4 and 8; 6) a BRIEF-P and MGH-SECS-I will be completed by the parent/guardian at week 8; and 7) an MGH-SECS-C will be completed by the physician at week 8. At week 10, each participant will be assessed by the CGI-AT, including the CGI-S, CGI-I, and CGI-EI Scale, GAF, CTAE, SRS-2, ADHD-SC, and TSRQ. The results are expected to show that a tPBM treatment will be safe and effective in reducing autistic traits in children diagnosed with ADHD. In closing, the foregoing descriptions of embodiments of the present invention have been presented for the purposes of illustration and description. It is to be understood that, although aspects of the present invention are highlighted by referring to specific embodiments, one skilled in the art will readily appreciate that these described embodiments are only illustrative of the principles comprising the present invention. As such, the specific embodiments are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Therefore, it should be understood that embodiments of the disclosed subject matter are in no way limited to a particular element, compound, composition, component, article, apparatus, methodology, use, protocol, step, and/or limitation described herein, unless expressly stated as such. In addition, groupings of alternative embodiments, elements, steps and/or limitations of the present invention are not to be construed as limitations. Each such grouping may be referred to and claimed individually or in any combination with other groupings disclosed herein.
It is anticipated that one or more alternative embodiments, elements, steps and/or limitations of a grouping may be included in, or deleted from, the grouping for reasons of convenience and/or patentability. When any such inclusion or deletion occurs, the specification is deemed to contain the grouping as modified, thus fulfilling the written description of all Markush groups used in the appended claims. Furthermore, those of ordinary skill in the art will recognize that certain changes, modifications, permutations, alterations, additions, subtractions and sub-combinations thereof can be made in accordance with the teachings herein without departing from the spirit of the present invention. Furthermore, it is intended that the following appended claims and claims hereafter introduced are interpreted to include all such changes, modifications, permutations, alterations, additions, subtractions and sub-combinations as are within their true spirit and scope. Accordingly, the scope of the present invention is not to be limited to that precisely as shown and described by this specification. Certain embodiments of the present invention are described herein, including the best mode known to the inventors for carrying out the invention. Of course, variations on these described embodiments will become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventor expects skilled artisans to employ such variations as appropriate, and the inventors intend for the present invention to be practiced otherwise than specifically described herein. Accordingly, this invention includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described embodiments in all possible variations thereof is encompassed by the invention unless otherwise indicated herein or otherwise clearly contradicted by context. The words, language, and terminology used in this specification is for the purpose of describing particular embodiments, elements, steps and/or limitations only and is not intended to limit the scope of the present invention, which is defined solely by the claims. In addition, such words, language, and terminology are to be understood not only in the sense of their commonly defined meanings, but to include by special definition in this specification structure, material or acts beyond the scope of the commonly defined meanings. Thus, if an element, step or limitation can be understood in the context of this specification as including more than one meaning, then its use in a claim must be understood as being generic to all possible meanings supported by the specification and by the word itself. The definitions and meanings of the elements, steps or limitations recited in a claim set forth below are, therefore, defined in this specification to include not only the combination of elements, steps or limitations which are literally set forth, but all equivalent structure, material or acts for performing substantially the same function in substantially the same way to obtain substantially the same result. In this sense it is therefore contemplated that an equivalent substitution of two or more elements, steps or limitations may be made for any one of the elements, steps or limitations in a claim set forth below or that a single element, step or limitation may be substituted for two or more elements, steps or limitations in such a claim. 
Although elements, steps or limitations may be described above as acting in certain combinations and even initially claimed as such, it is to be expressly understood that one or more elements, steps or limitations from a claimed combination can in some cases be excised from the combination and that the claimed combination may be directed to a sub-combination or variation of a sub-combination. As such, notwithstanding the fact that the elements, steps and/or limitations of a claim are set forth below in a certain combination, it must be expressly understood that the invention includes other combinations of fewer, more or different elements, steps and/or limitations, which are disclosed in above even when not initially claimed in such combinations. Furthermore, insubstantial changes from the claimed subject matter as viewed by a person with ordinary skill in the art, now known or later devised, are expressly contemplated as being equivalently within the scope of the claims. Therefore, obvious substitutions now or later known to one with ordinary skill in the art are defined to be within the scope of the defined elements. Accordingly, the claims are thus to be understood to include what is specifically illustrated and described above, what is conceptually equivalent, what can be obviously substituted and also what essentially incorporates the essential idea of the invention. Unless otherwise indicated, all numbers expressing a characteristic, item, quantity, parameter, property, term, and so forth used in the present specification and claims are to be understood as being modified in all instances by the term “about.” As used herein, the term “about” means that the characteristic, item, quantity, parameter, property, or term so qualified encompasses a range of plus or minus ten percent above and below the value of the stated characteristic, item, quantity, parameter, property, or term. Accordingly, unless indicated to the contrary, the numerical parameters set forth in the specification and attached claims are approximations that may vary. For instance, as mass spectrometry instruments can vary slightly in determining the mass of a given analyte, the term “about” in the context of the mass of an ion or the mass/charge ratio of an ion refers to +/−0.50 atomic mass unit. At the very least, and not as an attempt to limit the application of the doctrine of equivalents to the scope of the claims, each numerical indication should at least be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and values setting forth the broad scope of the invention are approximations, the numerical ranges and values set forth in the specific examples are reported as precisely as possible. Any numerical range or value, however, inherently contains certain errors necessarily resulting from the standard deviation found in their respective testing measurements. Recitation of numerical ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate numerical value falling within the range. Unless otherwise indicated herein, each individual value of a numerical range is incorporated into the present specification as if it were individually recited herein. 
Use of the terms “may” or “can” in reference to an embodiment or aspect of an embodiment also carries with it the alternative meaning of “may not” or “cannot.” As such, if the present specification discloses that an embodiment or an aspect of an embodiment may be or can be included as part of the inventive subject matter, then the negative limitation or exclusionary proviso is also explicitly meant, meaning that an embodiment or an aspect of an embodiment may not be or cannot be included as part of the inventive subject matter. In a similar manner, use of the term “optionally” in reference to an embodiment or aspect of an embodiment means that such embodiment or aspect of the embodiment may be included as part of the inventive subject matter or may not be included as part of the inventive subject matter. Whether such a negative limitation or exclusionary proviso applies will be based on whether the negative limitation or exclusionary proviso is recited in the claimed subject matter. The terms “a,” “an,” “the” and similar references used in the context of describing the present invention (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. Further, ordinal indicators—such as, e.g., “first,” “second,” “third,” etc.—for identified elements are used to distinguish between the elements, and do not indicate or imply a required or limited number of such elements, and do not indicate a particular position or order of such elements unless otherwise specifically stated. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples or exemplary language (e.g., “such as”) provided herein is intended merely to better illuminate the present invention and does not pose a limitation on the scope of the invention otherwise claimed. No language in the present specification should be construed as indicating any non-claimed element essential to the practice of the invention. When used in the claims, whether as filed or added per amendment, the open-ended transitional term “comprising”, variations thereof such as, e.g., “comprise” and “comprises”, and equivalent open-ended transitional phrases thereof like “including,” “containing” and “having”, encompass all the expressly recited elements, limitations, steps, integers, and/or features alone or in combination with unrecited subject matter; the named elements, limitations, steps, integers, and/or features are essential, but other unnamed elements, limitations, steps, integers, and/or features may be added and still form a construct within the scope of the claim. Specific embodiments disclosed herein may be further limited in the claims using the closed-ended transitional phrases “consisting of” or “consisting essentially of” (or variations thereof such as, e.g., “consist of”, “consists of”, “consist essentially of”, and “consists essentially of”) in lieu of or as an amendment for “comprising.” When used in the claims, whether as filed or added per amendment, the closed-ended transitional phrase “consisting of” excludes any element, limitation, step, integer, or feature not expressly recited in the claims. 
The closed-ended transitional phrase “consisting essentially of” limits the scope of a claim to the expressly recited elements, limitations, steps, integers, and/or features and any other elements, limitations, steps, integers, and/or features that do not materially affect the basic and novel characteristic(s) of the claimed subject matter. Thus, the meaning of the open-ended transitional phrase “comprising” is being defined as encompassing all the specifically recited elements, limitations, steps and/or features as well as any optional, additional unspecified ones. The meaning of the closed-ended transitional phrase “consisting of” is being defined as only including those elements, limitations, steps, integers, and/or features specifically recited in the claim, whereas the meaning of the closed-ended transitional phrase “consisting essentially of” is being defined as only including those elements, limitations, steps, integers, and/or features specifically recited in the claim and those elements, limitations, steps, integers, and/or features that do not materially affect the basic and novel characteristic(s) of the claimed subject matter. Therefore, the open-ended transitional phrase “comprising” (and equivalent open-ended transitional phrases thereof) includes within its meaning, as a limiting case, claimed subject matter specified by the closed-ended transitional phrases “consisting of” or “consisting essentially of.” As such, the embodiments described herein or so claimed with the phrase “comprising” expressly and unambiguously provide description, enablement, and support for the phrases “consisting essentially of” and “consisting of.” Lastly, all patents, patent publications, and other references cited and identified in the present specification are individually and expressly incorporated herein by reference in their entirety for the purpose of describing and disclosing, for example, the compositions and methodologies described in such publications that might be used in connection with the present invention. These publications are provided solely for their disclosure prior to the filing date of the present application. Nothing in this regard is or should be construed as an admission that the inventors are not entitled to antedate such disclosure by virtue of prior invention or for any other reason. All statements as to the date or representation as to the contents of these documents are based on the information available to the applicant and do not constitute any admission as to the correctness of the dates or contents of these documents.
193,415
11857801
DETAILED DESCRIPTION The various embodiments disclosed or contemplated herein relate to devices and systems for delivering near infrared light to the head and brain of concussion victims. As shown inFIG.1, the system10according to one embodiment has a light delivery device12, a power source14, and a controller16. These three components are coupled to each other as shown via a power/communication line18. More specifically, the power/communication line18couples the power source14to the controller16, and the line18further couples the controller16to the light delivery device12. Alternatively, the power/communication line18can constitute two separate lines, with one coupling the power source to the controller16and the other coupling the controller16to the light delivery device12. According to one embodiment, the power/communication line18is a cable18. Alternatively, the power/communication line18can be any known elongate line for transmitting both energy and electronic communication. In this embodiment, the controller16can be adjusted variably to deliver the required amount of power, and can be set to a timer to allow the power to be reduced or eliminated after a set, desired amount of time. The modulated power from the controller16is delivered to light delivery device12that is typically positioned on a patient's head. In one embodiment, the controller16is a rheostat16. Alternatively, the controller16can be any type of processor or computer16that can be used to control the various components of the system10and can have software (or have access to software) that can provide various processes and/or applications that provide additional control features to the controller16. The device12has LED lights (not shown) that convert the energy into therapeutic near infrared radiation, as will be described in additional detail below. The electronic circuits within the device12for powering and controlling the LEDs (not shown) are not shown here but are apparent to one of ordinary skill in the art. According to one embodiment, the energy source14is line voltage available via an outlet. Alternatively, the energy source14is a battery (or batteries). In a further alternative, the energy source14can be any known energy source for providing energy to a system such as described herein. The light delivery device12, in accordance with the specific implementation as shown inFIG.1, is a helmet12that is disposed on the patient's head. Alternatively, the light delivery device12is a flexible headcover. Specific embodiments of these types of light delivery devices will be described in further detail below. In a further alternative, the light delivery device12can be any known device that can be disposed on or over the patient's head (also referred to as “headgear,” a “headpiece,” and a “head covering”) and can contain the array of lights according to the various embodiments herein such that light treatment can be applied to the patient's head. The lights, according to one implementation, are LED lights. Various other embodiments discussed herein include LED lights. Alternatively, the lights in any of the various implementations disclosed or contemplated herein can be any known type of lights for use in a device for irradiating a patient in a fashion similar to the various device embodiments herein. It is understood that any of these various types of light delivery devices described herein can be incorporated into any of the embodiments disclosed or contemplated elsewhere herein. 
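For purposes of illustration only, the following minimal sketch (in Python, using hypothetical names and values that are not taken from this disclosure) models the general composition just described: a power source feeding a controller that modulates the power delivered to a light delivery device and removes power after a set treatment time. It is an assumption-laden approximation, not firmware for any particular embodiment.

import time

class LightDeliveryDevice:
    """Stand-in for the headgear/helmet LED array (hypothetical interface)."""
    def apply_power(self, watts: float) -> None:
        print(f"LED array driven at {watts:.1f} W")

    def power_off(self) -> None:
        print("LED array off")

class Controller:
    """Plays the role of the rheostat/processor: adjustable output plus a timer."""
    def __init__(self, device: LightDeliveryDevice, output_watts: float, duration_s: float):
        self.device = device
        self.output_watts = output_watts   # variable power level
        self.duration_s = duration_s       # treatment time before shutoff

    def run_treatment(self) -> None:
        start = time.monotonic()
        self.device.apply_power(self.output_watts)
        while time.monotonic() - start < self.duration_s:
            time.sleep(0.1)                # simple timer loop
        self.device.power_off()            # power removed after the set time

# Example with assumed values: 10 W output for a 20-minute session.
controller = Controller(LightDeliveryDevice(), output_watts=10.0, duration_s=20 * 60)
# controller.run_treatment()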
FIG.2shows another embodiment of a therapeutic system30. In this implementation, in addition to the light delivery device32, the power source34, and the controller36coupled to each other as shown via a power/communication line38, the system30also has a light measurement device40such as, for example, a spectrometer40, that is disposed within the light delivery device32and positioned against or otherwise in contact with or adjacent to the patient's scalp. It is understood that the components in this embodiment (such as the light delivery device32, the power source34, the controller36, and the power/communication line38) that have equivalent components in the system10described above are substantially similar to those equivalent components in the system10. The light measurement device40mentioned above, which is, in this example, a near infrared spectrometer40, is disposed within the headgear32to measure the irradiance delivered at the skin of the patient by the light array (not shown) in the headgear32. In one implementation, the spectrometer40is coupled to an energy/communication line42that can transfer energy and/or communication therein. According to one embodiment, the output from the spectrometer40can be (1) transmitted by a signal output cable44to a processor46that can convert the output to human-readable form (via, for example, an interface or display on the processor46), or (2) transmitted to a microprocessor within the controller36to modulate the output of the controller36to maintain a desired irradiance at the wearer's scalp by the LED light array (not shown) in the headgear32. The control algorithm to modulate the output of the controller36in response to data from the spectrometer40is not shown here but would be apparent to one of ordinary skill in the art. FIG.3shows a cross-sectional top view of one embodiment of the therapeutic headgear60, withFIG.4depicting one example of the skull80of a patient on which the headgear60might be positioned. The headgear60has an array of lights62,64,66disposed on the inner surface of the wall68of the headgear60to deliver near infrared radiation to the patient's head70. The lights62,64,66are selected to ensure uniform radiation around the entire head70. According to one implementation, the lights62,64,66are LED lights of specific intensities that are selected to ensure uniform radiation around the patient's head70. For example, in this specific embodiment, over the front sinuses72and mastoid processes74inside the cranium wall76, the lights are LED lights64that are selected to have anywhere between 15% and 50% more power or radiant intensity than LED lights62. Similarly, LED lights66over the sphenoid bones86(as best shown in the exemplary skull inFIG.4) have less power than the LED lights62. FIG.4, as mentioned above, shows an exemplary human skull80provided herein to show specific anatomic details relevant to various aspects of certain embodiments. Of particular note are the frontal sinuses82and the mastoid processes84. These features are air pockets82,84within the skull80in which there is a transition from bone, to air, to bone again. These transitions between media with different refractive indices can cause transmission loss of incident radiation due to scattering, reducing radiation transmission to the brain. Similarly, the lateral part of the frontal skull86, the posterior parietal region of the parietal skull88and the sphenoid bone90are all regions where the bone is especially thick86,88or particularly thin90.
To ensure that incident radiation is uniform at the surface of the brain (after it passes through the skull80), certain treatment device embodiments herein are configured as described to compensate for these variations in transmissibility of the skull80to incident radiation. In any of the various implementations set forth herein, including the system embodiments10and30set forth inFIGS.1and2and discussed above and any of the other systems and devices (including the various headgear and light assembly embodiments discussed in detail below) disclosed or contemplated herein, the application of the light to the patient's head (such as the head70discussed above, for example) can be varied based on intensity or duration of the light. That is, the fluence can be varied to achieve the desired output of each light in any system embodiment disclosed or contemplated herein. “Fluence” is the total output of a light source and is calculated as irradiance multiplied by the time it is applied. In the various systems and devices herein, fluence of any light or group of lights can be modulated by either changing the level of irradiance or the period of time over which the irradiance is applied. Thus, to increase fluence applied to a given part of the scalp by one or more lights, one may increase the time during which the irradiance is applied by that one or more lights. Fluence increases linearly with time, while irradiance remains constant. According to the various embodiments herein, the amount of fluence of a light or array of lights of any device disclosed or contemplated herein that penetrates to the cortex of the patient (referred to herein as “therapeutic fluence”) is a function of the amount of fluence that is applied to the scalp (referred to herein as “applied fluence”) and an inverse function of the patient's cranial thickness. As discussed elsewhere herein, cranial thickness varies between patients, thereby resulting in variations in the amount of light penetration through those patients' skulls. As a result, according to various implementations herein, the various systems and devices disclosed or contemplated herein can provide for tailoring the light therapy to each individual patient by adjusting the applied fluence based on measurements of the patient's skull to control the amount of therapeutic fluence delivered to that patient's brain. One exemplary method of using the various system and device embodiments herein to provide tailored light application to a patient's brain is provided as follows. In this exemplary implementation, after diagnosis of a concussion in the patient, the physician performs (or accesses from a previous, unrelated exam) measurements of the patient's cranium to determine its thickness. Measurement methods include any known method for measuring a patient's cranium, including, for example, use of ultrasound with the transducer applied to the scalp at one or various positions around the head, use of a computerized tomograph, which allows measurement of cranial thickness simultaneously at multiple positions on the head, and use of magnetic resonance imaging, which also allows for simultaneous cranial thickness measurement at multiple positions around the head. Once the patient's cranial thickness measurements are collected, those measurements can be used according to various embodiments of the devices or systems herein to tailor the level of applied fluence in order to arrive at the desired level of therapeutic fluence for the patient.
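As a purely illustrative aid, the following short sketch (with hypothetical numbers, not prescribed treatment parameters) works through the fluence relationship just described, including one assumed way of scaling exposure time in proportion to cranial thickness relative to a reference value:

def fluence_j_per_cm2(irradiance_mw_per_cm2: float, time_s: float) -> float:
    """Fluence = irradiance x time (mW/cm2 * s = mJ/cm2, converted to J/cm2)."""
    return irradiance_mw_per_cm2 * time_s / 1000.0

def adjusted_time_s(reference_time_s: float,
                    patient_thickness_mm: float,
                    reference_thickness_mm: float) -> float:
    """Scale treatment time in proportion to cranial thickness relative to the
    reference, holding irradiance constant (one assumed way to adjust applied fluence)."""
    return reference_time_s * (patient_thickness_mm / reference_thickness_mm)

# Example: 20 mW/cm2 applied for 15 minutes gives 18 J/cm2 of applied fluence.
applied = fluence_j_per_cm2(20.0, 15 * 60)            # -> 18.0 J/cm2
# A skull 10% thicker than the reference gets 10% more exposure time.
t = adjusted_time_s(15 * 60, patient_thickness_mm=7.7, reference_thickness_mm=7.0)
print(round(applied, 1), round(t / 60, 1))             # 18.0 J/cm2, 16.5 minutes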
More specifically, the applied fluence can be adjusted based on a reference irradiance and time for a reference cranial thickness. In certain embodiments, the adjustment is carried out by the physician adjusting physical controls (power and timing) on the system/device based on consultation with the reference thickness and reference therapeutic fluence level, which can be provided to the physician in a hardcopy manual, an electronic manual, an electronic app, or any other format or medium. Alternatively, any of the system or device embodiments herein can have software that contains the reference cranial thickness and reference therapeutic fluence level such that the software provides for automatic adjustment of the applied fluence based on the reference information to arrive at the correct level of therapeutic fluence. More specifically, in those implementations in which the physician manually adjusts the system/device, the physician can adjust either the irradiance or time of light application by the device/system to increase the applied fluence delivered if the patient's cranial thickness is greater than the reference value, or to decrease the irradiance or time of light application to decrease the applied fluence if the patient's cranial thickness is smaller than the reference value. The increase or decrease of the therapy irradiance or time compared to the reference values would be proportional to the relative increase or decrease of the patient's cranial thickness compared to the reference cranial thickness. In certain implementations, the adjustment of the applied fluence can be done as an average for the entire cranium or, alternatively, the adjustment can be done by zones of lights disposed adjacent to target zones of the patient's head, including, for example, the frontal sinus, frontal, parietal, crown, occipital, mastoid, and temporal zones. In accordance with those embodiments in which any of the device or system embodiments herein includes software that utilizes reference cranial thicknesses and corresponding reference therapeutic fluence levels, the physician can, in certain of those embodiments, enter the cranial measurements into the computer interface, and the software automatically compares the entered cranial thickness measurements against reference values and increases or decreases the irradiance or time compared to the reference values to increase or decrease the applied fluence in proportion to the increase or decrease of the patient's cranial thickness with respect to the reference values. According to certain implementations, the software can provide options for the physician to choose adjustment in power, time of therapy, or both. As such, the software would allow for the device/system to adjust the applied fluence delivered to multiple zones around the head, thereby tailoring the therapy to the patient. FIGS.5and6show one embodiment of a light delivery device100, which in this specific implementation is a therapeutic headgear100. The headgear100fits around the concussion victim's head and is, in one implementation, attached to a controller (such as, for example, one of the controllers16,36discussed above) via an energy/communication line (such as, for example, the lines18,38described above). It is understood that the headgear100can be incorporated into any of the system embodiments disclosed or contemplated herein, and can incorporate any of the features and/or components of any of the embodiments disclosed or contemplated herein.
The headgear100can have a material (such as a known foam or other known material used for the interior of a motorcycle helmet or other similar headgear) disposed on the inner wall or surface of the headgear100that can be shaped and structured to conform to a user's head. Further, the therapeutic headgear100has an LED light array102disposed or otherwise arranged on the inner surface or wall of the headgear100. In one implementation, the lights of the light array102are arranged in a configuration similar to the LED lights62,64,66inFIG.3, with higher and lower intensity LED lights62,64,66as described above. In accordance with certain exemplary embodiments, including the headgear100, to make donning the headgear100(or otherwise positioning the headgear100over a patient's head) simpler, the headgear100has movable, hinged gullwings (or “panels”)104that move between a closed position and an open position. The closed position, according to one implementation, is depicted inFIG.5.FIG.6, in contrast, depicts the gullwing panels104in the open position for donning before use. There is also an array of LED lights102distributed on the inner surface or wall of the gullwing panels104in this implementation. As discussed above, this array102can be arranged as disclosed with respect toFIG.3. In use, the panels104can be placed in their open position or configuration prior to positioning the headgear100on the patient's head. Once the headgear100is positioned as desired, the panels104can be moved into their closed position or configuration prior to treatment. Alternatively, the light delivery device100, and any similar device herein, can have no moveable panels. FIG.7shows another embodiment of a light delivery device120that is a therapeutic headgear120. In this embodiment, in addition to the moveable side panels122that operate in substantially the same way and have substantially the same features as the panels104discussed above, the headgear120has a moveable, hinged rear gullwing panel124. Like the side panels122, the rear panel124has an array of LED lights126distributed on the inner surface or wall thereof in this implementation. As discussed above, this array126can be arranged as disclosed with respect toFIG.3. Further, like the headgear100, the headgear120has an LED light array126disposed or otherwise arranged on the inner surface or wall of the headgear120, which can also be arranged in a fashion similar to the configuration ofFIG.3. The rear panel124is moveable between a closed position (not shown) and an open position as shown inFIG.7. In one exemplary implementation, the moveable rear panel124further simplifies or provides even greater ease in donning the headgear120or otherwise positioning the headgear120on a patient's head. In use, the side panels122and the rear panel124can be placed in their open positions or configurations prior to positioning the headgear120on the patient's head. Once the headgear120is positioned as desired, the panels122,124can be moved into their closed position or configuration prior to treatment. FIG.8shows yet another embodiment of a light delivery device140that is a therapeutic headgear140. In this embodiment, the wall or body material142of the headgear is an elastic, stretchable material that can be stretched to fit over the user's head. For example, in one embodiment, the headgear140fits over the patient's head in a fashion similar to a snugly fitting stocking cap or other such head covering.
The material can be any known stretchable material that can be incorporated into headgear such as this. Further, like the headgear100,120, the headgear140has an LED light array144disposed or otherwise arranged on the inner surface or wall of the headgear140, which can also, in certain embodiments, be arranged in a fashion similar to the configuration ofFIG.3. FIG.9depicts a further implementation of a light delivery device160that is a therapeutic headgear160. This headgear embodiment160has movable, hinged lateral gullwing panels162substantially similar to the side panels104,122discussed above. Further, like the headgear100,120,140, the headgear160has an LED light array164disposed or otherwise arranged on the inner surface or wall of the headgear160(including the panels162), which can also, in certain embodiments, be arranged in a fashion similar to the configuration ofFIG.3. In this specific embodiment, the headgear160also has a visor166disposed on a front portion of the headgear160that will generally be positioned in front of the eyes of the patient when the headgear160is disposed on the patient's head. As such, the visor166can block or reduce the external or ambient light reaching the patient's eyes. In one embodiment, the visor166is substantially opaque and allows essentially no light to reach the patient's eyes. Further, the headgear160in this specific implementation has ear covers168disposed on a lower portion of the panels162such that the covers168are positioned adjacent to (and typically in contact with) the patient's ears, thereby reducing or blocking any external or ambient sound from reaching the patient's ears. It is understood that the visor and/or the ear covers can be incorporated into any of the headgear embodiments disclosed or contemplated herein. It is understood that the various headgear device embodiments disclosed above (including headgears100,120,140,160) or contemplated herein can be incorporated into any system as disclosed or contemplated herein, including either of systems10,30described above. Further, it is understood that the various headgear implementations herein can be operated as described in detail above to tailor the applied fluence to each specific patient to achieve the desired therapeutic fluence via manual adjustments or software. In certain of those embodiments, any of the headgear embodiments herein can have two or more zones of light assemblies that are adjacent to certain zones of the patient's skull, thereby providing for variation in applied fluence between those zones based on physical differences between the skull zones. That is, each of the two or more light assembly zones can be operable separately from the other light assembly zones, thereby allowing each zone to be controlled separately with respect to both intensity and duration. In certain specific implementations, there are specific lighting assembly zones that correspond to each of the specific skull zones having different physical characteristics that impact how much light can pass through each of those skull zones. As such, the two or more lighting assembly zones in any headgear embodiment herein allow for control of the duration and/or intensity of the irradiance generated by each of those zones.
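The per-zone control concept just described can be illustrated with the following minimal sketch; the zone names and dose values are hypothetical assumptions for illustration, not parameters taken from this disclosure:

from dataclasses import dataclass

@dataclass
class ZoneSetting:
    irradiance_mw_per_cm2: float   # intensity for this zone's LED array
    duration_s: float              # how long this zone stays on

# Each skull zone gets its own intensity and duration, so zones over thicker
# bone (or over sinus/mastoid air pockets) can receive a different dose.
zone_settings = {
    "frontal_sinus": ZoneSetting(irradiance_mw_per_cm2=24.0, duration_s=18 * 60),
    "parietal":      ZoneSetting(irradiance_mw_per_cm2=20.0, duration_s=15 * 60),
    "temporal":      ZoneSetting(irradiance_mw_per_cm2=16.0, duration_s=15 * 60),
}

for zone, s in zone_settings.items():
    fluence = s.irradiance_mw_per_cm2 * s.duration_s / 1000.0  # J/cm2
    print(f"{zone}: {fluence:.1f} J/cm2")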
In certain implementations, the adjustment of the irradiance is made via either (1) manual control in the form of a physical on/off switch (or any other known type of manual control) coupled with each light assembly zone or (2) automated control in the form of either hardware logic or software controlling activation or deactivation of the power to the various light array zones. One exemplary embodiment of automated control via a hardware logic controller is a system having an Arduino Uno™, which is commercially available from Arduino (www.arduino.cc), as the logic controller that is programmed for this control. In those implementations in which a logic controller is used, the system would allow for the physician (or other user) to adjust a control (such as a knob, button, or any other known control) to set the duration and/or intensity for each zone. Alternatively, in those system embodiments having a controller with software, the system would provide for an interface into which the physician (or other user) would be able to input the patient's specific cranial measurements and also select the irradiance variable to be adjusted (either duration or intensity). The software would then automatically calculate the appropriate time or intensity for each light assembly zone required by the cranial measurements to achieve the appropriate therapeutic fluence. In either type of system (hardware or software), the controller then controls the application of irradiance individually for each light assembly zone, via control of the relays, so that proper fluence is delivered as determined by the skull thickness measurement and correlated therapy time or intensity. When the elapsed time for a zone is equal to the required time, or the irradiance intensity for that zone is equal to the required intensity, and thus proper fluence is achieved, the controller will trigger the relay to the off mode so that power is shut off for that zone and the therapy delivery is ended. In certain implementations, the various system and headgear embodiments disclosed or contemplated herein can be utilized in a hospital or clinic setting such that the headgear is used repeatedly by different patients. As best shown inFIG.10, one approach to help maintain the cleanliness and hygiene of the interior of the headgear is to provide a cap180for the patient to place on the patient's head prior to donning the headgear device. This cap180, according to this embodiment, can have a cap body182and an elastic band184disposed around the cap body to help secure the cap180to the patient in a fashion and form similar to a shower cap. In one implementation, the cap body182is made of polyethylene terephthalate copolymer (“PETG”), which has a very low absorbance of near infrared wavelengths. The cap180with the PETG body can prevent the headgear device from becoming soiled while also allowing maximal transmission of therapeutic energy. Alternatively, the cap180can have a cinchable cord or can take any other known cap form. In a further alternative, the cap body182can be made of any material that has a low absorbance of near infrared wavelengths, thereby allowing for maximum transmission of the therapeutic energy. According to one embodiment, any of the headgear embodiments disclosed or contemplated herein can have the following internal electrical arrangement and connections in order to ensure delivery of electrical power to all of the light assemblies in the light array within the headgear.
According to this implementation, the headgear can have a central large-amperage supply line and a large-amperage neutral/ground line both extending along a middle portion of the headgear. Further, the arrangement has relatively smaller wires that extend from the central supply line to the individual light arrays and return to the large neutral line. In certain embodiments, the relatively smaller wires are thinner and more flexible and have a lower amperage than the central supply line. This arrangement reduces the amount of wire supplying the light arrays and thus reduces the space requirements in the headgear or light delivery device, thereby reducing the size requirements thereof. As shown inFIGS.11A-11C, according to one implementation, the internal electrical arrangement200is a flexible machined, extruded, or3D printed structure200with two or more channels running the length of the light delivery device. More specifically, as best shown inFIGS.11A and11B, the specific exemplary embodiment depicted has four channels204formed in the headgear wall202, with an elongate conductor206disposed in each channel204. The elongate conductors206are flexible, or malleable, elongate wires or foils made of conductive material (such as, for example, copper) to carry current from the central supply line to the light assemblies and/or the light sections (as discussed elsewhere herein). The channel walls208separate each channel204, thereby electrically isolating the conductors206. In one embodiment, each conductor206can carry enough power for the light array zone or section it is intended to power, or enough power for the return to the neutral/ground line. In one exemplary implementation, the amperage level for the power delivery conductors206will be up to 20 amps, while the neutral/ground conductors206will be able to carry up to 40 amps. In certain embodiments, the channels204can have screws or other attachment devices (not shown) disposed through the conductors and into the polymer base of the headgear wall202for easy attachment of the wires (not shown) from the light assemblies or arrays thereof. Alternatively, it is understood that spring contacts and solder joints or any other known attachment mechanisms are also potential designs for attachment of the wires. According to various implementations, the number of conductors206(and thus channels204) can vary according to the configuration of the electrical arrangement200and the number of light array sections in the headgear wall202. For example, in one implementation, the arrangement200can have up to nine conductors, with eight of the conductors being power circuits and one of the conductors being a ground. Alternatively, the arrangement200can have any number of conductors. In certain embodiments, the thickness and/or the height of the walls208can depend on the amount of power being transmitted through the conductors206. For example, in various implementations, the ground conductor206can be carrying a lot more power than the power conductors206, and thus the walls208surrounding the channel204containing the ground conductor206can be thicker and/or taller than the walls208surrounding the other channels204. The heights of the walls208are shown at different heights to reflect this possibility. In accordance with the embodiment ofFIG.11C, the headgear wall202has an outer layer (or “cover layer”)210that is disposed over the conductor layer212, which contains the channels204in which the elongate conductors206(as best shown inFIG.11A) are disposed. 
Further, this specific implementation has three light array sections214A,214B,214C that are physically separated by joints or gaps216defined therebetween. Alternatively, there can be two, four, five, six, or any number of separate light array sections in the headgear wall202, thereby allowing for separate control of each section as described in various embodiments elsewhere herein. FIG.11Ddepicts a different configuration to address the two or more light array sections (such as sections214A,214B,214C as discussed above) in the headgear wall202. That is, in order to ensure electrical connection between the different sections across the joints/gaps (such as joints/gaps216discussed above), a second set of conductors218A is provided in which the conductors218A are disposed within the conductor layer212, rather than disposed within channels on an outer surface of the conductor layer212. Thus, the configuration has the standard conductors208disposed within the channels204, but it also has the inner conductors218A disposed within the conductor layer212as shown, with electrical connections218B coupling the standard conductors208to the inner conductors218A. Thus, for each section joint (such as joint216), an inner (or submerged) conductor218A is provided such that the conductor218A can extend along the headgear wall202to the central line. In accordance with one implementation,FIG.11Edepicts a light delivery device201that is a helmet203having three exemplary light assemblies205disposed therein, along with an electrical arrangement207similar to the arrangement embodiments discussed above. The electrical structure207has a central supply line209A and a central ground line209B extending along a central portion of the helmet203. The light assemblies205are coupled to the central lines209A,209B via the conductive wires211. In one implementation, the conductive wires211are the conductors206discussed above. In this implementation, the conductive wires211are small gauge and flexible. The central lines209A,209B, by comparison, are large gauge and able to carry a larger current, such as for a plurality of light assemblies205. FIG.12shows one embodiment of a light assembly220that can be incorporated into any of the systems or headgear embodiments disclosed or contemplated herein. The assembly220maintains at least one light222(such as an LED light222, for example) at a substantially constant distance from the scalp224of the patient. The assembly220has a tube or other cylindrical structure226that is disposed through an opening in the wall228of the headgear. The tube226has an LED light222disposed therein and a plate or cover230disposed on the proximal end of the tube226. The tube226is substantially transparent to near infrared light, and the light222is positioned at a desired distance A from the bottom of the tube226. The assembly220also has a power supply cable232that is coupled to the light222and extends out of the tube226through an opening in the plate230. In one embodiment, the cable232is attached to plate230such that the cable232does not move with respect to the plate230. The cable232can be flexible or rigid. Alternatively, the length of the cable232within the tube226can either be flexible or rigid, while the length of the cable232disposed outside of the tube226can, independently of the length inside the tube226, be flexible or rigid. According to one embodiment, the assembly220is tensioned such that it is continuously urged toward the patient's scalp224when the assembly220is not in contact with the scalp224. 
For example, in the specific embodiment ofFIG.12, tension components234(which, in this case are springs234) are coupled to the distal side of the plate230and to the outer surface of the wall228of the headgear such that the tension components234urge the plate230(and thus the assembly220) toward the wall228. In one embodiment, the tension components234compensate for any variations or irregularities in the shape of the patient's skull that result in variations in the distance between the scalp224of the patient and the wall228of the headgear. According to one embodiment, the force of the tension in the tension components234is not so great that the force with which the distal end of the tube226is urged against the scalp224causes any pain to the patient, but it is sufficiently strong to ensure that the distal end of the tube226remains in contact with the scalp224. FIG.13is another embodiment of a light assembly240that can be incorporated into any of the systems or headgear embodiments disclosed or contemplated herein. Any of the components or features of this assembly240that are not expressly discussed herein are substantially the same as the corresponding components in the assembly220embodiment discussed above. In this exemplary implementation, the light assembly240is not disposed through the headgear wall242. Further, this embodiment has a tension component244(which, in this case, is a spring lever244) that is coupled to the plate246and to the inner surface of the wall242of the headgear such that the tension component244urges the plate246(and thus the assembly240) toward the patient's scalp248. In one embodiment, like the previous embodiment ofFIG.12, the tension component244compensates for any variations or irregularities in the shape of the patient's skull that result in variations in the distance between the scalp248of the patient and the wall242of the headgear. According to one embodiment, like the previous embodiment, the force of the tension in the tension component244is not so great that the force with which the distal end of the tube250is urged against the scalp248causes any pain to the patient, but it is sufficiently strong to ensure that the distal end of the tube250remains in contact with the scalp248, thereby ensuring that the light252is disposed at a predetermined distance from the scalp248. FIG.14depicts another embodiment of a light assembly260that can be incorporated into any of the systems or headgear embodiments disclosed or contemplated herein. Any of the components or features of this assembly260that are not expressly discussed herein are substantially the same as the corresponding components in the assembly220,240embodiments discussed above. In this exemplary implementation, a portion of the light assembly260is disposed within an opening or cavity262defined or otherwise formed in the headgear wall264as shown. Further, this embodiment has a tension component266(which, in this case, is a tension spring266) that is coupled to the plate268and to the inner surface270of the opening262of the headgear such that the tension component266urges the plate268(and thus the assembly260) toward the patient's scalp272. In one embodiment, like the previous embodiment, the tension component266compensates for any variations or irregularities in the shape of the patient's skull that result in variations in the distance between the scalp272of the patient and the wall264of the headgear. 
According to one embodiment, like the previous embodiments, the force of the tension in the tension component266is not so great that the force with which the distal end of the tube274is urged against the scalp272causes any pain to the patient, but it is sufficiently strong to ensure that the distal end of the tube274remains in contact with the scalp272. FIG.15shows a further embodiment of an LED assembly280according to any of the assembly embodiments disclosed or contemplated herein and its use with a light measurement device (in this case, a spectrometer)282within the headgear that can be incorporated into any of the systems or headgear embodiments disclosed or contemplated herein. Any of the components or features of this assembly280that are not expressly discussed herein are substantially the same as the corresponding components in the assembly220,240,260embodiments discussed above. While the plate attached to the top of the tube284and the tension component (which can be any of the tension component embodiments disclosed or contemplated herein) are not shown, it is understood that any plate and/or tension component as disclosed or contemplated herein with respect to any other light assembly embodiment can be incorporated herein. In this embodiment, the spectrometer282is disposed at the distal end of the assembly280and is positioned against the patient's scalp286underneath the patient's hair288. In one embodiment, the spectrometer282is powered by a power supply cable290, and the output of the spectrometer282is transmitted to a human-readable processor or a microprocessor or any known controller by data transmission cable292. According to certain embodiments, an array of these light assemblies280can be incorporated into the system299ofFIG.2, which specifically contemplates the use of a spectrometer40within the light delivery device32as discussed in detail above. Alternatively, instead of the light assembly embodiments as depicted inFIGS.12-15in which the lights are disposed inside of cylindrical structures (“through-hole light assemblies”), other implementations relate to light assemblies having one or more LED lights mounted on a surface of a printed circuit board. One such example is depicted inFIG.16, in which the light assembly300has a cylindrical or tubular structure302, a printed circuit board (“PCB”)304disposed at the distal end306of the structure302, LED lights308disposed on the distal surface of the PCB304, and power supply (and ground) cables310coupled to connectors312that are coupled to the PCB304. In one embodiment, the assembly300can also have a lens314as shown that is coupled to the distal end306of the structure302such that the lens314is disposed between the lights308and the scalp of the patient (not shown). In this specific exemplary embodiment as shown, the light assembly300has three LED lights308. Alternatively, the number of LED lights308on the PCB304can range from one to eight LED lights308. In a further alternative, the PCB304can have any number of LED lights308. The LED lights308can be any known LED lights308. In certain implementations, at least one of the LED lights308on the PCB304can emit light of one wavelength, while at least one other LED light308can emit light of another wavelength. Alternatively, all of the lights308on the PCB304emit light of the same wavelength. In accordance with certain embodiments, the LED lights308are coupled to the PCB304via surface mount pads. 
Alternatively, the lights308can be coupled to the PCB304in any known fashion using any known mechanism or method. It is understood that the resistors (not shown) that are coupled to and control the voltage and current to the LED lights308are also mounted on the PCB304and also that the traces (not shown) that couple the power supply cables310to the LED lights308are built into the PCB304. In various implementations, the PCB-mounted LED light assembly300has a lower profile in comparison to any of the through-hole light assemblies described above. That is, mounting the lights308on the distal end of the cylindrical structure302allows for the overall length of the cylindrical structure302to be less than the length of the cylindrical structures of the through-hole light assemblies as discussed in detail above. As such, the PCB-mounted LED light assembly embodiments disclosed or contemplated herein (such as the assembly300) allow for a lower profile structure that can result in the wall of the light delivery device in which the light assemblies300are disposed being thinner or requiring less thickness in comparison to any light delivery device containing through-hole light assemblies. In various embodiments, an array of the PCB-mounted LED light assemblies (such as assembly300) will be provided in any of the light delivery device embodiments disclosed or contemplated herein. In certain implementations, the plurality of light assemblies on one light delivery device can include light assemblies of different sizes such that the PCB boards are of different sizes. As such, some of the light assemblies will have larger PCB boards that contain more LED lights (like a PCB board containing eight LED lights, for example), while some of the light assemblies will have smaller PCB boards that contain fewer LED lights (like a PCB board containing two LED lights). Alternatively, the PCB304on the light assembly300can be a multi-layer board that is segmented into multiple pieces such that the various pieces are somewhat flexible in relation to each other, thereby providing for a PCB304that is conformable to or flexible in relation to the patient's head. The mix of light assemblies of different sizes in the light delivery device and/or the flexibility of the PCBs therein will provide for an inner surface of the light delivery device having either many smaller PCBs or a mix of PCBs of varied sizes such that the resulting configuration will fit around the patient's head more easily than can be accomplished with a smaller number of larger PCBs. As mentioned above, the light assembly300inFIG.16, according to one embodiment, has a lens314incorporated therein. Any of the components or features of this assembly300that are not expressly discussed herein are substantially the same as the corresponding components in the assembly220,240,260,280embodiments discussed above. It is understood that any of the other light assembly embodiments disclosed or contemplated herein, including light assemblies220,240,260,280as discussed above, can have a lens (such as lens314) incorporated therein. The lens314is coupled via a sleeve316to the distal end306of the cylindrical structure302. In one embodiment, the sleeve316is transparent, thereby allowing light from the LED lights308to pass therethrough. Alternatively, any structure, such as arms or any such attachment components, can be incorporated into the assembly300to couple the lens314to the cylindrical structure302. 
The lens314can help to reduce “hot spots” or uneven distribution of light intensity on the patient's scalp by “evening out” or “smoothing” the application of light across the scalp surface. That is, many lights, including, for example, LEDs, have a relatively small “view angle,” which is the angle from the center of the light at which the intensity of emitted light drops to half its maximal intensity. One example of the view angle (v) is depicted inFIG.18, as discussed below. The result of this small view angle is that the lights of the light assemblies must be positioned relatively close together (in comparison to lights with larger view angles) in order to assure thorough and even application of the light on the patient's scalp. However, the close proximity of the lights can result in “hot spots” of increased intensity on the irradiated surface. The lens314is mounted or otherwise disposed between the light and the irradiated surface (scalp) such that the lens can even out the application of the light, thereby minimizing hotspots and improving the overall evenness of the light distribution across the scalp. In one embodiment, the lens314can be a concave lens or a Fresnel lens. Alternatively, the lens314can be any known lens that can help to smooth out light distribution. In those system/device implementations discussed herein that incorporate a light measurement device, the device can be used to monitor energy delivery at the scalp of the patient (including, for example, through thick hair) to ensure the power generated by the light assemblies is sufficient to ensure adequate irradiance at the cortex. While the various specific embodiments discussed herein include a spectrometer, it is understood that any of these embodiments can have any type of light measurement device, including, for example, a photometer, a luminance meter, an illuminance meter, a spectroradiometer, or a light meter. In addition, according to further embodiments, any of the various system and device embodiments herein can also include feedback-controlled software that functions in conjunction with the light measurement device to monitor the irradiance delivered to the patient's scalp (including, in various embodiments, through the patient's hair) and provide feedback control to ensure sufficient therapeutic fluence is delivered. According to one implementation, the control software runs a control loop by using the light measurement device positioned at the scalp to calculate the fluence for a predetermined period of time. The light measurement device can be placed anywhere along the patient's scalp such that it is between the scalp and the light assembly (or light assemblies). In certain embodiments in which the goal is to adjust the applied fluence to address the patient's hair thickness, the light measurement device is placed specifically in the area of the patient's scalp where the hair is thickest. In accordance with one implementation, the predetermined period of time can range from about a millisecond to about 15 minutes. Alternatively, the period of time can range from about a millisecond to about 10 minutes. In a further implementation, the period of time is a millisecond. In yet another alternative, the predetermined period of time is any relatively short period of time that does not disrupt the method of use as described herein. In use, according to one embodiment, any system disclosed or contemplated herein having the feedback control software can operate in the following fashion. 
First, the light measurement device is placed in the desired location, and the location on the patient's scalp is entered into the software. Next, the control software is actuated to trigger one or more predetermined light assemblies to radiate light for the predetermined period of time such that the light measurement device collects information about the fluence and transmits that information to the software. The software compares the collected fluence data to the reference (or calibrated) value for fluence (such as the reference fluence for no hair) and calculates the appropriate level of applied fluence to achieve the desired level of therapeutic fluence. At this point, the light measurement device is removed, and the software provides adjusted actuation to the one or more light assemblies to radiate light at the adjusted applied fluence, thereby resulting in the desired applied fluence that generates the desired therapeutic fluence. In certain implementations, the software would also take into account the amount of fluence applied during the measurement period and adjust the timing and/or power of the therapy cycle accordingly. FIG.17depicts, according to one embodiment, the relative position of multiple assemblies of at least one light320with respect to each other, and with respect to the wearer's scalp322. The lights324of a characteristic radiant intensity R also have a characteristic viewing angle v when viewed from directly below the light source324when it is pointed directly down. The viewing angle v is the angle from directly below the light324at which the irradiance drops to 50% of its peak value. The lights324are positioned a distance x (labeled D1) from the scalp322and a distance d (labeled D2) apart from each other. The incident angle a (labeled I) is the angle, measured from the axis directly below the LED, from which the LED is viewed. The irradiance on the patient's scalp322resulting from the lights when pointed toward the scalp322may be characterized according to the following equation 1:

I = (R / x²) · cos(a / v)   (1)

where a, v, R and x are described previously. As can be seen fromFIG.18, the resultant irradiance from a light324varies on the scalp322with position and angle with respect to the light324. An examination of equation 1 will make it clear there are many different combinations of characteristic LED light radiant intensity, separation from the scalp and viewing angle that can be used to obtain a desired irradiance at the skin. Table 1 has examples of such combinations, all at incidence angle a set to zero, thus applicable to any viewing angle.

TABLE 1
Irradiance, I (mW/sq cm)   Radiant Intensity, R (mW)   Separation from skin, x (cm)
15                         15                          1.000
15                         30                          1.414
15                         7.5                         0.707
30                         30                          1.000
30                         60                          1.414
30                         15                          0.707

The relative position of the light assemblies with respect to each other is determined by the characteristic viewing angle of the lights. The relationship of the separation d between light assemblies to the viewing angle of the light is shown in equation 2:

d = 2 · x · tan(v)   (2)

It is again clear upon examination that there are many different combinations of viewing angle v and LED/scalp separation x that can be used to obtain a desired separation between the light assemblies. Examples of various combinations are shown in Table 2.

TABLE 2
Separation between LEDs, d (cm)   Separation from skin, x (cm)   Viewing Angle, v (radians)
2.00                              1.000                          0.785
2.00                              1.414                          0.615
2.00                              0.707                          0.955
3.00                              0.500                          1.249
3.00                              0.250                          1.406
3.00                              2.000                          0.643
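To make these relationships easy to check, the short Python sketch below evaluates equations 1 and 2 and reproduces the rows of Tables 1 and 2. It is only an illustrative sketch: the function and variable names are invented here for clarity rather than taken from any embodiment, the default viewing angle is an arbitrary example, and the proportional fluence-correction helper at the end is an assumption, since the feedback-control discussion above does not specify the exact adjustment formula.

import math

def irradiance(R_mW, x_cm, a_rad=0.0, v_rad=math.pi / 4):
    """Equation (1): irradiance (mW/sq cm) on the scalp from a light of
    radiant intensity R_mW (mW) at separation x_cm (cm), incidence angle
    a_rad (radians), and characteristic viewing angle v_rad (radians)."""
    return (R_mW / x_cm ** 2) * math.cos(a_rad / v_rad)

def assembly_spacing(x_cm, v_rad):
    """Equation (2): separation d (cm) between neighboring light assemblies
    for a given scalp separation x_cm and viewing angle v_rad."""
    return 2.0 * x_cm * math.tan(v_rad)

def adjusted_applied_fluence(desired_therapeutic, reference_reading, measured_reading):
    """Assumed proportional correction for the feedback loop described above:
    scale the target fluence by the ratio of the reference (no-hair) reading
    to the reading measured through the patient's hair."""
    return desired_therapeutic * (reference_reading / measured_reading)

if __name__ == "__main__":
    # Rows of Table 1 (incidence angle a = 0, so the result holds for any viewing angle).
    for R, x in [(15, 1.000), (30, 1.414), (7.5, 0.707),
                 (30, 1.000), (60, 1.414), (15, 0.707)]:
        print(f"R = {R:5.1f} mW, x = {x:.3f} cm -> I = {irradiance(R, x):5.1f} mW/sq cm")

    # Rows of Table 2.
    for x, v in [(1.000, 0.785), (1.414, 0.615), (0.707, 0.955),
                 (0.500, 1.249), (0.250, 1.406), (2.000, 0.643)]:
        print(f"x = {x:.3f} cm, v = {v:.3f} rad -> d = {assembly_spacing(x, v):.2f} cm")

Run as-is, the printed values agree with Tables 1 and 2 to within rounding, which offers a quick way to vet a candidate combination of radiant intensity, scalp separation, and viewing angle before committing to a particular array layout.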
Depicted inFIG.18is a graphic representation of irradiance at the patient's scalp from lights of radiant intensity R (340) for a given combination of parameters, the lights being separated from each other by a distance d (labeled D3), separated from the scalp by a distance x (labeled D4), and having a characteristic viewing angle v (labeled VA). Also shown is the variation in the irradiance at the scalp with respect to incident angle a (labeled IA) across a distance of the scalp. In certain implementations, the various systems and/or devices disclosed or contemplated herein can include integrated safety features to prevent misuse and/or injury. For example, in one embodiment, any system embodiment herein can have control software or hardware components that prevent use of the system/device for longer than a maximum use time that is set by a healthcare provider. For example, the maximum use time in one embodiment can be one hour. Alternatively, the control software or hardware components can prevent use of the system/device more than a maximum number of uses over a predetermined period of time. For example, the maximum number of uses can be two uses over 48 hours. In a further alternative, the control software or hardware components can provide both a maximum use time and a maximum number of uses over a predetermined period of time. According to one exemplary embodiment, the system controller can have a counter/timer that would track the amount of time that the patient is exposed to the therapeutic energy such that the controller can shut down the light arrays when the maximum time period has been reached. Further, the controller can also track the number of uses over any predetermined time period and can prevent activation of the light arrays for the remainder of the time period after the maximum number of allowed uses has been reached. It is understood that the various parameters for these safety control features can be inputted by the physician or other healthcare provider prior to use by the patient. That is, the appropriate limits can be decided by the physician, and then the physician or other healthcare provider can input those limits into the system via the interface. In certain implementations, the controller would also provide a locking mechanism, such as a passcode or other such mechanism, to prevent the patient from adjusting the safety features. It is understood that the various system embodiments having the safety features as described in detail above will have to have uninterrupted power even when the system is not in use such that the controller can continue to track passage of time and the usage of the system as described above. In one embodiment, the power source can be a battery or alternatively can be electricity delivered from an outlet. In a further implementation, the power source can be any known power source that provides uninterrupted power.
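To illustrate the lockout behavior described above, and the per-patient variant discussed below, the following is a minimal Python sketch. It assumes the example limits given in the text (a one-hour maximum session and two uses per 48 hours), and the class, method, and identifier names are invented for this sketch rather than drawn from any actual controller software.

from datetime import datetime, timedelta

class UsageLimiter:
    """Tracks session starts per patient and enforces provider-set limits:
    a maximum session length and a maximum number of uses per rolling window."""

    def __init__(self, max_session=timedelta(hours=1),
                 max_uses=2, window=timedelta(hours=48)):
        self.max_session = max_session    # e.g., one hour of light exposure
        self.max_uses = max_uses          # e.g., two uses...
        self.window = window              # ...per 48-hour period
        self._starts = {}                 # patient_id -> list of session start times

    def may_start(self, patient_id, now=None):
        """True if this patient has not exhausted the allowed uses in the window."""
        now = now or datetime.now()
        recent = [t for t in self._starts.get(patient_id, [])
                  if now - t < self.window]
        self._starts[patient_id] = recent
        return len(recent) < self.max_uses

    def start_session(self, patient_id, now=None):
        """Record a session start and return the time at which the controller
        should shut down the light arrays."""
        now = now or datetime.now()
        if not self.may_start(patient_id, now):
            raise PermissionError("maximum number of uses reached for this period")
        self._starts.setdefault(patient_id, []).append(now)
        return now + self.max_session

# Example: a controller shared by several patients in a clinic setting.
limiter = UsageLimiter()
shutoff = limiter.start_session("patient-A")      # allowed; light arrays power on
print("Shut off light arrays at", shutoff)
print("Patient A may start another session?", limiter.may_start("patient-A"))  # True (1 of 2 uses)

A real controller would also rely on the uninterrupted power source noted above so that the stored timestamps survive between sessions; persisting them to non-volatile storage would serve the same purpose in this sketch.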
In accordance with certain specific implementations, the various systems herein can also have a communication component. That is, the controller in the system can be coupled to a communication transmission mechanism such that the controller can transmit messages to a phone, a computer, or any other type of communication device via text message, e-mail, or any other form of communication. Alternatively, the communication can be an alert that is provided by the system or device itself in the form of a visual or audible alert. In one embodiment, the controller can transmit messages or alerts to the patient to notify the patient that the use of the system has exceeded the safety limitations in period of use, number of uses, or some other parameter. According to a further embodiment, the controller can transmit messages or alerts to a healthcare provider notifying the provider that the safety parameters have been exceeded. In a further embodiment, the controller can transmit messages or alerts to remind the patient (or the healthcare provider) that it is time to use the system again. In use, according to one embodiment, a patient can use the system and have energy applied to the patient's skull for the first prescribed time period at the prescribed applied fluence levels. When the patient next attempts to use the system for her next therapy, the controller compares the current date and time to the date and time of the prior therapy. If the elapsed period is equal to or greater than the predetermined time period, the controller will activate the power to the light assemblies. On the other hand, if the elapsed period is less than the predetermined safety time period, the controller will not allow activation of the light assemblies. In an alternative embodiment in which the system is used in a group setting, such as a clinic or hospital, for example, the same system may be used by multiple patients. As such, safety control mechanisms can be incorporated into the system that are configured to address the usage by more than one patient. More specifically, the system will have software associated with the controller that requires each patient that uses the system to have a unique identifier that must be provided to the controller by some mechanism. In one embodiment, the unique identifier can be a password, a barcode, a keyfob, or any other known unique identifier that can be used such that the system can identify each individual patient. As such, it is understood that the system can have any type of input mechanism to allow for input of that unique identifier depending on the type of identifier. After entry of the unique identifier, the controller operates in a fashion similar to that described above for general operation, except that the parameters and tracking information are patient specific. That is, the control software tracks and stores the date and time of use indexed for each patient and then compares the date and time of Patient A's current use to the date and time of Patient A's prior use. As in general operation, if the elapsed time since Patient A's prior use is less than the predetermined period, the controller will not activate the system, etc. Although the present invention has been described with reference to preferred embodiments, persons skilled in the art will recognize that changes may be made in form and detail without departing from the spirit and scope of the invention.
11857802
DETAILED DESCRIPTION Intra-oral devices for protecting head and neck tissues during radiation treatment, and method(s) for using such various embodiments of such devices, are described herein.FIGS.1-10illustrate various embodiments of the entirety of the intra-oral device with the remaining figures showing various improved aspects and features of the device. FIG.1shows, during fitting of a patient for use, a partial cross-sectional view of an embodiment for an intra-oral device for protecting oral tissues during radiation treatment, showing upper and lower dental arch members, a tongue-deviating paddle, and showing a midline rod in location for moving the tongue-deviating paddle for suitable tongue position adjustment, whether by in/out, up/down, or left/right movement of the tongue-deviating paddle, as further described herein. InFIG.1, an intra-oral device100is shown being fitted to a patient P for movement of the patient's tongue T, which may be envisaged from the illustration to be, in an embodiment, to the patient's right. FIG.2shows upper and lower dental arch members, a left tongue-deviating paddle, and a midline rod in location for moving the left tongue-deviating paddle for suitable tongue position adjustment, whether by in/out, up/down, or left/right movement of the left tongue-deviating paddle, and the use of a posterior stabilizing rod, all as further described herein below. InFIG.2, a substantially mirror image configuration to intra-oral device100just noted above is provided in the form of an intra-oral device200; the intra-oral device200is configured for movement of a patient P's tongue T to the left. FIG.3is a rearward perspective view of an embodiment for an intra-oral device, showing upper and lower dental arch members, a tongue-depressing paddle, and showing a midline rod in location for moving the tongue-depressing paddle for suitable tongue position adjustment, whether by in/out, up/down, clockwise/counterclockwise rolling movement, or left/right movement of the tongue-depressing paddle, and the use of first and second posterior stabilizing rods, each of which may, in an embodiment, be moved in/out, up/down, or left/right, or rolled for suitable positioning of the tongue-depressing paddle. InFIG.3, an intra-oral device300is depicted which is configured for guiding movement of a patient P's tongue T downward. FIG.4is an exploded perspective view of an embodiment for an intra-oral device ofFIG.3, showing upper and lower dental arch members which are joined to form a dental arch assembly, a tongue-depressing paddle, and showing a midline rod in location for assembly through an adjustable guide portion (that is, disposed so as to tie parts together in a manner that allows motion between the dental arch assembly and the tongue-depressing paddle) and attachment of the midline rod to the tongue-depressing paddle for suitable tongue position adjustment, whether by in/out, up/down, or left/right movement of the tongue-depressing paddle, and the provision of first and second posterior stabilizing rods in location for assembly in each case through an adjustable guide portion and attachment to a left side and a right side of the tongue-depressing paddle, respectively. 
FIG.5is a partially assembled exploded perspective view of an embodiment for an intra-oral device, similar to that just shown inFIG.4, showing upper and lower dental arch members which are joined to form a dental arch assembly, and now showing a tongue-depressing paddle in an initial position with first and second posterior stabilizing rods in location assembled through an adjustable guide portion and attached to a left side and a right side of the tongue-depressing paddle, respectively, and also showing a midline rod yet to be inserted through an adjustable guide portion and affixed to a tongue-depressing paddle for suitable tongue position adjustment in a patient. FIG.6is a perspective view taken looking down and from the rear at an embodiment of a fully assembled intra-oral device, similar to that just shown inFIGS.4and5, showing upper and lower dental arch members which are joined to form a dental arch assembly, and now showing a tongue-depressing paddle in an initial position with first and second posterior stabilizing rods in location assembled each through an adjustable guide portion and attached to a left side and to a right side of a tongue-depressing paddle, respectively, and also showing a midline rod inserted through an adjustable guide portion and affixed to a tongue-depressing paddle for suitable tongue position adjustment in a patient. FIG.7is a side view taken looking from the rear at an embodiment of a fully assembled intra-oral device, similar to that just shown inFIGS.4,5, and6, here showing upper and lower dental arch members which are joined to form a dental arch assembly, and now showing a tongue-depressing paddle in an initial position with first and second posterior stabilizing rods in location assembled each through an adjustable guide portion and attached to a left side and to a right side of a tongue-depressing paddle, respectively, and also showing a midline rod inserted through an adjustable guide portion and affixed to a tongue-depressing paddle for suitable tongue position adjustment in a patient, which in this view shows the posterior stabilizing rods in a dihedral configuration wherein they slope downwardly from their respective adjustable guide portions at the dental arch assembly to their respective attachment points to the tongue-depressing paddle. FIG.8is a partially exploded perspective view of an embodiment for an intra-oral device, showing major components for attachment of a left tongue-displacing paddle to a dental arch assembly, illustrating the use of a first or right side posterior stabilizing rod in location assembled through an adjustable guide portion and attached to the right side of the left tongue-displacing paddle, and also showing a midline rod attached to a front rod receiving receptacle in the left tongue-displacing paddle, and configured for insertion of the midline rod through an adjustable guide portion in the dental arch assembly. 
FIG.9is a perspective view of an embodiment for an intra-oral device, showing major components used in the attachment of a left tongue-displacing paddle to a dental arch assembly, illustrating the use of first or right side posterior stabilizing rod in location assembled through an adjustable guide portion and attached to the right side of the left tongue-displacing paddle, and also showing a midline rod attached to a front rod receiving receptacle in the left tongue-displacing paddle, and wherein the midline rod has been inserted through an adjustable guide portion in the dental arch assembly, and also noting a location in broken lines where the midline rod and posterior stabilizing rod may be shortened as desirable for convenient repeated use and/or for storage of the intra-oral device. FIG.10is a perspective view of an embodiment for an intra-oral device similar to that just shown inFIG.9, looking down at the intra-oral device, now showing major components used in the attachment of a left tongue-displacing paddle to a dental arch assembly, illustrating the use of first or right side posterior stabilizing rod in location assembled through an adjustable guide portion and attached to the right side of the left tongue-displacing paddle, and also showing a midline rod attached to a front rod receiving receptacle in the left tongue-displacing paddle, and wherein the midline rod has been inserted through an adjustable guide portion in the dental arch assembly, and also illustrating that for a given patient, the midline rod and posterior stabilizing rod each can be shortened for convenient repeated use and/or for storage of the intra-oral device. FIG.11is a perspective view of an embodiment for an adjustable guide portion that may be used in an embodiment for an intra-oral device, a ball-type joint with range of motion to a limit of the freedom of movement of the joint (here the outer broken line circle), as, for example, may be provided in a spherical bearing within a housing and, in an embodiment further including a sleeve with fastener for securing a rod passing through it which provides freedom to alter the selected location for a rod as regards translation along the rod's longitudinal axis (i.e. in-out movement along the rod), and further illustrating a rod located in the adjustable guide (either during an adjustment or fitting phase, or after being secured, longitudinally) may be adjusted in a pitch axis motion (tongue deviating paddle moves up/down), or along a yaw axis motion (tongue deviating paddle moves left/right), or along a roll axis motion (top and bottom of the rod, and of the tongue deviating paddle, move in opposite directions as they are rolled). Such intra-oral devices100, or200, or300, or similar embodiments using the teachings hereof, may be useful in the reduction of damage to non-cancerous tissues of an oral cancer patient during radiation treatment. The intra-oral devices100or200provide structures such as tongue deviation paddle32R (for movement of a patient's tongue to the right), or tongue deviation paddle32L (for movement of a patient's tongue to the left) either of which can move a patient's tongue out of the way of a radiation beam during treatment. The intra-oral device300may include a tongue-depressing paddle40which may be used to depress a patient P's tongue T out of the way of a radiation beam during treatment. 
Structures such as tongue-deviation paddles32L or32R (which generally may be referenced herein without regard to “handedness” as tongue-deviation paddle32) or a tongue-depressing paddle40, may also hold a patient's tongue T or other adjacent tissues steady in a repeatable position so that a multi-dose radiation beam can be better targeted. Use of such intra-oral devices minimizes exposure and resulting damage from radiation to adjacent non-cancerous tissues. In various embodiments, and in methods of use thereof, intra-oral devices100, or200, or300, or other embodiments and configurations described herein, may be configured to protect a patient's healthy tongue from the negative effects of radiation. In other embodiments, and methods of use thereof, intra-oral devices100, or200, or300, or other embodiments and configurations made possible by the descriptions herein, may be configured to stabilize a cancerous lesion on a patient's tongue so that a radiation beam will have a relatively fixed, stable target volume during radiation treatment. As just mentioned above, one component useful in an embodiment of intra-oral devices100or200which provides tongue-deviating functionality is a tongue-deviating paddle32R or32L. Such a tongue-deviating paddle32R or32L may be disposed in a roughly vertical configuration, such as depicted inFIGS.1and2. However, note that the tongue-deviating paddles need not be oriented roughly vertically, and may be rotated to any desired angle. As noted inFIG.1, and elsewhere herein, a tongue-deviating paddle noted with reference numeral32R may be configured for urging a tongue to a patient's right. For example, as noted inFIG.2and elsewhere herein, a tongue-deviating paddle noted with reference numeral32L may be configured for urging a tongue to a patient's left. In an embodiment, one of the uses of a tongue-deviating paddle, whether32R or32L, is to move a patient's tongue away from a side of the patient's mouth that has a cancer to be treated by radiation. Various embodiments for an intra-oral device100or200may be configured such that the position of a tongue-deviating paddle32R or32L may be adjusted to allow for customizing the device100or200to the particular size and shape of the mouth and tongue of a particular patient. Additionally, in an embodiment of an intra-oral device100or200, a tongue-deviating paddle32R or32L may alternately be disposed substantially horizontally in order to depress, raise, or otherwise stabilize the patient's tongue. In such manner, the positioning of tongue-deviating paddle32R or32L may optionally also include support of a tongue toward one side or another of a patient's mouth. Thus, in an embodiment for an intra-oral device100or200, a tongue-deviating paddle32R or32L may be disposed substantially horizontally, to upwardly lift a tongue, or to downwardly depress a tongue, in order to hold a patient's tongue away from a cancerous zone in the patient's mouth, so as to minimize or prevent exposure of a patient's tongue or other tissues to radiation. Intra-oral devices100,200,300, or alternate embodiments according to the teachings hereof, may be provided in, or assembled from kits400as noted inFIG.14, having components of various predetermined sizes. For example, small sized components, with small upper10and lower12dental arch members in a dental arch assembly60[FIG.8], with small tongue-deviating paddles32L and32R and a small tongue-depressing paddle40may be provided. 
In another embodiment, medium sized components may be provided, with medium sized upper10and lower12dental arch members in a dental arch assembly60, and medium sized tongue-deviating and/or medium size tongue-depressing paddles. In other embodiments, large sized components may be provided, having large upper10and lower12dental arch members in a dental arch assembly60, and large tongue-deviating32and/or tongue-depressing paddles40, all of which may be adapted for different sizes and shapes of mouths encountered in various patients P to be treated. Yet further, embodiments may be configured in a mix-match combination, mixing large, medium, or small components, to accommodate unusual mouth sizes or treatment environments as may be encountered in various patients. As depicted inFIG.14, a kit400for fabrication of an intra-oral device100, or200, or300may include various components, and selected components may be assembled into an embodiment having primarily tongue-deviating functionality, or into an embodiment having primarily tongue-depressing functionality. Attention is directed toFIG.3, which shows an intra-oral device300having a tongue-depressing function, using a tongue-depressing paddle40. The purpose of the tongue-depressing paddle40is to move a patient's tongue down, for example during treatment of a maxillary cancer, or to stabilize a patient's tongue in a secure position, for example during treatment of a mandibular or tongue cancer. The intra-oral device300may be configured such that the position of a tongue-depressing paddle40may be adjusted to allow for customizing to the particular size and shape of the mouth, and size and shape of a tongue, of a particular patient. In various embodiments, an upper dental arch member10(or an improved upper dental arch110as shown inFIGS.20A and20B) may be separately provided. In a kit400(e.g., seeFIG.14), a selection of upper dental arch members10may be provided in preselected sizes, having a configuration complementary in size and shape to that of maxillary dental arch dimensions found in a selected group of anticipated patients. In an embodiment, an upper dental arch member10may be provided in a generally U-shaped (e.g., horseshoe shaped) configuration. An upper dental arch member10may be provided in various sizes, such as small, medium, large, or other sizes. As may be appreciated fromFIGS.2,3,4and13A or13C, in an embodiment, an upper dental arch member10may have an upper side10U. An upwardly directed upper receiving trough11may be disposed on or in the upper side10U. As seen inFIG.13A, the upper receiving trough11may be adapted to receive a fill-in material11F, which fill-in material11F may be molded to customize the fit of the upper dental arch member10to an individual patient's maxillary teeth50(seeFIG.1) or edentulous maxillary arch. As seen inFIG.13C, the upper receiving trough is shallower along an inner portion of the arch tray so as to form a wedge-shaped profile. A narrower (lower profile) inner portion of the stent (both the arch tray reduction and the insert profile reduction) assists with inserting the stent into the patient's mouth when the mouth opening is limited due to trismus, surgery or other reason. In a preferred embodiment, the inner portion of the insert is 160 mil lower in profile. The insert portion can also be thinner on the outer portion of the stent (by some 100 mil) to create a lower profile. The profile can be wedge-shaped, or stepped, or beveled so long as the inner wall portion is shallower than the outer portion. 
This, combined with the inner portion profile reduction, makes the stent easier to insert and form in a patient's mouth. Another key benefit of the different insert shapes is that the jaw opening as a result of the different profile thicknesses will mean that the medium stent (or larger of the two) opens the mouth approximately 2 cm, while the smaller design opens the jaw 1.5-1.7 cm. This is an additional feature. The upper receiving trough may be variously adapted to receive a moldable compound for fabrication into a bite pad for secure receipt of the occlusal surface of a patient's maxillary teeth or edentulous arch. Optionally, as seen inFIG.13A, in order to assist in the retention of the fill-in material11F in the upper receiving trough11, an inwardly directed wedge111and/or an outwardly shaped wedge110may be provided, such as by way of post molding machining of upper dental arch10, or by using multipart fabrication techniques. Alternately, the fill-in material11F may be retained as by using an array of keyways or apertures formed through the base or sidewalls of the trough11as shown in the embodiment ofFIGS.20A and20B. In various embodiments, a lower dental arch member12(or an improved lower dental arch112as shown inFIGS.20A and20B) may be provided. In a kit400(seeFIG.14), a selection of various sizes for a lower dental arch member12may be provided in a configuration complementary in size and shape to that of mandibular dental arch dimensions expected to be found in an anticipated patient population. In an embodiment, a lower dental arch member12may be provided in a generally U-shaped (e.g., horseshoe shaped) configuration. A lower dental arch member12may be provided in various sizes, such as small, medium, large, or other sizes. As may be appreciated fromFIGS.1,7, and13B, in an embodiment, a lower dental arch member12may have a lower side12L on or in which a downwardly directed lower receiving trough13is provided. The downwardly directed receiving trough13may be adapted to receive a fill-in material12F, which fill-in material12F may be molded to customize the fit of the lower dental arch member12to an individual patient's mandibular teeth52(seeFIG.1) or edentulous mandibular arch. As seen inFIG.13D, the lower receiving trough is shallower along an inner portion of the arch tray so as to form a wedge-shaped profile. A narrower (lower profile) inner portion of the stent (both the arch tray reduction and the insert profile reduction) assists with inserting the stent into the patient's mouth when mouth opening is limited due to trismus, surgery or other reason. In a preferred embodiment, the inner portion of the insert is 160 mil lower in profile. The insert portion can also be narrower on the outer portion of the stent (by some 100 mil) to create a lower profile. This, combined with the inner portion profile reduction, makes the stent easier to insert and form in a patient's mouth. Another key benefit of the different insert shapes is that the jaw opening as a result of the different profile thicknesses will mean that the medium stent (or larger of the two) opens the mouth approximately 2 cm, while the smaller design opens the jaw 1.5-1.7 cm. This is an additional feature. The advantage of the insert extending to sit on the walls of the arch tray (as withFIG.13C) is to accommodate a wide range of jaw sizes. 
Without this design change, the insert that seats inside the walls of the arch tray creates a greater likelihood that a smaller or larger mouth might have teeth rest on the arch tray lip/wall rather than the insert material. The lower receiving trough13may be variously adapted to receive a moldable compound for fabrication into a bite pad for secure receipt of the occlusal surface of a patient's mandibular teeth or edentulous arch. Optionally, as seen inFIG.13B, in order to assist in the retention of the fill-in material12F in the lower receiving trough13, an inwardly directed wedge121and/or an outwardly shaped wedge120may be provided, such as by way of post molding machining of lower dental arch12, or by using multipart fabrication techniques. Alternately, the fill-in material12F may be retained as by using an array of keyways or apertures formed through the base or sidewalls of the trough13as shown in the embodiment ofFIGS.21A and21B. In various embodiments, the upper dental arch member10and the lower dental arch member12are joined together to provide a dental arch assembly60. As seen inFIG.2, in an embodiment, the upper dental arch member10may be connected to the lower dental arch member12by an anterior strut26. In an embodiment, posterior struts14and16may connect the upper dental arch member10and lower dental arch member12. As noted inFIG.2, in an embodiment, upper dental arch member10may include a first end10A and a second end10B. In an embodiment, the lower dental arch member12may include a third end12C and a fourth end12D. The upper dental arch member10and the lower dental arch member12may be joined to each other at, near, or adjacent their respective posterior aspect (that is, the open end of their "U" shape). On one side, at, adjacent, or near first end10A and third end12C, upper member10and lower member12may be joined together. On an opposing side, at, adjacent, or near second end10B and fourth end12D, upper member10and lower member12may be joined together. In an embodiment, as seen inFIGS.2-9, struts14and16may allow anterior-posterior placement at selected locations between the upper dental arch member10and the lower dental arch member12. Thus, the upper dental arch member10and the lower dental arch member12may be moved forward F or rearward R with respect to each other as noted by reference arrows inFIG.1. In an embodiment, provision of adjustable struts14and16(for example, sliding or hinged components for attachment to one or both of the upper dental arch member10and lower dental arch member12) may allow adjustment, when fabricating a dental arch assembly, with regard to the amount of interincisal opening. As seen inFIGS.2and3, in an embodiment, the struts14and16may include therein an adjustable guide G, which may be provided in the form of a spherical bearing or ball joint19and21, respectively. Such ball joints19and21may have therein a through joint aperture such as a slot or hole defined by internal sidewall18or20, respectively, which allows a tongue-deviating paddle32or tongue-depressing paddle40to be adjusted in a medial and/or lateral direction (that is, in a front to back or in a side to side fashion), and, in various embodiments, in up and down directions as well. In an embodiment, at time of fabrication the struts14and16may be adjustable so as to allow the fabricator to conform the upper dental arch member10and lower dental arch member12to a patient's jaw and/or tongue shapes, or treatment objectives. 
In such embodiment, at time of fabrication, the struts14and/or16may be moved forward or backward, so as to configure the upper dental arch member at a suitable location relative to the lower dental arch member. Attention is now directed toFIG.3, where posterior stabilizing rods22and24are shown. Rods22and24provide a structural connection, for example between dental arch assembly60and tongue-deviating paddle32or tongue-depressing paddle40. Rods22and24may be sized and shaped to be inserted through the through joint aperture such as slots or holes defined by sidewalls18and/or20in the ball joints19and/or21of struts14and/or16. As seen inFIG.3, posterior stabilizing rods22and24may be utilized to locate and secure a tongue-depressing paddle40. Alternately, as depicted inFIG.2, a posterior stabilizing rod such as rod22may be utilized to locate and secure a tongue-deviating paddle32. As seen inFIG.2, at the anterior aspect, that is, at the front38of the device, an anterior strut26may be attached to join the upper member10and the lower member12in such a way that the anterior strut26may be provided with a selected height26H between a lower side26L placed at lower dental arch member12, and an upper side26U placed at upper dental arch member10(seeFIG.14), and thus adjustment may be tolerated with respect to differences in the interincisal distance in various patients. An anterior strut26may also provide a housing25for a rotating spherical bearing or ball joint27that may have running through it a guide hole or slot defined by interior sidewalls28. The anterior strut26may be configured to support and serve as an attachment point as regards the anterior/posterior location of a tongue-deviating paddle32or a tongue-depressing paddle40. The anterior strut26may also be configured to support and serve as an X, Y and Z axis placement locator for a tongue-deviating paddle32or a tongue-depressing paddle40. For example, seeFIG.11, wherein a range of motion limit R for an exemplary adjustable guide G such as ball joint27in strut26is illustrated (functionality may be similar for adjustable guides G in struts14and16). In an embodiment, a connector portion CP is sized and shaped for adjustable engagement with the adjustable guide G. In an embodiment, such adjustable guides G may allow adjustment along one or more of (a) a pitch axis80, (b) roll axis82, (c) yaw axis84, and (d) a linear axis86. As shown inFIG.2, a midline rod30can be sized and shaped to be complementary to a slot or hole defined by sidewalls28in ball joint27of anterior strut26, to connect, locate, and secure a tongue deviating paddle32. Likewise, as seen inFIG.3, a midline rod30may be provided sized and shaped complementary to through joint aperture slot or hole defined by sidewalls28in ball joint27of anterior strut26, to connect, locate, and secure a tongue-depressing paddle40. In an embodiment, a tongue deviating paddle32may be provided in generally oval or tear-drop shaped configuration. However, any convenient configuration may be utilized, and the device shall in no way be considered limited in structure and use to such shapes as may be suggested for an embodiment. In an embodiment, a tongue-deviating paddle32, or a tongue-depressing paddle40, may be provided with a midline rod30A that will fit through the anterior strut26and protrude outward from the front of the dental arch assembly60for control during the fitting and placement stage. 
As seen inFIG.2, a tongue-deviating paddle32may have a working surface41that at least in part has a concave surface toward a patient's tongue T (seeFIG.1; not shown inFIG.2). The tongue-deviating paddle may also have a convex surface43on a non-working side, that is, a side away from a patient's tongue. Affixed to, or provided as a part of a tongue-deviating paddle32, a mount34may be provided for securing thereto a posterior stabilizing rod22. Such posterior stabilizing rod may, in an embodiment, be sized and shaped to fit through the appropriate guides in the form of through joint aperture slots or holes defined by sidewalls18(alternately, guides defined in the form of through joint aperture slots or holes defined by sidewalls20) in one of one or more posterior struts, which are here shown as struts14and16, in order to locate and secure tongue-deviating paddle32, so as to hold the tongue toward the contralateral (opposite) side. The tongue-deviating paddles32may take different forms and sizes (e.g., small, medium, or large) and be configured to deviate the tongue to the right or to the left of a patient, depending on the side of the patient's mouth where the cancer that requires treatment is located. A tongue-deviating paddle32may be provided in a generally oval shaped or tear-drop-shaped tongue protection element PE having a mount34configured to receive a first end221or241of one of the posterior stabilizing rods22or24, respectively. In an embodiment, as may be appreciated by reference toFIG.15A, mount34may be provided in the form of a seat formed in the protective element PE (e.g. tongue-deviating paddle32) containing a spherical or ball type joint wherein ball34B has a rod-receiving partial aperture defined by interior sidewall34C and interior end wall34E. In an embodiment, a protective element PE such as a tongue-deviating paddle32may further include a housing36that is sized and shaped to receive a midline rod30A. In an embodiment, the housing36may be defined by interior sidewalls36C and by an end wall36E, as noted inFIG.14. In an embodiment, a housing36may be mounted at or near an anterior end36A of a tongue-deviating paddle32, and configured to receive a first end30A1of midline rod30A. As seen inFIGS.3and7, a tongue-depressing paddle40may be provided with a tongue protection element41. Such tongue protection element41may be provided in a generally oval shape, or with a rounded triangular shape, or with a trapezoidal shape, as suitable in particular circumstances. In an embodiment, joints42and/or44may be provided, and mounted on a first side43of the paddle40. The joints42and/or44may be provided as ball mount joints, in that balls42B and44B, respectively, are provided with spherical freedom of movement in joints42and/or44. In an embodiment, as noted in a cross-sectional view provided inFIGS.15B and15C, balls42B and44B may be provided with rod-receiving partial apertures defined by interior sidewalls42C and44C, and interior end walls42E and44E, respectively. The rod-receiving partial apertures defined by the just mentioned features are configured to receive and seat the posterior stabilizing rods22and24, and more particularly a first end221or241of such rods, as noted in broken lines inFIGS.15B and15C. In an embodiment, the tongue-depressing paddle40may be provided with a housing46configured to receive the midline rod30, which in an embodiment may be of the same configuration as described above as regards housing36. 
Embodiments of the intra-oral device200will now be further described with reference toFIGS.2,3and4.FIG.2illustrates an embodiment of an intra-oral device200configured with a tongue-deviating paddle32. However, it must be understood that the tongue-deviating paddle32components and connecting parts as shown configured inFIG.2are interchangeable, and thus may be replaced by similar elements of different sizes, or of either right hand or of left hand configuration, and such alternate configuration will provide the tissue positioning functionality as described herein. As assembled, the intra-oral device200includes two elongate, essentially U-shaped members, namely an upper member10and a lower member12. The upper member10and the lower member12are shown coupled together by struts15and16. As seen inFIG.12, in another embodiment, an upper member10may be coupled to a lower member12using a moveable joint417. Alternatively, upper member10and lower member12may be directly and fixedly attached together, as generally shown inFIGS.2,3,6and8, for example. FIG.13Ais a cross section of an upper dental arch member with molded fill-in material, taken as at line13A-13A ofFIG.4, which configuration may be provided in an embodiment for an intra-oral device, showing an upper receiving trough (which may include slots or holes as shown in the improved dental arch embodiment shown inFIGS.20A and20B) in the upper dental arch member which may be filled with a fill-in material molded to fit a particular patient's teeth or edentulous arch(es).FIG.13Aillustrates a cross-section taken at line13A-13A ofFIG.4of an upper dental arch member10. As shown inFIG.13A, the upper dental arch member10may be provided with an upper receiving trough11, which in an embodiment may generally be U-shaped. As noted above, the upper receiving trough11may be adapted to be filled with fill-in material11F. The fill-in material11F may be provided in the form of a moldable plastic or similar moldable material that may be cured once molded. Molded material may be provided responsive to the size and shape10M of a patient's maxillary teeth50, or edentulous arch, as suggested by illustrations provided inFIGS.1,4, and13A. FIG.13Bis a cross section of a lower dental arch member, taken as at line13B-13B ofFIG.4, which configuration may be provided in an embodiment for an intra-oral device, showing a lower receiving trough (which may include slots or holes as shown in the improved dental arch embodiment shown inFIGS.21A and21B) in the lower dental arch member which may be filled with a fill-in material molded to fit a particular patient's teeth or edentulous arch(es).FIG.13Billustrates a cross-section taken at line13B-13B ofFIG.4of a lower dental arch member12. As shown inFIG.13B, the lower dental arch member12may be provided with a lower receiving trough13, which in an embodiment may generally be U-shaped. As noted above, the lower receiving trough13may be adapted to be filled with fill-in material12F. The fill-in material12F may be provided in the form of a moldable plastic or similar moldable material that may be cured once molded. A mold may be provided responsive to the size and shape of a patient's mandibular teeth52, or edentulous arch, as suggested by illustrations provided inFIGS.1,4, and13B. 
When assembled, an intra-oral device200may include an upper dental arch member10having upper molded surface10M and a lower dental arch member12with lower molded surface12M that, as joined together, such as by struts16, provide a dental arch assembly60which acts as an intermaxillary scaffold. The dental arch assembly60thus holds a patient's maxillary teeth/arch50and mandibular teeth/arch52(see, for example, the position of patient's teeth50and52and angle alpha α inFIG.1) apart in a repeatable position in three-dimensional space, at a selected angle alpha α of opening, and, as may be possible with suitable patient tolerance consistent with medical objectives, at suitable forward or rearward positioning of the mandibular teeth/arch52in relation to the position of the maxillary teeth/arch50. In an embodiment, a dental arch assembly60should be considered to be an intra-oral device, even without the use of a protective element such as tongue deviating paddles32or tongue-depressing paddles40or the like. In any event, when a desired or prescribed opening position of a patient's mouth is achieved with the intermaxillary dental arch assembly60portion of the intra-oral device200, the device then additionally is used to provide supports for a protective element such as a "tongue paddle"—that is a tongue-deviating paddle32R,32L, or tongue-depressing paddle40—and thus a selected paddle then displaces a patient's tongue in a prescribed direction and position. After a dental arch assembly60is constructed, a protective element PE including protective portions PP (seeFIG.8) such as tongue paddle32R or32L may be inserted into a patient's mouth and loosely attached by way of an adjustable guide to the dental arch assembly60. Using a posterior stabilizing rod22as a stabilizing device, rotating the tongue-deviating paddle32L or32R using midline rod30A and the adjustable guide27as a fulcrum point, a patient's tongue may be positioned to a desired location. In an embodiment, once the patient's tongue T is in a suitable location, the anterior strut26and posterior strut14(or16, as applicable) may be fixed to midline rod30A and posterior stabilizing rod22. In an embodiment, a locking mechanism70(e.g. compression fitting) may be utilized to fix in place any one or more of posterior stabilizing rods22or24, or midline rods30, or30A (each of which is more fully described elsewhere herein). In an embodiment, fixation into a secure working position may be accomplished using a bonding agent, such as a curable bonding agent known in the field, such as light-cured acrylic, or by other methods such as by fusing the components with cyanoacrylate compositions or similar bonding agents. In any event, the objective is to assemble an intra-oral device, for example device100,200, or300, into a secure configuration, and to lock the protective element such as a tongue-deviating paddle (e.g.,32L or32R) or a tongue-depressing paddle40into a final, secure position. In various embodiments, such locking mechanisms may be irreversible (e.g., cyanoacrylate fusion) or reversible (mechanical locking mechanism70). One example of an improved mechanical locking mechanism is shown inFIGS.16A-16G. The improved locking mechanism uses an anterior strut126having a ball joint134at one end and a threaded rod122extending therefrom. The threaded rod may have a length that is partially threaded as with the embodiment shown inFIGS.16A-16E, or may be fully threaded along its length as with the rod122shown inFIGS.19A and19B. 
The threaded rod122may have four slots130spaced at 90 degrees about the barrel of the rod and formed at least partially along the length of the body of the rod, and such slots130allow the rod122to compress when fitted within a complementary gimbal to effect a greater friction fit within the gimbal as described further below. Referring back toFIG.2, the intra-oral device200may include an oval-shaped or tear-drop shaped tongue protective element, e.g., tongue-deviating paddle32. In an embodiment, a paddle (e.g.32or40) may be positioned in the middle of the device200. In an embodiment, a tongue deviating paddle32may be disposed within the device200such that certain freedom of movement (adjustment ability) of the tongue deviating paddle32within the device200is ensured. Ball joints19or21which are included in the struts14and16, respectively, and similar structures34in tongue deviating paddle32, or42B and44B in the tongue-depressing paddle40, may be configured to allow a desired range of motion of a protective element (e.g. tongue paddles32R,32L, or40) relative to the dental arch assembly60. Such ball joints may be secured to their respective struts or paddles. In an embodiment, such assembly and fixation goal may be accomplished using a bonding agent, such as a curable bonding compound (e.g., a cyanoacrylate composition) once a tongue paddle (e.g., paddle32L,32R, or40) is positioned in a desired location. For example, the tongue-deviating paddle32may be coupled through the mount34with the rod22disposed through the strut14. The front end32A of the tongue-deviating paddle32may be connected with the midline rod30A disposed through the anterior strut26. The described structure allows for movement at the front end32A of tongue-deviating paddle32when the midline rod30A is moved. The excess part of a midline rod30A may be removed (e.g., cut off or snapped off at broken line51) once the tongue-deviating paddle (e.g.32) is secured in a desired position, to provide a new second end30A3of midline rod30A. Similarly, in an intra-oral device300using a tongue-depressing paddle40, any excess part of a midline rod30may be removed (e.g., cut off or snapped off at broken line51) once the tongue-depressing paddle (e.g.40) is secured in a desired position, to provide a new second end303of midline rod30. As shown inFIG.2, and further described below, the tongue-deviating paddle32may be disposed generally vertically relative to the intermaxillary supporting dental arch assembly60. While in thisFIG.2the tongue-deviating paddle32illustrated is configured to provide a tongue deviation to the left (relative to the patient), a "right hand" version of a tongue-deviating paddle32R may be configured and mounted similarly to that of the "left hand" version32L. In an embodiment, the upper dental arch member10and/or lower dental arch member12may be configured to provide independent support for a protective element such as tongue-deviating paddle32R or32L or tongue-depressing paddle40. For example, a tongue-deviating paddle32may be attached to a middle section (somewhere about the center of the U-shape) of the upper dental arch member10or to a middle section (somewhere about the center of the U-shape) of the lower dental arch member12. Improved versions of the tongue-deviating paddle are shown inFIGS.17A-17D(132R for movement of the tongue to the right) and18A-18D (132L for movement of the tongue to the left). Such a tongue-deviating paddle132R or132L may be disposed in a roughly vertical configuration, such as depicted inFIGS.1and2. 
However, note that the tongue-deviating paddles need not be oriented roughly vertically, and may be rotated to any desired angle. As noted inFIG.1, and elsewhere herein, a tongue-deviating paddle noted with reference numeral132R may be configured for urging a tongue to a patient's right. For example, as noted inFIG.2and elsewhere herein, a tongue-deviating paddle noted with reference numeral132L may be configured for urging a tongue to a patient's left. In an embodiment, one of the uses of a tongue-deviating paddle, whether132R or132L, is to move a patient's tongue away from a side of the patient's mouth that has a cancer to be treated by radiation. Various embodiments for an intra-oral device100or200may be configured such that the position of a tongue-deviating paddle132R or132L may be adjusted to allow for customizing the device100or200to the particular size and shape of the mouth and tongue of a particular patient. As seen inFIGS.17A through17D, a tongue-deviating paddle132R may be provided with a tongue protection element141. Such tongue protection element141may be provided in a generally oval shape, a rounded triangular shape, or a trapezoidal shape, as suitable in particular circumstances. In an embodiment, joints or concave depressions142and/or144may be provided, and mounted on a first or mounting side143of the paddle132R. The joints142and/or144may be provided as concave depressions formed through the surface143of the paddle body141so that the support struts126generally, such as shown and described inFIGS.16A-16Gabove, and the ball joints134specifically, are provided with spherical freedom of movement within joints142and/or144. The joints142,144may be spaced along the length of the paddle body141so as to accommodate mouths of different sizes and depths and to give some range of motion, with spacing between centers of the depressions preferably between 0 and 2 cm. As seen inFIGS.18A through18D, and as with the right-hand version of the tongue-deviating paddle132R, a left tongue-deviating paddle132L may be provided with a tongue protection element141. Such tongue protection element141may be provided in a generally oval shape, a rounded triangular shape, or a trapezoidal shape, as suitable in particular circumstances. In an embodiment, joints or concave depressions142and/or144may be provided, and mounted on a first or mounting side143of the paddle132L. The joints142and/or144may be provided as concave depressions formed through the surface143of the paddle body141so that the support struts126generally, such as shown and described inFIGS.16A-16Gabove, and the ball joints134specifically, are provided with spherical freedom of movement within joints142and/or144. The joints142,144may be spaced along the length of the paddle body141so as to accommodate mouths of different sizes and depths and to give some range of motion, with spacing between centers of the depressions preferably between 0 and 2 cm. For both right and left tongue-deviating paddles132R and132L, a midline rod130couples to a proximal portion of the tongue protection element body141and includes a groove131formed therein that passes along the length of the rod130. This groove131inserts within a complementary groove formed within the interior surface of the midline ball joint127to better lock in the deviation angle of the paddle and prevent slippage. FIGS.19A and19Billustrate the adjustable engagement between the paddle132R and gimbal119via an improved threaded strut126.
The male-threaded rod portion122of the strut126is threaded into the female-threaded portion of gimbal119at the desired distance. FIG.19Aillustrates a small deviation of the paddle body141where the supporting strut126is threaded to a point along the rod122close to the ball-joint head134.FIG.19B, in contrast, illustrates a large deviation of the paddle body where the supporting strut126is threaded within the gimbal119to a point along rod122further away from the ball-joint head134. At the position noted inFIG.19B, the paddle body141moves the tongue further to the left side of the mouth than the position shown inFIG.19A. The gimbal119is captured between the upper and lower arches110,112[FIGS.20A-20B and21A-21B] similar to the way strut14is sandwiched between arches10and12inFIG.4. Once the strut126is hand-threaded into gimbal119the desired distance, the ball-joint end134of strut122is then snapped into the concave depression142formed in the paddle body141so that the strut122is freely and rotationally moveable within the mounting surface142. Turning again toFIG.3, an embodiment of an intra-oral device300will now be further described.FIG.3illustrates an example assembly for an intra-oral device300that includes a tongue-depressing paddle40. Similar to the tongue-deviating paddles32, the tongue-depressing paddle40may utilize a midline rod30that fits through a ball joint27of an anterior strut26. In an embodiment, a tongue-depressing paddle40may be generally rounded triangular, trapezoidal or oval in shape and may be positioned generally horizontally as shown inFIG.3. Like the tongue-deviating paddles32described above, a tongue-depressing paddle40may be loosely fitted intra-orally into the dental arch assembly60(formed by the upper dental arch member10and lower dental arch member12), and positioned using the midline rod30. Once the tongue-depressing paddle40is in a selected position, the tongue-depressing paddle40may be immobilized and thus fixed in place via midline rod30and anterior strut26. Then, the midline rod30may be shortened as desired. Also, posterior stabilizing rods22and24may be added for additional strength, and secured to the dental arch assembly60and to the tongue-depressing paddle40. Suitable locking mechanisms or curable bonding agents or the like, as mentioned elsewhere herein, may be utilized as appropriate to secure and ensure the intended service of the intra-oral device. Turning again toFIGS.1and2, in an intra-oral device100or200as set out in such drawing figures, respectively, the upper dental arch member10defines an upper plane90approximating a plane along the occlusal surfaces92of a patient's maxillary teeth50or edentulous arch. In various embodiments, as may be understood by additional reference toFIG.8, a protective element PE, including protective portion PP (e.g., tongue-deviating paddle32R or32L) and connector portion CP (e.g., midline rod30or30A), may be deployed in a configuration roughly perpendicular to the upper plane90. In various embodiments, such roughly perpendicular configuration will vary, anywhere from a precisely perpendicular orientation at ninety (90) degrees to upper plane90, up to as much of an offset as plus or minus forty five (45) degrees from a perpendicular orientation. Turning toFIG.1for orientation with respect to dental arch assembly60, and toFIGS.3,4, and7as regards an intra-oral device300, the lower dental arch member12defines a lower plane94(seeFIG.1) approximating a plane along the occlusal surfaces96of a patient's mandibular teeth52or edentulous arch.
In various embodiments, as may be understood by additional reference toFIG.4, a protective element PE, including protective portion PP (e.g., tongue-depressing paddle40) and connector portion CP (e.g., midline rod30A), may be deployed in a configuration with protective portion PP oriented roughly parallel to the lower plane94. In various embodiments, such roughly parallel configuration will vary, anywhere from a precisely parallel orientation to lower plane94, in many embodiments, up to as much of a downward or upward angle (using connector portion CP such as midline rod30A for evaluation of the angle) of plus or minus forty five (45) degrees from a parallel orientation. In an embodiment for an intra-oral device300utilizing a tongue-depressing paddle40, first posterior stabilizing rod22and second posterior stabilizing rod24may be structured in an anhedral configuration, where the rods22and24extend upward from their respective guides G at struts14and16toward tongue-depressing paddle40. In an embodiment for an intra-oral device300utilizing a tongue-depressing paddle40, first posterior stabilizing rod22and second posterior stabilizing rod24may be structured in a dihedral configuration, where the rods22and24extend downward from their respective guides G at struts14and16toward tongue-depressing paddle40. In an embodiment for an intra-oral device300utilizing a tongue-depressing paddle40, first posterior stabilizing rod22and second posterior stabilizing rod24may be structured in a neutral configuration, where the rods22and24extend substantially horizontally from their respective guides G at struts14and16toward tongue-depressing paddle40. Any of the dental arch assemblies60, and other components used in intra-oral devices100,200or300, may be customized for a particular patient. Similarly, the shape of a tongue paddle (e.g. paddles32L,32R, or40) may be adjusted (by material removal and/or by material addition) to optimize the particular shape of the device to fit a patient's tongue or their other oral tissue limitations (surgical scars, for example) for comfort and/or ideal management. In operation, when the customized device is inserted into the patient's mouth, the tongue paddle (e.g. paddles32or40) will shift the location of a patient's tongue so as to either avoid or reduce adverse effects of head and neck cancer radiation treatment, thus protecting or stabilizing the tongue tissues. The materials selected may optimally be capable of withstanding several weeks of daily high-dose radiation exposure. In an embodiment, an intra-oral device100,200or300may be manufactured of a radiation-resistant material (thus, in an embodiment, having low radiation absorption and scatter). Accordingly, an intra-oral device100,200or300may be manufactured using any suitable material, for example plastics, acrylics, carbon fibers, or other materials having properties consistent with applicable requirements, including various governmental regulations for medical treatment devices used in oral service in humans. The fill-in materials11F and13F for the upper member10and lower member12, as described above in reference toFIG.13A, may include a suitable material having a moldable property. For example, the fill-in material may be made of a customizable material such as Triad® acrylic, a polyether, or polyvinylsiloxane, or other functionally similar materials.
When the device200or300is inserted into the patient's mouth, the patient “molds” the surfaces10M on the upper member10, and12M on the lower member12, by biting into the fill-in material, which in an embodiment, may be subsequently hardened, for example either autocatalytically or via application of a bright photoactivating light. In this manner, the surfaces10M and12M replicate the occlusal surfaces of a particular patient's maxillary teeth50and mandibular teeth52or comparable edentulous arch forms. The devices200or300may also be customized, such as by addition of light-cured acrylic to add devices such as lead-lined lip bumpers, cheek bumpers near metallic crowns, and the like. FIGS.20A and20Billustrate an improved version of upper dental arch10described above. The improved upper dental arch member110may be separately provided in a kit400(e.g., seeFIG.14), whereby a selection of upper dental arch members110may be provided in preselected sizes, having a configuration complementary in size and shape to that of maxillary dental arch dimensions found in a selected group of anticipated patients. In an embodiment, an upper dental arch member110may be provided in a generally U-shaped (e.g., horseshoe shaped) configuration. An upper dental arch member110may be provided in various sizes, such as small, medium, large, or other sizes. Such arch member110is preferably formed of injection-molded polycarbonate and/or Lexan HPS1. As may be appreciated fromFIGS.2,3,4and13A, in an embodiment, an upper dental arch member110may have an upper side110U. An upwardly directed upper receiving trough111may be disposed on or in the upper side110U. As seen inFIG.13A, the upper receiving trough111may be adapted to receive a fill-in material11F, which fill-in material11F may be molded to customize the fit of the upper dental arch member110to an individual patient's maxillary teeth50(seeFIG.1) or edentulous maxillary arch. The upper receiving trough may be variously adapted to receive a moldable compound for fabrication into a bite pad for secure receipt of the occlusal surface of a patient's maxillary teeth or edentulous arch. Retention of the bite pad (e.g. element10M shown inFIG.13A) within the trough111is preferably accomplished by way of keyways or apertures formed through the trough side and/or bottom surface. The trough111of the improved upper dental arch member110includes an array of keyway apertures, such as apertures113, that pass from and through the upper side110U to the lower side110L of upper dental arch member110. A moldable compound such as described above is placed within the trough111and heated up as by soaking the assembly in hot water (e.g. around 85-95° C.) for between about 30 and 60 seconds. The assembly is then placed in a patient's mouth and the patient bites down onto the moldable compound material for 15 seconds to form teeth impressions. The pressure of the patient's bite not only forms the teeth impression10M on the top surface of the moldable compound, but also forces a portion of the moldable material through the keyway apertures113(see, e.g.FIG.13A). Depending upon the current state and viscosity of the moldable compound, the materials can completely fill the passageway113between the top and bottom surfaces of the dental arch and/or mushroom out the opposite side to help lock the moldable material in place within the trough111. The assembly is then rinsed in cold water to set the impression as well as the extruded portions115that flowed through the keyway apertures113. 
The improved upper dental arch member110is also formed with pins or pegs117that extend out the lower side110L. These pegs117are positioned on the underside of arch member110in a pattern that matches complementary structures formed on the lower dental arch member. As shown inFIG.21A, these complementary structures take the form of channels or apertures121that receive respective pins117within and lock and align the upper dental arch110to the lower dental arch112. Preferred embodiments of the upper dental arch110include a pair of pins117on each rear portion of the arch and a set of three pins arranged in an asymmetric pattern at the front of the arch. When the upper dental arch member110is locked together with the lower dental arch member112, as via pins117and apertures121, the combined intra-oral device defines annular grooves123,125into which the gimbal119and ball joint127are installed, respectively. FIGS.21A and21Billustrate an improved version of lower dental arch12described above. The improved lower dental arch member112may be separately provided in a kit400(e.g., seeFIG.14), whereby a selection of lower dental arch members112may be provided in preselected sizes, having a configuration complementary in size and shape to that of mandibular dental arch dimensions found in a selected group of anticipated patients. In an embodiment, a lower dental arch member112may be provided in a generally U-shaped (e.g., horseshoe shaped) configuration. A lower dental arch member112may be provided in various sizes, such as small, medium, large, or other sizes. As may be appreciated fromFIGS.2,3,4and13A, in an embodiment, a lower dental arch member112may have an upper side112U. An upwardly directed receiving trough111may be disposed on or in the upper side112U. As seen inFIG.13B, the receiving trough111may be adapted to receive a fill-in material12F, which fill-in material12F may be molded to customize the fit of the lower dental arch member112to an individual patient's mandibular teeth52(seeFIG.1) or edentulous mandibular arch. The lower receiving trough may be variously adapted to receive a moldable compound for fabrication into a bite pad for secure receipt of the occlusal surface of a patient's mandibular teeth or edentulous arch. Retention of the bite pad (e.g. element12M shown inFIG.13B) within the trough111is preferably accomplished by way of keyways or apertures formed through the trough side and/or bottom surface as noted above with respect to the upper dental arch bite pad. That is, the trough111of the improved lower dental arch member112includes an array of keyway apertures, such as apertures113, that pass from and through the upper side112U to the lower side112L of lower dental arch member112. A moldable compound such as described above is placed within the trough111and heated up as by soaking the assembly in warm or hot water (e.g. around 80° C.) for between about 30 and 60 seconds. The assembly is then placed in a patient's mouth and the patient bites down onto the moldable compound material for 15 seconds to form teeth impressions. The pressure of the patient's bite not only forms the teeth impression12M on the top surface of the moldable compound, but also forces a portion of the moldable material through the keyway apertures113(see, e.g.FIG.13B).
Depending upon the current state and viscosity of the moldable compound, the materials can completely fill the passageway113between the top and bottom surfaces of the dental arch and/or mushroom out the opposite side to help lock the moldable material in place within the trough111. The assembly is then rinsed in cold water to set the impression as well as the extruded portions115that flowed through the keyway apertures113. It is preferred that the upper and lower dental arches be press-fit together as by the pins117and holes121described above and the moldable materials placed in the troughs111of both the upper and lower dental arches110,112. The patient may then bite down on both arches and form the molded impressions10M and12M at the same time while also forcing a portion of the moldable material through keyways113to help lock the moldable material within the dental arch troughs111. Molded annular arches formed on the lower sides110L and112L at the back end of each arch receive a gimbal119for movement within. In the foregoing description, numerous details have been set forth in order to provide a thorough understanding of the disclosed exemplary embodiments for an intra-oral device for positioning certain oral tissue during radiation treatment. The purpose of the intra-oral devices described here is to provide a wide range of flexibility to give the end user of the device as much latitude to customize and idealize its application for the maximum benefit of the patient. However, certain of the described details may not be required in order to provide useful embodiments, or to practice selected or other disclosed embodiments. Further, the description may include, for descriptive purposes, various relative terms such as surface, at, adjacent, proximity, near, on, onto, and the like. Such usage should not be construed as limiting. Terms that are relative only to a point of reference are not meant to be interpreted as absolute limitations, but are instead included in the foregoing description to facilitate understanding of the various aspects of the disclosed embodiments. Various components are described which may be employed alternatively, yet be included in a kit or product package to enable an end user to select the optimal components for use in a particular situation. Accordingly, procedures utilizing the intra-oral device described herein, and the method(s) described herein may be utilized as multiple discrete operations, in a manner that is most helpful in a particular circumstance. However, the order of description should not be construed as to imply that such alternatives are necessarily order dependent, or that use of various components is necessarily in the alternative. Also, the reader will note that the phrase “in one embodiment” has been used repeatedly. This phrase generally does not refer to the same embodiment; however, it may. Finally, the terms “comprising”, “having” and “including” should be considered synonymous, unless the context dictates otherwise. Various aspects and embodiments described and claimed herein may be modified from those shown without materially departing from the novel teachings and advantages provided by this invention, and may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Embodiments presented herein are to be considered in all respects as illustrative and not restrictive or limiting. 
This disclosure is intended to cover the methods and apparatus described herein, including not only structural equivalents thereof but also equivalent structures. Modifications and variations are possible in light of the above teachings. Therefore, the protection afforded to this invention should be limited only by the claims set forth herein, and the legal equivalents thereof. Having described and illustrated the principles of the invention in a preferred embodiment thereof, it should be apparent that the invention can be modified in arrangement and detail without departing from such principles. I claim all modifications and variations coming within the spirit and scope of the following claims.
62,358
11857803
DETAILED DESCRIPTION OF EMBODIMENTS An aspect of some embodiments of the invention relates to radiotherapy sources carrying alpha-emitting atoms in a manner which allows desorption of daughter radionuclides with a significant probability (e.g., at least 1%), but the desorption probability is lower than 30%. With a low desorption probability, the activity on the source can be increased without changing the radon release rate and the resulting systemic alpha radiation reaching distant healthy tissue. The increase in activity on the source increases the beta radiation provided by the source, which supplements the alpha radiation in the destruction of tumor cells. FIG.1is a schematic illustration of a radiotherapy source21, in accordance with an embodiment of the present invention. Radiotherapy source21comprises a support22, which is configured for insertion into a body of a subject, and radionuclide atoms26of an alpha-emitting substance, such as radium-224, on an outer surface24of support22. It is noted that, for ease of illustration, atoms26as well as the other components of radiotherapy source21are drawn disproportionately large. In some embodiments, a coating33covers support22and atoms26, in a manner which controls a rate of release of the radionuclide atoms26and/or of daughter radionuclides of atoms26, upon radioactive decay. In some embodiments, as shown inFIG.1, in addition to coating33, an inner coating30of a thickness T1is placed on support22and the radionuclide atoms26are attached to inner coating30. It is noted, however, that not all embodiments include inner coating30and instead the radionuclide atoms26are attached directly to the source21. Likewise, some embodiments do not include coating33. Support22comprises, in some embodiments, a seed for complete implantation within a tumor of a patient, and may have any suitable shape, such as a rod or plate. Alternatively to being fully implanted, support22is only partially implanted within a patient and is part of a needle, a wire, a tip of an endoscope, a tip of a laparoscope, or any other suitable probe. In some embodiments, support22is cylindrical and has a length of at least 2 millimeters, at least 5 millimeters or even at least 10 millimeters. Optionally, support22has a length which is smaller than 70 mm, smaller than 60 mm or even smaller than 40 mm (millimeters). Support22optionally has a diameter of 0.7-1 mm, although in some cases, sources of larger or smaller diameters are used. Particularly, for treatment layouts of small spacings, support22optionally has a diameter of less than 0.7 mm, less than 0.5 mm, less than 0.4 mm or even not more than 0.3 mm. Typically, the radionuclide, the daughter radionuclide, and/or subsequent nuclei in the decay chain are alpha-emitting, in that an alpha particle is emitted upon the decay of any given nucleus. For example, the radionuclide may comprise an isotope of Radium (e.g., Ra-224 or Ra-223), which decays by alpha emission to produce a daughter isotope of Radon (e.g., Rn-220 or Rn-219), which decays by alpha emission to produce an isotope of Polonium (e.g., Po-216 or Po-215), which decays by alpha emission to produce an isotope of Lead (e.g., Pb-212 or Pb-211), as described, for example, in U.S. Pat. No. 8,894,969, which is incorporated herein by reference. Alternatively, the radionuclide comprises Actinium-225. An amount of radiation supplied by radiotherapy device21to surrounding tissue depends on various parameters of the radiotherapy device.
These include: 1) a desorption probability of daughter atoms of radionuclide atoms26upon decay, 2) a rate of release of radionuclide atoms26by diffusion, and 3) an amount of radionuclide atoms26on the source. It is noted that while the risk of an overdose of radiation for a single small tumor is low, when treating large tumors and/or multiple tumors, the treatment may include implantation of several hundred sources. Therefore, the radiation provided by the sources is adjusted to prevent administering an overdose of radiation to the patient. The amount of radionuclide atoms26in radiotherapy device21is generally given in terms of activity per centimeter length of support22. The activity is measured herein in units of microcurie per centimeter length of the source. As the radiation dose reaching most of the tumor is dominated by radionuclides that leave the source, a measure of "radon release rate" is defined herein as the product of the activity on the source and the desorption probability. For example, a source with 2 microcurie activity per centimeter length and a 40% desorption probability has a radon release rate of 0.8 microcurie per centimeter length. The radon release rate of the source is typically at least 0.5, at least 1 or even at least 2 microcurie per centimeter length. Generally, the radon release rate is not more than 4 microcurie per centimeter length. In some embodiments, however, radon release rates of more than 4 microcurie per centimeter length, more than 4.5 microcurie per centimeter length, more than 5 microcurie per centimeter length, or even more than 6 microcurie per centimeter length are used, as applicant has identified that the risks of the radionuclides reaching remote healthy tissue are lower than previously assumed. Optionally, the radon release rate is selected according to the specific type of the tumor. Specific radon release rates which may be used are described, for example, in U.S. patent application Ser. No. 17/343,786, which is titled: "Activity Levels for Diffusing Alpha-Emitter Radiation Therapy", which is incorporated herein by reference. Any suitable technique, such as any one or more of the techniques described in the aforementioned '969 patent to Kelson, may be used to couple atoms26to support22. For example, a generating source that generates a flux of the radionuclide may be placed in a vacuum near support22, such that nuclei recoiling from the generating source traverse the vacuum gap and are collected onto, or implanted in, surface24. Alternatively, the radionuclide may be electrostatically collected onto support22, by the application of a suitable negative voltage between the generating source and the support. In such embodiments, to facilitate the electrostatic collection of the radionuclide, support22may comprise an electrically-conductive metal, such as titanium. For example, support22may comprise an electrically-conducting metallic wire, needle, rod, or probe. Alternatively, support22may comprise a non-metallic needle, rod, or probe coated by an electrically-conductive metallic coating that comprises surface24. In the prior art, attempts were made to maximize the desorption probability in order to maximize tissue destruction and avoid waste of radionuclides that do not enter the tumor. In accordance with embodiments of the invention, the desorption probability is purposely set lower than the maximum achievable, in order to increase the ratio of beta radiation to alpha radiation provided by radiotherapy device21.
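The radon release rate definition above is simple enough to express directly. The following is a minimal sketch, not taken from the disclosure, that reproduces the worked example (2 microcurie per centimeter at 40% desorption giving 0.8 microcurie per centimeter) and shows how a lower desorption probability permits a higher on-source activity for the same release rate.

```python
def radon_release_rate(activity_uci_per_cm: float, desorption_probability: float) -> float:
    """Radon release rate (uCi per cm of source length): activity on the source
    multiplied by the desorption probability, as defined above."""
    return activity_uci_per_cm * desorption_probability


def activity_for_release(target_release_uci_per_cm: float, desorption_probability: float) -> float:
    """On-source activity needed to reach a target radon release rate."""
    return target_release_uci_per_cm / desorption_probability


# Worked example from the text: 2 uCi/cm with 40% desorption -> 0.8 uCi/cm released.
assert abs(radon_release_rate(2.0, 0.40) - 0.8) < 1e-9

# With a low desorption probability (e.g. 10%), the same 0.8 uCi/cm release rate
# allows a higher activity on the source (8 uCi/cm), and hence more beta radiation.
print(activity_for_release(0.8, 0.10))  # -> 8.0
```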
The desorption probability is optionally lower than 30%, lower than 25%, lower than 20%, lower than 15%, lower than 13% or even lower than 10%. On the other hand, the desorption probability is preferably not too low and is optionally greater than 2%, greater than 4%, greater than 6% or even greater than 8%. In some embodiments, the desorption probability is greater than 10%, greater than 12% or even greater than 15%. The desorption probability depends on the strength of the bond of radionuclide atoms26to support22and/or the type and thickness of coating33. In some embodiments, the reduced desorption probability is achieved by using an increased bond strength, while the coating is substantially the same as used for a high desorption probability, e.g., a thickness of less than 3 microns of a biocompatible PDMS (polydimethylsiloxane). The bond of the radionuclide atoms26to support22is generally achieved by heat treatment of the radiotherapy device21, and the strength of the bond is controllable by adjusting the temperature and/or duration of the heat treatment. In some embodiments, the temperature used is at least 50° C., at least 100° C. or even at least 200° C. above the temperature used to achieve a desorption probability of about 38-45%. Alternatively or additionally, the heat treatment is performed at a lower pressure of below 10−1millibar, below 10−2millibar, or even less than 10−3millibar, and/or the heat treatment is performed for a longer duration, for example at least 10 minutes, at least 20 minutes, at least 40 minutes or even at least an hour beyond the duration required to achieve a desorption probability of about 38-45%. Alternatively or additionally to reducing the desorption probability by altering the heat treatment, any other suitable method may be used to increase the bond strength. In some embodiments, the fixation of the radionuclides to the seed surface is performed in a noble gas environment or a vacuum environment. The fixation may be performed at any suitable pressure. The heat treatment is optionally applied for at least 10 minutes, at least 30 minutes, at least an hour, at least 3 hours or even at least 10 hours. The temperature of the heat treatment optionally depends on the pressure, the environment in which the radionuclides are fixated to the surface and the duration of the fixation process. In some embodiments, the temperature depends on the material of the seed surface. In other embodiments, the bond strength is substantially the same as used for a desorption rate of about 38-45% and the reduced desorption probability is achieved by altering coating33in order to reduce the desorption probability to the desired level. For example, in some embodiments, coating33comprises a layer of a polymer which is highly permeable to the daughter radionuclide (e.g., Radon), such as a biocompatible PDMS (polydimethylsiloxane), so that the daughter radionuclide may diffuse through coating33. For example, the diffusion coefficient of the daughter radionuclide in the polymer of coating33may be at least 10−11cm2/sec. In these embodiments, the thickness T0 of coating33is optionally greater than 20 microns, greater than 50 microns, greater than 100 microns, greater than 200 microns, or even greater than 300 microns.
Alternatively or additionally to PDMS (polydimethylsiloxane), coating33comprises any other suitable material which is permeable to the daughter radionuclide, such as polypropylene, polycarbonate, polyethylene terephthalate, poly(methyl methacrylate), and/or polysulfone, that coats surface24and thus covers atoms26. In other embodiments, coating33comprises one or more layers of materials which are considerably less permeable to radon than PDMS. In some of these embodiments, coating33is a low-diffusion polymer (e.g., parylene-n) having a thickness of at least 0.2 microns, at least 0.5 microns, at least 1 micron or even at least 2 microns. It is noted, however, that the coating is not too thick, in order to still allow the desired rate of desorption of Radon, such that the coating optionally has a thickness of less than 100 microns, less than 20 microns, less than 5 microns, or even less than 3 microns. In some embodiments, the coating has a thickness of less than 2 microns, less than 1 micron or even less than 0.75 microns. Low-diffusion polymers are polymers in which Radon diffuses to a depth of less than 5 microns. In some embodiments, polymers with even lower diffusion depths are used, for example, less than 2 microns, less than 1 micron or even less than 0.5 microns. Other embodiments of low permeability coatings include an atomic layer deposition (e.g., by Al2O3). The atomic layer deposition optionally has a thickness of at least 2 nanometers, at least 3 nanometers or even at least 5 nanometers. Optionally, the atomic layer deposition has a thickness of less than 15 nanometers or even less than 10 nanometers. Optionally, in the above embodiments, coating33comprises a non-metallic coating which does not include metals. This is because applicant found metal coatings to be hard to work with and of low predictability of results. In other embodiments, however, coating33is partially or entirely a metal coating, such as titanium. Applicant found that a metal coating of suitable thickness can achieve low desorption probabilities of the daughter radon radionuclides. The desired desorption rate is achieved, in still other embodiments, by a combination of a stronger bond (for example due to the heat treatment) and the properties of coating33. For example, coating33may have a thickness greater than used for a desorption rate of about 38-45%, such as greater than 4 microns, greater than 6 microns, greater than 10 microns, greater than 20 microns, or even greater than 40 microns, but still less than 100 microns or even less than 60 microns. The additional decrease in the desorption rate is optionally achieved by changing one or more properties of the heat treatment. The rate of release of radionuclide atoms26, e.g., by diffusion, is, in some embodiments, very low and even negligible. In other embodiments, a substantial rate of diffusion of radionuclide atoms26is used, for example using any of the methods described in PCT publication WO 2019/193464, titled: “Controlled Release of Radionuclides”, which is incorporated herein by reference. The diffusion is optionally achieved by using for coating33, a bio-absorbable coating which initially prevents premature escape of radionuclide atoms26but after implantation in a tumor disintegrates and allows the diffusion. 
The rate of release of radionuclide atoms26is optionally lower than the rate of release of daughter radionuclides due to desorption, and is preferably less than 50%, less than 30% or even less than 10% of the rate of release of daughter radionuclides due to desorption. Typically, the density of atoms26on outer surface24is between 10¹¹ and 10¹⁴ atoms per square centimeter. The activity of the source is optionally selected according to the desorption rate so that the desired radon release rate is achieved. In some embodiments, the seed has a concentration of radionuclides of at least 5 μCi per centimeter length, at least 7 μCi per centimeter length, at least 8 μCi per centimeter length, or even at least 10 μCi per centimeter length, at least 11 μCi per centimeter length, at least 12 μCi per centimeter length or even at least 14 μCi per centimeter length. Optionally, the concentration of radionuclides is not higher than 15 μCi per centimeter length and in some embodiments is less than 13 μCi per centimeter length. In other embodiments, however, the concentration of radionuclides is above 15 μCi per centimeter length. The beta radiation due to radiation device21carrying radium-224 results from decay of lead-212 into bismuth-212 and decay of bismuth-212 into polonium-212, or decay of bismuth-212 into thallium-208, which emits an electron when it decays to lead-208. Some of the beta radiation comes from daughter radionuclides still attached to the source, while another part of the beta radiation comes from daughter radionuclides in the tumor, after they or one of their ancestor radionuclides escaped device21. It is noted, however, that some of the lead-212 that reaches or is created in the tumor is cleared from the tumor through the blood stream before it has a chance to decay. Use of a relatively low desorption probability in accordance with embodiments of the present invention allows for increasing the beta radiation reaching the tumor cells in two ways. First, the low desorption probability allows for increasing the activity of radium on device21, in a manner which increases the beta radiation but does not increase the side effects of alpha radiation of lead-212 that leaves the tumor through the blood stream. Second, the low desorption probability reduces the amount of lead-212 that leaves the tumor through the blood stream and therefore does not provide beta radiation. While beta radiation has a larger range than alpha radiation, it still decreases quite sharply with distance from the source. As described in Lior Arazi, "Diffusing Alpha-Emitters Radiation Therapy: Theoretical and Experimental Dosimetry", Thesis submitted to the senate of Tel Aviv University, September 2008, the disclosure of which is incorporated herein by reference, for a radiation device21having a radium activity of 3 microcurie per centimeter, the beta radiation contributes an asymptotic dose of about 10 Gy at a distance of 2 millimeters from the source. Increasing the radium activity of device21to 9 microcurie per centimeter length would bring the beta contribution to about 30 Gy at a distance of 2 millimeters from the device21. For a hexagonal arrangement with a spacing of 4 millimeters, each point in the tumor would receive beta radiation from three sources, and thus would receive at least about 90 Gy. Beta radiation is less destructive than alpha radiation, by a factor considered to be between about 5 and 10, such that this 90 Gy is equivalent to about 9-18 Gy from alpha radiation.
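The dose arithmetic in the preceding paragraph can be followed step by step in a short sketch. This is illustrative only: it simply scales the cited 10 Gy at 2 mm for 3 microcurie per centimeter linearly with activity, multiplies by the three nearest sources of the 4 mm hexagonal layout, and applies the 5-10 relative-effectiveness factor mentioned above.

```python
def beta_dose_at_2mm_gy(activity_uci_per_cm: float,
                        reference_dose_gy: float = 10.0,
                        reference_activity_uci_per_cm: float = 3.0) -> float:
    """Asymptotic beta dose at 2 mm from a single source, assuming the dose scales
    linearly with radium activity (about 10 Gy at 2 mm for 3 uCi/cm, per the
    Arazi 2008 thesis cited above)."""
    return reference_dose_gy * activity_uci_per_cm / reference_activity_uci_per_cm


single_source = beta_dose_at_2mm_gy(9.0)        # ~30 Gy from one source at 9 uCi/cm
three_sources = 3 * single_source               # ~90 Gy from the three nearest sources
                                                # in a 4 mm hexagonal layout
alpha_equivalent = (three_sources / 10, three_sources / 5)  # ~9-18 Gy, using the 5-10x factor

print(single_source, three_sources, alpha_equivalent)  # 30.0 90.0 (9.0, 18.0)
```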
Therefore, beta radiation can provide emissions of a therapeutic level without increasing the radon release rate beyond its desired level. In some embodiments, the radiation device21is designed to provide at a distance of 2 millimeters from the device, in a tumor with negligible lead clearance through the blood stream, at least 18 Gy, at least 20 Gy, at least 24 Gy, at least 28 Gy or even at least 30 Gy. The alpha radiation provided by the radiation device21providing these beta radiation levels is optionally at least 10 Gy or even at least 20 Gy at a distance of 2 millimeters from the device. In some embodiments, the alpha radiation provided by the radiation device21is less than 100 Gy, less than 60 Gy or even less than 40 Gy. This alpha radiation is optionally provided by a radiation device21having a radon release rate of at least 0.5 microcurie per centimeter length, but lower than 4 microcurie per centimeter length, lower than 3 microcurie per centimeter length, lower than 2.5 microcurie per centimeter length or even lower than 2 microcurie per centimeter length. In some embodiments, the ratio between the asymptotic dose at a distance of 2 millimeters from the device, in a tumor with negligible lead clearance through the blood stream to the radon release rate of the device is greater than 15 Gy/(microcurie/cm), greater than 20 Gy/(microcurie/cm), greater than 25 Gy/(microcurie/cm), or even greater than 30 Gy/(microcurie/cm). In the above description, the beta radiation is provided by progeny of the alpha emitting radionuclides that provide the alpha radiation. Generally, at least 90%, at least 95% or even at least 99% of the beta radiation is due to the alpha emitting radionuclides. Alternatively or additionally to using beta radiation from the radionuclides which provide the alpha radiation to supplement the alpha radiation, the radiation doses discussed above are achieved by a device in which beta radiation is supplied by separate radionuclides which do not supply therapeutically effective alpha radiation. FIG.2is a schematic illustration of a combined alpha-radiation and beta-radiation source50, in accordance with an embodiment of the invention. Source50comprises a capsule54which encapsulates a radioactive material52of one or more radioisotopes, which emit beta and/or gamma radiation. Alpha-emitting radionuclide atoms26are attached to an outer surface of capsule54, in a manner which allows their daughter radionuclides to leave the source50with a desired desorption probability, upon radioactive decay. In some embodiments, radionuclide atoms26are covered by a coating33, as discussed above regardingFIG.1. As shown, source50does not include a coating30between the surface of capsule54and radionuclide atoms26. In some embodiments, however, a coating30is included between capsule54and radionuclide atoms26. Capsule54optionally comprises a sealed container which does not prevent exit of beta and/or gamma radiation therefrom. Capsule54optionally comprises a metal, such as gold, stainless steel, titanium and/or platinum. Alternatively, capsule54comprises a plastic, such as described in U.S. Pat. No. 7,922,646, titled “Plastic Brachytherapy sources”, which is incorporated herein by reference. Optionally, in accordance with this alternative, the plastic capsule is coated by a thin metal coating to which radionuclide atoms26are attached. Capsule54is of any suitable size and/or shape known in the art, such as described, for example in U.S. Pat. No. 
6,099,458, titled: "Encapsulated Low-Energy Brachytherapy Sources" and/or U.S. Pat. No. 10,166,403, titled: "Brachytherapy Source Assembly", the disclosures of which are incorporated herein by reference. Radioactive material52comprises one or more radioactive isotopes which emit beta radiation, such as iridium-192, californium-252, gold-198, indium-114, phosphorus-32, radium-226, ruthenium-106, samarium-145, strontium-90, yttrium-90, tantalum-182, thulium-107, tungsten-181 and/or ytterbium-169. Alternatively, radioactive material52comprises one or more radioactive isotopes which emit gamma radiation, such as iodine 125 (I-125), palladium 103 (Pd-103), cesium 131 (Cs-131), cesium 137 (Cs-137) and/or cobalt 60 (Co-60). Other suitable radioactive materials known in the art may also be used, as well as combinations of a plurality of beta emitters, combinations of a plurality of gamma emitters, combinations of beta emitters and gamma emitters, and/or one or more substances which emit both beta and gamma radiation. The activity of radioactive material52and the thickness of the walls of capsule54are selected to achieve a sufficient amount of radiation at a distance of about 3-4 mm from source50. Optionally, radioactive material52has an activity level of at least 0.5 mCi (millicurie), at least 5 mCi, at least 20 mCi, or even at least 50 mCi. In some embodiments, the activity of radioactive material52is substantially higher, above 100 mCi, above 200 mCi or even above 500 mCi. In some embodiments, radioactive material52fills capsule54. Alternatively, radioactive material52is placed as an inner coating on the walls of capsule54. FIG.3is a schematic illustration of a combined alpha-radiation and beta-radiation source80, in accordance with another embodiment of the invention. Source80comprises a base82which has beta-emitting radionuclides84attached thereto, directly or through one or more coatings. Alpha-emitting radionuclides86are placed above beta-emitting radionuclides84, either directly attached to the beta-emitting radionuclides84or placed on a coating which separates beta-emitting radionuclides84from alpha-emitting radionuclides86. FIG.4is a schematic illustration of a combined alpha-radiation and beta-radiation source90, in accordance with still another embodiment of the invention. In source90, beta-emitting radionuclides84and alpha-emitting radionuclides86are spread out on the surface of base82. In sources80and90, the beta-emitting radionuclides84are mounted on base82in a manner which substantially prevents their escape from the source. In contrast, alpha-emitting radionuclides86are mounted on base82in a manner which allows escape of daughter radionuclides from the source upon decay. In sources50,80and90, the daughter radionuclides optionally escape the source with a desorption probability of at least 30%, at least 35% or even at least 40%, and the activity of alpha-emitting radionuclides86is set accordingly to levels known in the art for such desorption probability levels, lower than those discussed above regarding radiotherapy device21. This is because, in the embodiments of sources50,80and90, the beta radiation is optionally supplied mainly by beta-emitting radionuclides84and the alpha-emitting radionuclides86are not relied upon to provide beta radiation. Alternatively, a desired level of beta radiation, for example at least 60 gray (Gy), at least 70 Gy or even at least 80 Gy, is supplied by a combination of beta radiation from beta-emitting radionuclides84and alpha-emitting radionuclides86.
In some embodiments, at least 10%, at least 20%, at least 30% or even at least 40% of the beta radiation emitted by sources50,80and90is emitted from alpha-emitting radionuclides86. Alternatively or additionally, at least 10%, at least 20%, at least 30% or even at least 40% of the beta radiation emitted by sources50,80and90is emitted from beta-emitting radionuclides84. CONCLUSION It will be appreciated that the above described methods and apparatus are to be interpreted as including apparatus for carrying out the methods and methods of using the apparatus. It should be understood that features and/or steps described with respect to one embodiment may sometimes be used with other embodiments and that not all embodiments of the invention have all of the features and/or steps shown in a particular figure or described with respect to one of the specific embodiments. Tasks are not necessarily performed in the exact order described. It is noted that some of the above described embodiments may include structure, acts or details of structures and acts that may not be essential to the invention and which are described as examples. Structure and acts described herein are replaceable by equivalents which perform the same function, even if the structure or acts are different, as known in the art. The embodiments described above are cited by way of example, and the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and subcombinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art. Therefore, the scope of the invention is limited only by the elements and limitations as used in the claims, wherein the terms “comprise,” “include,” “have” and their conjugates, shall mean, when used in the claims, “including but not necessarily limited to.”
26,242
11857804
DETAILED DESCRIPTION OF EMBODIMENTS In a general sense, embodiments disclosed herein relate to a mechanism by which a medical imaging schedule may be determined which is specific to a particular subject undergoing a particular type of treatment. An appropriate time at which to perform a first imaging scan after treatment has commenced may be determined. In some embodiments described herein, a schedule of further imaging scans may also be determined. Thus, the disclosure can be considered to relate to an imaging decision support (IDS) system. Such an IDS system may be used to determine (e.g. predict) an optimal time for image-based assessment of a response to treatment (e.g. immunotherapy). The IDS system may include a dynamic account of the immune system behavior in response to different types of therapy, and the immune system behavior is used to determine the best moment in time at which to capture an image. Some embodiments are described herein in the context of immunotherapy treatments. However, it will be appreciated that the methods and systems disclosed are applicable to other types of treatment. Referring to the drawings,FIG.1is a flowchart of an example of a method100for determining a medical imaging schedule for a subject receiving treatment at a target site. The term “subject” is intended to refer to a person or animal in respect of whom the method may be performed. For example, the subject may be a person suffering from cancer in one or more organs of their body. Thus, the “target site” is intended to refer to a site within the subject's body, such as a tumor or lesion, which is present as a result of the disease (e.g. the cancer). While the treatment may be applied to areas on or in the subject other than, or in addition to, the target site, it will be understood that the treatment is intended to ultimately improve the medical condition of the subject at the target site. For example, a treatment may be intended to kill cancer cells at a target site, thereby preventing the growth of a tumor at the target site. The method100comprises, at step102, obtaining first blood panel information from a first blood sample acquired from the subject prior to the treatment commencing. A blood panel is a blood test or group of tests performed on the blood of a subject from which various components of the blood may be measured. A blood panel involves the analysis of a blood sample which may, for example, be extracted from a vein of the subject. Thus, the blood panel information may be acquired from blood extracted from the subject at a location other than the target site. In this way, blood panel information can be considered to provide systemic information about the subject. In some examples, the blood panel information may include blood biomarkers (e.g. a measurable indicator of the presence of a particular state, such as a disease state or physiological state, of the subject). By acquiring the blood panel information from a blood sample obtained from the subject prior to the treatment commencing, it is possible to determine a baseline reference with which subsequent blood panels (i.e. blood panel information obtained from blood samples acquired after treatment has commenced) may be compared. The type of information extracted from the blood panel may vary on a case-by-case basis (e.g. based on the subject and the treatment being administered). In some examples, however, blood panel information may include data relating to relevant cells and proteins in the blood. 
For example, a leukocyte panel may provide data including measurements of myeloid-derived suppressor cells (MDSC), monocytes, macrophages, neutrophils, eosinophils, basophils, lymphocytes, and/or platelets. An inflammatory panel may provide data relating to inflammatory proteins such as CRP, IL2, IL6, IL8, TNF-alpha and/or a vascular endothelial growth factor (VEGF). A coagulation panel may provide data relating to hematocrit, PT, aPTT, fibrinogen, fibrin and/or thrombin. A blood panel may provide data including measurements of regulatory T-cells, measurements of eosinophils, data relating to an epidermal growth factor (EGF), measurements of interleukins, measurements of cytokines and data relating to electrolytes (such as urate, calcium, sodium, potassium and/or magnesium). In some examples, other information may be extracted from blood samples. In some examples, circulating tumor cells (CTC), circulating cell-free tumor DNA (ct-DNA), or circulating lysosomes may be measured and characterized. The blood panel information obtained at step102may, in some embodiments, be obtained from a series of blood samples obtained from the subject at different times prior to treatment commencing. The blood panel information may be stored in a database, having been acquired from the blood sample previously. The step102of obtaining the blood panel information may involve retrieving the information or data from a database. At step104, the method100comprises obtaining initial imaging data acquired in respect of the target site prior to the treatment commencing. As will be appreciated by those skilled in the relevant field, various medical imaging technologies may be used to image the target site within a subject. For example, imaging modalities such as computed tomography (CT), positron emission tomography (PET), single photon emission computed tomography (SPECT), magnetic resonance (MR), ultrasound (US), or a hybrid imaging modality involving a combination of imaging techniques are amongst those techniques suitable for acquiring image data from the target site. In the field of immunotherapy, various other imaging techniques may be implemented. For example, in-vivo imaging of PD-L1 expression (i.e. imaging the visible development of the programmed death-ligand 1 receptor) may involve a combination of positron emission tomography and computed tomography (PET/CT) or a combination of single photon emission computed tomography and computed tomography (SPECT/CT), using a therapeutic anti-PD-L1 antibody that is labelled with a radioisotope (i.e. a radioactive tracer). In other examples, a radiopharmaceutical known as fludeoxyglucose (FDG) may be used as a radioactive tracer in PET/CT imaging. Other radioactive tracers may alternatively be used, such as 18F-fluorothymidine, 18F-fluorocholine or other markers for tumor proliferation or metabolism. Other antibodies may be used in conjunction with a radioisotope, such as anti-CD8, anti-CD4, or anti-PD-1. In another example, an immunotherapy treatment may require the in-vivo imaging of an anti-CD8 antibody (CD8 stands for "Cluster of Differentiation 8", a transmembrane glycoprotein that serves as a co-receptor for the T-cell receptor). The anti-CD8 antibody may be coupled with a relatively short-lived radioactive tracer suitable for PET imaging. Another example may require imaging of an anti-PD-1 (i.e. programmed cell death protein 1) agent. An anti-PD-1 agent may be coupled with a medium-lived radioactive tracer suitable for PET imaging.
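Since, as the surrounding discussion notes, the visibility window of each tracer follows from the half-life of its radioisotope, a simple exponential-decay estimate can make the difference between short-lived and longer-lived tracers concrete. The sketch below is illustrative only; the half-lives are approximate literature values (roughly 110 minutes for fluorine-18, and zirconium-89 is offered merely as an example of a longer-lived PET isotope), not figures taken from the disclosure.

```python
def remaining_fraction(hours_since_injection: float, half_life_hours: float) -> float:
    """Fraction of the injected tracer activity remaining after a given time,
    using simple exponential decay."""
    return 0.5 ** (hours_since_injection / half_life_hours)


# Approximate literature half-lives (assumptions, not from the disclosure):
F18_HALF_LIFE_H = 110 / 60   # fluorine-18 (e.g. FDG), roughly 110 minutes
ZR89_HALF_LIFE_H = 78.4      # zirconium-89, an example of a longer-lived PET isotope

# A short-lived tracer is essentially gone within a day, while a longer-lived
# one still has useful activity days after injection.
print(remaining_fraction(24, F18_HALF_LIFE_H))   # ~0.0001
print(remaining_fraction(72, ZR89_HALF_LIFE_H))  # ~0.53
```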
Each radioactive tracer may have a different active life, depending on the half-life of the isotope used in the radioactive tracer. Thus, each radioactive tracer may be visible through imaging at a different duration after it has been injected into the subject. In some embodiments, metabolic states, hypoxia states and/or vascularization states may be derived from imaging data (e.g. from PET/CT imaging and/or from MR imaging) obtained prior to the treatment commencing. Information on PD-L1 tumor expression characterization and a prior CD8 tumor lymphocytes infiltration may be derived from anti-PD-L1 and anti-CD8 PET imaging. Different imaging modalities have different imaging sensitivities and, as such, some modalities may not be able to image subtle or small changes in the subject's tissue as a result of the treatment. Therefore, different imaging modalities may be appropriate for capturing images at different times after treatment has commenced. The initial imaging data obtained at step104may be acquired using any suitable imaging modality before the treatment has begun. Such initial imaging data may provide information regarding the target site (e.g. the location and size of a tumor or lesion), and may provide a baseline reference with which to compare imaging data acquired after the treatment has commenced. In some examples, the initial imaging data may be acquired, then stored in a database. The step104of obtaining the initial imaging data may involve retrieving the data from a database. The method100comprises, at step106, obtaining information regarding the treatment being received. The particular disease from which the subject is suffering may determine the nature of the treatment to be administered. Similarly, the treatment to be administered may determine the duration after the treatment has commenced when imaging the target site is appropriate (e.g. an approximate time when a radioactive tracer will be most visible). At step108, the method100comprises determining, based on at least the first blood panel information, the initial imaging data and the treatment information, a time at which to capture first imaging data in respect of the target site in order to assess a response to the treatment. The “first imaging data” to be captured is the first imaging data following the treatment commencing. In other words, while the “initial imaging data” refers to imaging data acquired prior to the treatment being administered, the “first imaging data” refers to imaging data to be acquired after the initial administration of the treatment. As noted above, different imaging modalities are suitable for imaging the subject at different times, for example depending on the response kinetics. Therefore, the determination of the image capturing time, made at step108, may take into account the imaging modality to be used to capture the first imaging data. Thus, in some embodiments, the determination made at step108may be based at least in part on the nature of an imaging modality to be used to capture the first imaging data. This may, for example, be incorporated into the initial imaging data used in the determining step108, particularly if the first post-treatment imaging uses the same imaging modality as the initial (pre-treatment) imaging. Any suitable imaging modality may be used. 
In some embodiments, however, the imaging modality to be used to capture the first imaging data may be selected from a group comprising: computed tomography (CT), positron emission tomography (PET), single photon emission computed tomography (SPECT), magnetic resonance (MR), ultrasound (US), and a hybrid imaging modality. The determination made at step108is based at least on the information and data obtained in steps102,104and106. In some embodiments, as discussed in greater detail below, other information and data may also be used in the determining step108. The first blood panel information may be used in the determining step108as an indication of the presence of various elements at the target site. For example, the detection of a particular component of blood from the blood panel may be extrapolated to determine an amount of that component at the target site. As noted above, the initial imaging data may be used in the determining step108as an indication of the location and/or size of a lesion or tumor, for example. The initial imaging data may also provide an indication of the extent to which cancer cells have spread beyond a particular location (e.g. the target site) prior to the treatment being administered. As noted above, the information regarding the treatment being received provides treatment-specific details, such as the optimum time to perform a first imaging scan for the particular treatment being administered. The determining108may provide an output in the form of a time or duration after the initial administration of the treatment (e.g. n minutes, n hours, n days, n weeks, and so on) at which first imaging data should be captured or acquired to view the effects of the treatment in an optimal manner. In some examples, an output may be provided in the form of a range of times within which the first imaging data should be captured or acquired. In its simplest sense, the step of determining108may be achieved using databases and/or look up tables. For example, a particular combination of information acquired from the first blood panel information, initial imaging data and treatment information may correspond to a particular duration after commencing treatment, or a particular range of times following the commencement of treatment, at which the first imaging data should be captured. In other examples, as discussed in greater detail below, the information and data obtained in steps102,104and106may be provided as inputs to a model for determining the time at which to capture the first imaging data. FIG.2is a flowchart of a further example of a method200for determining a medical imaging schedule for a subject receiving treatment at a target site. The method200may include steps of the method100discussed above. For example, the method200comprises the steps of obtaining first blood panel information (step102), obtaining initial imaging data (step104) and obtaining information regarding the treatment being received (step106). In some embodiments, the method200may further comprise, at step202, obtaining clinical information relating to the target site from a biopsy of the target site acquired prior to the treatment commencing. In embodiments in which clinical information is obtained, the method200may comprise, at step204, determining, based on at least the first blood panel information, the initial imaging data, the treatment information and the clinical information, a time at which to capture first imaging data in respect of the target site in order to assess a response to the treatment. 
In other words, determining the time at which to capture the first imaging data (step108) is further based on the clinical information. A biopsy of the target site may provide useful information regarding the target site (e.g. the tumor or lesion) which may be used to determine a time at which to capture the first imaging data. For example, tumor characteristics may be derived from a biopsy performed on a tumor at the target site. In some examples, immunohistochemical (IHC) data may be acquired from a biopsy using techniques that will be familiar to those skilled in the relevant field. Information on PD-L1 tumor expression characterization and prior CD8 tumor lymphocyte infiltration may be obtained from biopsy IHC information. As noted above, the step of determining the time at which to capture the first imaging data may be performed using a model, with the information and data obtained at steps102,104and106provided as inputs into the model. In embodiments in which clinical information is obtained (e.g. at step202), the clinical information may also be provided as an input into a model. Thus, determining the time at which to capture the first imaging data comprises inputting the first blood panel information, the initial imaging data, the treatment information and the clinical information into a model describing an expected response to the treatment. In other words, the method200may, in some embodiments, comprise, at step206, determining the time at which to capture first imaging data by inputting the first blood panel information, the initial imaging data, the treatment information and the clinical information into a model describing an expected response to the treatment. The model may be referred to as a time-dependent immune response (TDIR) model. The model (e.g. a TDIR model) may be based on initial target site characteristics (e.g. tumor characteristics), an initial environment of the target site (e.g. an initial tumor environment) and initial diagnostic blood characterization (e.g. blood panel information from a blood sample acquired from the subject prior to the treatment commencing). When run or executed with the inputs mentioned above, the model provides as its output the time (e.g. an optimum time) at which to capture the first imaging data (i.e. the first imaging data after treatment has commenced). As noted above, response kinetics (i.e. physical changes resulting from the treatment) may be taken into account along with the sensitivity of the imaging modality to be used in order to determine a suitable (e.g. an optimal) time at which to capture the first imaging data. In examples where the treatment site is, or includes, a tumor, the model may include or incorporate a tumor growth model which predicts the growth of the tumor over time. The tumor growth model may be dynamic; that is to say, the tumor growth model may vary or be adapted over time as the rate of growth of the tumor changes, for example as a result of the treatment or of changes in the subject. The method200may further comprise, at step208, obtaining second blood panel information from a second blood sample acquired from the subject after the treatment has commenced. For example, after commencing the treatment, one or more further blood samples may be acquired from the subject (e.g. by performing one or more blood tests) and blood panel information may be obtained from each of the further blood samples. In some examples, further blood samples (e.g. 
the second and subsequent blood samples) may be acquired at regular intervals following the treatment commencing. In other examples, the further blood samples may be acquired at some other time-dependent frequency which may, for example, be defined by a set of rules. The rules defining when further blood samples should be acquired from the subject following the start of the treatment may be defined based, for example, on the type of treatment being administered. For example, the rules may specify that further blood samples are to be acquired after a duration t following the start of the treatment as this is when the beginning of a response is to be expected. The rules may be based on the treatment scheme being administered, patient characteristics, and one or more other factors. In some examples, the rules may be defined prior to treatment. However, the rules may be modified as treatment progresses, for example based on feedback from blood panel information or images/scans acquired after treatment has commenced. At step210, the method200may further comprise updating the determined time calculated using the model, based on the second blood panel information. Thus, parameters provided to the model used at step206to determine the time at which to capture the first imaging data may be adjusted or updated based on the second blood panel information acquired from the second blood sample. The second blood panel information acquired from the second blood sample may, for example, indicate that the subject has responded unexpectedly (e.g. in a negative way) to the treatment and, therefore, it may be desirable to adjust the model so that the first imaging data is to be captured sooner. In this way, any negative effects may be identified quickly, so that any necessary remedial action or changes in the treatment regime may be taken. In embodiments in which a model is not used to determine the timing of the first imaging data acquisition, the method200may proceed from step204to step212, which comprises obtaining second blood panel information from a second blood sample acquired from the subject after the treatment has commenced (i.e. the same as step208). Following step212, however, the method200may comprise, at step214, updating the determined time at which to capture first imaging data based on the obtained second blood panel information. Thus, even though the timing may not be determined using a model, the determined time may be updated based on the second blood panel information obtained from the second blood sample. It is noted that, in some embodiments, the determined time may be updated (step214) without the first-determined imaging schedule (i.e. the imaging time determined at step108) having been implemented. In other words, a second blood sample acquired before the determined imaging schedule is implemented may reveal information that prompts the determination to be made again to determine a new time at which to capture the first imaging data. As noted briefly above, blood panel information may comprise information relating to one or more of blood biomarkers, cytokines, leukocyte panel information, CTCs, and ct-DNA. In an example in which a model is used to determine timings for acquiring images after the treatment has begun, such a model may be used to determine an appropriate time (e.g. an optimal time) to acquire PD1/CD8 immuno-PET images in order to determine whether CD8 T-cells are able to infiltrate a tumor at a target site upon PD-L1 inhibition. 
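As a rough, non-limiting illustration of the updating described at step214, the sketch below advances a scheduled scan when a follow-up blood panel departs markedly from the baseline panel. The marker names, the relative-change threshold, and the seven-day adjustment are hypothetical choices made only for this example and are not prescribed by the method.

```python
from datetime import datetime, timedelta
from typing import Dict

def update_imaging_time(scheduled: datetime,
                        panel_1: Dict[str, float],
                        panel_2: Dict[str, float],
                        relative_change_threshold: float = 0.5,
                        advance: timedelta = timedelta(days=7)) -> datetime:
    """Bring the first post-treatment scan forward if any shared blood marker
    changes by more than the assumed relative threshold between the two panels."""
    for marker, baseline in panel_1.items():
        follow_up = panel_2.get(marker)
        if follow_up is None or baseline == 0:
            continue
        if abs(follow_up - baseline) / abs(baseline) > relative_change_threshold:
            # Unexpected response: image sooner, but not earlier than "now".
            return max(datetime.now(), scheduled - advance)
    return scheduled

# Hypothetical marker values for illustration only.
first_panel = {"CRP": 4.0, "IL6": 6.0}
second_panel = {"CRP": 11.0, "IL6": 6.5}   # CRP has risen sharply
new_time = update_imaging_time(datetime(2024, 6, 1), first_panel, second_panel)
```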
In addition to determining an appropriate time for the first post-treatment image acquisition, an imaging schedule may be determined, for example using the model. Thus, the method may determine or suggest times at which to acquire further images in order to monitor the effectiveness of the treatment and/or the progression of the disease. For example, a model may provide a suggestion to acquire an image to monitor PD1 expression, using a suitable radioactive tracer. Similarly, the model may provide an output suggesting that an anti-PD-L1 PET image should be acquired to confirm that a therapeutic anti-PD-L1 agent has adequately been able to block PD-L1 over-expression at the target site (i.e. on the tumor). In addition to determining a time at which to capture first imaging data (e.g. a time at which to perform a first imaging scan) following the beginning of the administration of the treatment, the method according to some embodiments may determine a time or times at which to capture additional imaging data. For example, a schedule of imaging events may be determined. Referring again toFIG.2, the method200may comprise, at step216, determining, based at least on the obtained blood panel information, the obtained initial imaging data and the obtained treatment information, a time at which to capture second imaging data in respect of the target site in order to assess the response to the treatment. Thus, in some embodiments, the method may also determine times at which to capture third, fourth, fifth (and so on) imaging data. An imaging schedule determined at step216may be revised or updated based, for example, on blood panel information obtained subsequent to the first blood panel information. For example, if the second blood panel information reveals that an unexpected treatment response has occurred, then the schedule (e.g. the determined time at which to capture second imaging data) may be revised by the method200. In some embodiments, an imaging schedule which includes appropriate (e.g. optimum) times for acquiring third and/or fourth (or subsequent) imaging data may be revised or updated based on information acquired in the second imaging data. In general, each newly-acquired imaging data may affect the imaging schedule going forward. For example, a set of acquired imaging data may show a particular response which would warrant bringing forward the next scheduled scan. As noted above, while embodiments are described in the context of medical treatments in general, according to some embodiments, the treatment may comprise an immunotherapy treatment, and the target site may comprise a tumor. According to some embodiments, as noted above, immunohistochemical (IHC) information may be used to derive details of tumor expression and/or tumor infiltration by particular entities associated with the treatment. Thus, the method200may, in some embodiments, further comprise obtaining immunohistochemical information relating to the target site. The target site may, for example, comprise a tumor. However, for some target sites (e.g. tumors or lesions), IHC information may not be available, or it may not be possible to obtain such IHC information from a biopsy of the target site. In such cases, equivalent or similar information may be obtainable from IHC information from an alternative site and from imaging data acquired in relation to the alternative site and the target site. 
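Returning to the imaging schedule discussed above, one possible way to represent its revision in response to newly acquired imaging data is sketched below. The tumor-volume-change threshold and the ten-day shift are assumptions for illustration only; the embodiments do not prescribe particular values or this particular rule.

```python
from datetime import datetime, timedelta
from typing import List

def revise_schedule(schedule: List[datetime],
                    observed_volume_change: float,
                    unexpected_change: float = 0.25,
                    shift: timedelta = timedelta(days=10)) -> List[datetime]:
    """Bring all future scans forward when the most recent imaging data shows an
    unexpectedly large relative change in tumor volume (threshold and shift assumed)."""
    now = datetime.now()
    if abs(observed_volume_change) <= unexpected_change:
        return schedule  # response is within expectations; keep the schedule as determined
    # Shift only future scans, and never earlier than the present moment.
    return [t if t <= now else max(now, t - shift) for t in schedule]

# Hypothetical schedule of second, third, and fourth scans.
plan = [datetime(2024, 7, 1), datetime(2024, 8, 1), datetime(2024, 9, 1)]
revised = revise_schedule(plan, observed_volume_change=0.4)
```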
In some embodiments, the determined time at which to capture first imaging data may correspond approximately to the time at which an initial response to the treatment may be observable. In other words, the time determined at step108may correspond to the earliest time at which any evidence of a treatment response might be expected in view of the sensitivity of the imaging modality in question. Capturing an image before this time is unlikely to provide much, if any, benefit as no response to the treatment is likely to be visible in an image. However, capturing an image, or multiple images, around or soon after the time at which an initial response to the treatment becomes visible may be particularly useful, as any temporary peak in treatment response (e.g. corresponding to a temporary reduction in growth of a tumor) followed by a reduction in treatment response may be detectable if appropriate imaging is performed around this time. In some embodiments, any of the obtained information or data (e.g. the information and data obtained at steps102,104,106,202,208and212) and any of the determined times (e.g. the times determined at steps108and216) may be delivered for presentation to a user (e.g. a medical professional) or presented to the user. For example, data may be presented on a display associated with a workstation, a computer terminal, or some other computing device or mobile device. An example of a method according to one embodiment will now be discussed with reference toFIG.3. The method discussed with reference toFIG.3may be performed with respect to a subject undergoing treatment (e.g. immunotherapy treatment) at a target site (e.g. a cancerous tumor).FIG.3is a block diagram showing the information pathways between various elements. In this example, blood panel information302, clinical information304from a biopsy and initial imaging data306are provided as inputs to a model. The blood panel information302may, for example, comprise the first blood panel information obtained at step102of the method100, which may include blood indicators or biomarkers. The clinical information304may comprise information from a biopsy of the target site acquired prior to treatment commencing, as obtained at step202of the method200. The initial imaging data306may comprise the data obtained at step104of the method100. The model308may comprise a time-dependent immune response (TDIR) model as discussed above. The model308may further be provided with information regarding the treatment being provided to the subject. Based on the various inputs, the model308outputs an imaging schedule310. The imaging schedule310may include one or more times at which to capture imaging data in respect of the target site in order to assess the response to the treatment. Following the determination of an initial imaging schedule, additional blood panel information312may be acquired, for example through additional blood samples acquired from the subject. The additional blood panel information312may be provided as an input to the model308. Based on the additional blood panel information312, an updated imaging schedule314is generated. One or more further blood samples may be acquired, as needed, and the model308may be used to generate further updated imaging schedules based on blood panel information obtained from the further blood samples. The additional blood panel information312may comprise information obtained at step212of the method200. Similarly, the updated imaging schedule314may comprise a schedule updated at step214of the method200. 
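The data flow ofFIG.3can be sketched in code as follows. This is only one possible shape for such a pipeline: the class name, the nominal 28-day first scan, and the drift-based adjustment rule are placeholders standing in for the TDIR model's internals, which are not specified here.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TDIRModel:
    """Placeholder time-dependent immune response model (internals are assumptions)."""
    blood_panel: Dict[str, float]      # 302: pre-treatment blood panel information
    clinical_info: Dict[str, float]    # 304: biopsy-derived clinical information
    initial_imaging: Dict[str, float]  # 306: pre-treatment imaging data
    treatment: str                     # information regarding the treatment
    extra_panels: List[Dict[str, float]] = field(default_factory=list)  # 312

    def imaging_schedule(self) -> List[float]:
        """Return days after treatment start at which to image (310/314).
        Stand-in rule: start from a nominal 28-day first scan and pull it
        earlier as follow-up panels drift further from the baseline panel."""
        first_scan = 28.0
        for panel in self.extra_panels:
            drift = sum(abs(panel.get(k, v) - v) / v
                        for k, v in self.blood_panel.items() if v)
            first_scan = max(7.0, first_scan - 5.0 * drift)
        return [first_scan, first_scan + 28.0, first_scan + 56.0]

model = TDIRModel(blood_panel={"CRP": 4.0}, clinical_info={"PD-L1": 0.3},
                  initial_imaging={"tumor_volume_cm3": 12.0}, treatment="anti-PD-L1")
schedule_310 = model.imaging_schedule()
model.extra_panels.append({"CRP": 9.0})     # additional blood panel information (312)
schedule_314 = model.imaging_schedule()     # updated imaging schedule (314)
```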
In some embodiments, the additional blood panel information312may indicate that the model308should be run again with new, revised input parameters. For example, a significant difference between the (initial) blood panel information302and the additional blood panel information312may be evident. In such a case, parameters input to the model308may be adjusted or revised so that a more appropriate output from the model may be achieved. Blood panel information obtained from the first blood sample and from subsequent blood samples (e.g. the additional blood panel information312) may be used to provide systemic information relating to the subject. For example, the blood panel information may provide details of the overall immune system of the subject. By acquiring additional blood samples from the subject after treatment has begun, blood panel information from those additional blood samples may be used to determine whether the overall immune system is responding to the treatment. An example of an indicator of the overall system response is the balance of CD8 and CD4 cells. For example, if, from additional blood panel information312, it is apparent that the CD8/CD4 balance has become unfavorable (e.g. by an increase in the number of CD4 cells in relation to the number of CD8 cells), then the model308may output a revised imaging schedule which recommends that an imaging scan should take place earlier than initially scheduled. In embodiments in which a model is used to determine the time at which to capture first imaging data, the model may include various components, each of which focuses on a different aspect relevant to the time at which the imaging data is to be captured. As discussed above, various inputs may be provided to the model. A first input may involve tumor spatial characterization which may, for example, be obtained or extracted from the initial imaging data. The spatial characterization of the tumor may, for example, be obtained from CT scans, FDG PET scans, or MR scans. A second input may involve tumor cellular characterization which may, for example, be obtained or extracted from the clinical information from a biopsy of the target site. The cellular characterization of the tumor may, for example, be obtained from IHC measurements or immune-PET scans. A third input may involve blood panels which may, for example, be obtained from blood samples acquired from the subject prior to the treatment commencing. As discussed above, blood panel information may include data relating to leukocytes, inflammatory proteins, coagulation information and/or electrolytes. A fourth input may involve information relating to the type of treatment or therapy to be administered to a subject. Within the model, a first component may consider tumor proliferation; a second component may consider tumor killing by effector cells; a third component may consider recruitment of effector cells; and a fourth component may consider expansion of effector cells. One or more of the model components may be combined in order to generate an output of the model. A first output of the model may comprise a measurement of the volume of the tumor, or an estimated or predicted tumor volume, based on the inputs provided. A second output of the model may comprise the time at which the first imaging data (after treatment has commenced) should be acquired or captured. The time at which to capture the first imaging data may also depend on the imaging modality to be used, and this may be referred to as an image response function. 
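A toy version of the four model components listed above may help make the idea concrete. All rate constants, the simple Euler integration, and the 15% minimum detectable change are illustrative assumptions rather than part of the described model; the detectable-change threshold stands in loosely for the image response function just mentioned.

```python
def simulate_tumor_response(v0, e0, days=120, steps_per_day=10,
                            growth=0.05, kill=0.02, recruit=0.1,
                            expand=0.005, decay=0.05):
    """Toy dynamics for the four components named above (all rates assumed):
    tumor proliferation, killing by effector cells, effector recruitment,
    and effector expansion. Returns the predicted tumor volume for each day."""
    dt = 1.0 / steps_per_day
    v, e, volumes = v0, e0, [v0]
    for _day in range(days):
        for _ in range(steps_per_day):
            dv = (growth * v - kill * e * v) * dt             # proliferation minus killing
            de = (recruit + expand * e * v - decay * e) * dt  # recruitment plus expansion
            v, e = max(v + dv, 0.0), max(e + de, 0.0)
        volumes.append(v)
    return volumes

def first_detectable_change_day(volumes, detectable_fraction=0.15):
    """Earliest day at which the relative change in volume reaches an assumed
    minimum detectable change for the chosen imaging modality."""
    for day, v in enumerate(volumes):
        if abs(v - volumes[0]) / volumes[0] >= detectable_fraction:
            return day
    return None

volumes = simulate_tumor_response(v0=10.0, e0=1.0)
suggested_first_scan_day = first_detectable_change_day(volumes)
```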
The model may take the imaging modality into consideration when generating its outputs. The model may, for example, consider noise, blurring and/or spatial resolution in the imaging modality to be used. According to a further aspect, embodiments relate to a system for performing the methods disclosed herein.FIG.4is a simplified schematic of a system400for determining a medical imaging schedule for a subject receiving treatment at a target site. The system400comprises a memory402comprising instruction data representing a set of instructions. The system400also comprises a processor404configured to communicate with the memory and to execute the set of instructions. The set of instructions, when executed by the processor404, cause the processor to obtain first blood panel information acquired from a first blood sample taken from the subject prior to the treatment commencing. The memory may, therefore, comprise first blood panel information obtaining instructions406. The set of instructions, when executed by the processor404, further cause the processor to obtain initial imaging data acquired in respect of the target site prior to the treatment commencing. The memory may, therefore, comprise initial imaging data obtaining instructions408. The set of instructions, when executed by the processor404, further cause the processor to obtain information regarding the treatment being received. The memory may, therefore, comprise treatment information obtaining instructions410. The set of instructions, when executed by the processor404, cause the processor to determine, based on at least the first blood panel information, the initial imaging data and the treatment information, a time at which to capture first imaging data in respect of the target site in order to assess a response to the treatment. The memory may, therefore, comprise time determination instructions412. According to some embodiments, the set of instructions, when executed by the processor404, may cause the processor to obtain second blood panel information from a second blood sample acquired from the subject after the treatment has commenced. The set of instructions, when executed by the processor404, may further cause the processor to update the determined time at which to capture first imaging data based on the obtained second blood panel information. According to some embodiments, the set of instructions, when executed by the processor404, may cause the processor to obtain clinical information relating to the target site from a biopsy of the target site acquired prior to the treatment commencing. The set of instructions, when executed by the processor404, may further cause the processor to input the first blood panel information, the initial imaging data and the clinical information into a model describing an expected response to the treatment. According to some embodiments, the set of instructions, when executed by the processor404, may cause the processor to determine, based on at least the obtained blood panel information and the obtained initial imaging data, a schedule for capturing further imaging data in respect of the target site in order to assess the response to the treatment. The system400may, for example, comprise a workstation, a computer terminal or some other computing device or mobile device having suitable processing functionality. According to a further aspect, embodiments relate to a computer program product.FIG.5is a simplified schematic of a computer-readable medium and a processor. 
According to some embodiments, a computer program product comprises a non-transitory computer readable medium502, the computer readable medium having computer readable code embodied therein, the computer readable code being configured such that, on execution by a suitable computer or processor504, the computer or processor is caused to perform the methods disclosed herein. In the context of this non-transitory computer readable medium and for the execution of the computer readable code, when the computer or processor is caused to perform a step of obtaining information or data, this means the respective information or data is retrieved from a data storage. The processor404,504can comprise one or more processors, processing units, multi-core processors or modules that are configured or programmed to control the system400in the manner described herein. In particular implementations, the processor404,504can comprise a plurality of software and/or hardware modules that are each configured to perform, or are for performing, individual or multiple steps of the method described herein. The term “module”, as used herein, is intended to include a hardware component, such as a processor or a component of a processor configured to perform a particular function, or a software component, such as a set of instruction data that has a particular function when executed by a processor. It will be appreciated that the embodiments of the invention also apply to computer programs, particularly computer programs on or in a carrier, adapted to put the invention into practice. The program may be in the form of a source code, an object code, a code intermediate source and an object code such as in a partially compiled form, or in any other form suitable for use in the implementation of the method according to embodiments of the invention. It will also be appreciated that such a program may have many different architectural designs. For example, a program code implementing the functionality of the method or system according to the invention may be sub-divided into one or more sub-routines. Many different ways of distributing the functionality among these sub-routines will be apparent to the skilled person. The sub-routines may be stored together in one executable file to form a self-contained program. Such an executable file may comprise computer-executable instructions, for example, processor instructions and/or interpreter instructions (e.g. Java interpreter instructions). Alternatively, one or more or all of the sub-routines may be stored in at least one external library file and linked with a main program either statically or dynamically, e.g. at run-time. The main program contains at least one call to at least one of the sub-routines. The sub-routines may also comprise function calls to each other. An embodiment relating to a computer program product comprises computer-executable instructions corresponding to each processing stage of at least one of the methods set forth herein. These instructions may be sub-divided into sub-routines and/or stored in one or more files that may be linked statically or dynamically. Another embodiment relating to a computer program product comprises computer-executable instructions corresponding to each means of at least one of the systems and/or products set forth herein. These instructions may be sub-divided into sub-routines and/or stored in one or more files that may be linked statically or dynamically. 
The carrier of a computer program may be any entity or device capable of carrying the program. For example, the carrier may include a data storage, such as a ROM, for example, a CD ROM or a semiconductor ROM, or a magnetic recording medium, for example, a hard disk. Furthermore, the carrier may be a transmissible carrier such as an electric or optical signal, which may be conveyed via electric or optical cable or by radio or other means. When the program is embodied in such a signal, the carrier may be constituted by such a cable or other device or means. Alternatively, the carrier may be an integrated circuit in which the program is embedded, the integrated circuit being adapted to perform, or used in the performance of, the relevant method. Variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.
40,250
11857805
DETAILED DESCRIPTION Reference will now be made in detail to several embodiments. While the subject matter will be described in conjunction with the alternative embodiments, it will be understood that they are not intended to limit the claimed subject matter to these embodiments. On the contrary, the claimed subject matter is intended to cover alternatives, modifications, and equivalents, which may be included within the spirit and scope of the claimed subject matter as defined by the appended claims. Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the claimed subject matter. However, it will be recognized by one skilled in the art that embodiments may be practiced without these specific details or with equivalents thereof. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects and features of the subject matter. Portions of the detailed description that follow are presented and discussed in terms of a method. Although steps and sequencing thereof are disclosed in a figure herein (e.g.,FIGS.9and10) describing the operations of this method, such steps and sequencing are exemplary. Embodiments are well-suited to performing various other steps or variations of the steps recited in the flowchart of the figure herein, and in a sequence other than that depicted and described herein. Some portions of the detailed description are presented in terms of procedures, steps, logic blocks, processing, and other symbolic representations of operations on data bits that can be performed on computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. A procedure, computer-executed step, logic block, process, etc., is here, and generally, conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout, discussions utilizing terms such as “accessing,” “displaying,” “writing,” “including,” “storing,” “rendering,” “transmitting,” “instructing,” “associating,” “identifying,” “capturing,” “controlling,” “encoding,” “decoding,” “monitoring,” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. 
Increased Beam Output and Dynamic Field Shaping Using a 2D Periodic Electron Beam Path Embodiments of the present invention describe systems and methods for providing radiotherapy treatment using an electron emission device that produces an electron beam focused on a target (e.g., a tungsten plate) to generate a high-yield x-ray output with improved field shaping. The high-yield x-ray output and improved field shaping minimize the radiation received by healthy tissue, increase the dosage rate/throughput of the treatment, and increase the useful lifetime of the tungsten target. Embodiments according to the present invention use a modified electron beam spatial distribution, such as a 2D periodic beam distribution, to lower the x-ray target temperature compared to a typical compact beam spatial distribution. The temperature of the target is reduced due to the 2D periodic path of the electron beam versus a compact beam profile, e.g., the heat generated from the electron beam is spread out within the target in accordance with the beam path. As a result, the electron beam output can be increased without sacrificing x-ray target life span. The use of a 2D periodic electron beam distribution allows a much colder target functioning regime such that more dosage can be applied in a shorter period of time compared to existing techniques. Further, the useful life of the tungsten target is increased. According to some embodiments of the present invention, the electron beam is scanned in one or more 2D periodic paths defined by one or more predetermined elementary shapes, such as Lissajous paths or spherical harmonic based shapes (e.g., s-wave, p-wave, d-wave, and so on), in order to increase the output and shape the electron beam profile. The 2D periodic path can be rapidly dynamically altered. The elementary shapes can constitute a new basis set, as compared to the Cartesian-style basis set used for multileaf collimators (MLCs). By dynamically shaping the electron field at the target, it is possible to generate beam fluence appropriate for a tumor much faster than what an MLC can do. The MLC can still be used for leakage blocking at the edge of a field instead of primary beam shaping. In some embodiments, the electron beam configuration is changed using external magnetic fields generated by specially designed coils. In other embodiments, hollow cathodes that generate 2D periodic beams are used, and the linear accelerator is designed such that the 2D periodic distribution is preserved along the accelerator. In yet other embodiments, existing steering coils are used to perform a scanning circular motion of the beam with a frequency higher than 200 kHz to ensure that one pulse gets smeared on the target surface in one revolution. With regard toFIG.1, an exemplary radiotherapy system100for generating a 2D periodic electron beam directed to the target is depicted according to embodiments of the present invention. An electron emission device105(e.g., an electron gun assembly205) generates an electron beam, and a waveguide110transports the electron beam to a focusing coil115to focus the electron beam using a magnetic field. According to some embodiments, the electron emission device105generates an electron beam at approximately 30 kV, for example. The electron beam may be accelerated by a linear accelerator106to approximately 200-300 MeV in accordance with well-known techniques and equipment. 
A 2D periodic distribution of x-rays is achieved, in one embodiment, using a pair of magnetic steering coils120to deflect the electron beam in accordance with a predetermined path on the x-ray target surface125. The x-ray target surface125may be a high-yield target surface in the form of a tungsten plate or wedge, for example. As described in more detail below, the pair of magnetic steering coils120can be dynamically controlled to deflect the electron beam along a 2D periodic path on the x-ray target surface125. The use of a 2D periodic electron beam distribution allows a much colder target functioning regime by dynamically moving the electron beam over a wider surface area versus a concentrated electron beam distribution. Because of this, the target output field130can be increased substantially without sacrificing the life span of the x-ray target surface125. Dynamic electron beam scanning may be used to achieve a 2D periodic electron beam spatial distribution, and can also be used for dynamic field shaping by changing the scanning path using generalized curves. The pair of magnetic steering coils120may include one or more pairs of magnetic steering coils that dynamically produce magnetic fields in perpendicular directions for steering the electron beam on the x-ray target surface125. The magnetic field produced by the pair of magnetic steering coils120may be controlled by the computer system135(e.g., the computer system1100depicted inFIG.11), for example, by adjusting a voltage and/or current across the pair of magnetic steering coils120. The 2D periodic electron beam distribution may be generated by varying a voltage or current applied to the pair of magnetic steering coils120, in combination, to produce predetermined elementary shapes, e.g., Lissajous paths or spherical harmonic based shapes (e.g., s-wave, p-wave, d-wave, and so on), or a linear combination thereof, in order to increase the output and shape the electron beam profile. The scanned 2D periodic electron beam path on the x-ray target surface125causes an x-ray output field or distribution130to be generated. Advantageously, this distribution130can be dynamically altered by corresponding dynamic adjustments of the pair of magnetic steering coils120. According to some alternative embodiments, the x-ray target surface125is not used and the radiotherapy system100is used to perform electron therapy. In the example ofFIG.2, an exemplary radiotherapy system200for generating a 2D periodic electron beam to produce x-rays shaped using a beam shaping device (e.g., MLC220) is depicted according to embodiments of the present invention. An electron gun assembly205generates an electron beam and a 2D periodic distribution of x-rays is achieved using a pair of magnetic steering coils210that generate opposed B-fields to deflect the electron beam on a 2D periodic path on the x-ray target surface215. The use of a 2D periodic electron beam distribution allows a much colder target functioning regime such that more dosage can be applied in a shorter period of time compared to existing techniques. The MLC220may be used to further shape the x-ray distribution output from the x-ray target surface215. In this fashion, the MLC220may be used for leakage blocking at the edge of the output field (instead of primary beam shaping). In this embodiment, the shaped field output225is shaped by the combination of the pair of magnetic steering coils210and the MLC220, and is delivered to the target region of patient230, for example, according to a treatment plan. 
In this embodiment, the dose application to the patient230can be altered by dynamically altering the signals to the pair of magnetic steering coils210as well as by reconfiguring the MLC220. In effect, the MLC220can provide coarse shaping, and the pair of magnetic steering coils210can provide fine shaping, etc., or vice-versa. In the embodiment ofFIG.3, an exemplary radiotherapy system300for generating a shaped x-ray distribution using: 1) a 2D periodic electron beam path on the x-ray target surface315and 2) an MLC320in combination with blocks or wedges335(e.g., lead blocks or Cerrobend blocks), is depicted according to embodiments of the present invention. An electron gun assembly305generates an electron beam and a 2D periodic distribution of x-rays is achieved using a pair of magnetic steering coils310to move the electron beam on a circular path on the x-ray target surface315. The wedges335may be used to perform field shaping in addition to the MLC320. The resultant shaped beam output325shaped by the pair of magnetic steering coils310, the wedges335, and the MLC320is delivered to the target region of patient330, for example, according to a treatment plan. With regard toFIG.4, an exemplary patient imaging session400for generating a patient treatment plan (e.g., a radiotherapy treatment plan) using a 2D periodic beam path is depicted according to embodiments of the present invention. The patient405is positioned at a center, and radiation is emitted during a computerized tomography (CT) scan that combines a series of x-ray exposures410performed at different angles (e.g., Θ1-Θ8) around the patient405. A computer system135controls the radiotherapy system (e.g., ofFIGS.1-3) to irradiate the patient at the different positions. FIG.5depicts an exemplary 2D periodic electron beam path510generated using a pair of magnetic steering coils as described herein according to embodiments of the present invention. The electron beam path510is scanned on a target515that generates an x-ray field for providing radiotherapy treatment. In this example, the 2D periodic beam path is roughly circular or annular. FIG.6depicts an exemplary elliptical electron beam path610generated using a pair of magnetic steering coils as described herein according to embodiments of the present invention. The electron beam path610is scanned on a target605that generates an x-ray field for providing radiotherapy treatment. FIG.7depicts an exemplary figure-eight electron beam path710generated using a pair of magnetic steering coils as described herein according to embodiments of the present invention. The electron beam path710is scanned on a target705that generates an x-ray field for providing radiotherapy treatment. According to some embodiments, electronic signals or commands are used to control a radiotherapy device for producing a corresponding beam path based on a patient's treatment plan and one or more predetermined elementary shapes (e.g., a circle, an ellipse, a figure-eight, a clover leaf, etc.). For example, multiple shapes may be selected, and each shape may be assigned a specific weight that indicates the desired beam intensity for the corresponding shape. In one example, an electronic (e.g., digital) signal or command is sent from a power management or control unit to a pair of steering coils to vary the current or voltage over the steering coils to produce a desired shape. Moving the electron beam with respect to the target in this way reduces target heating and increases the output of the radiotherapy system. 
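One way such control signals could be synthesized is sketched below: each elementary shape is parameterized as a Lissajous figure, and each shape's weight is realized as a proportional share of the drive samples sent to the steering-coil pair. The parameterization, the sample counts, and the weighting scheme are assumptions made only for illustration; the embodiments are not limited to this approach.

```python
import math

def lissajous_path(a: int, b: int, amplitude_x: float, amplitude_y: float,
                   phase: float, samples: int = 1000):
    """Sample one period of a Lissajous figure; (a, b, phase) select the shape:
    (1, 1, pi/2) gives a circle or ellipse, (1, 2, 0) gives a figure-eight."""
    return [(amplitude_x * math.sin(a * 2 * math.pi * t / samples + phase),
             amplitude_y * math.sin(b * 2 * math.pi * t / samples))
            for t in range(samples)]

def coil_drive_sequence(shapes, total_samples: int = 10000):
    """Concatenate elementary shapes into one (x, y) drive sequence for the
    steering-coil pair, giving each shape a share of samples proportional to
    its weight (an assumed way of realizing per-shape beam intensity)."""
    total_weight = sum(w for _, w in shapes)
    sequence = []
    for params, weight in shapes:
        n = int(total_samples * weight / total_weight)
        sequence.extend(lissajous_path(*params, samples=n))
    return sequence

# Hypothetical shape set: a circle weighted twice as heavily as a figure-eight.
shapes = [((1, 1, 10.0, 10.0, math.pi / 2), 2.0),   # circle
          ((1, 2, 10.0, 5.0, 0.0), 1.0)]            # figure-eight
xy_drive = coil_drive_sequence(shapes)
```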
During operation, a control signal, such as an arbitrary sine wave, may be used to trigger the radiotherapy system to generate an electron beam periodically. According to some embodiments, the electronic signals or commands are used to control a radiotherapy device for producing arbitrary 2D shapes (e.g., a convex hull) using linear combinations of basic shape functions (e.g., a circle, an ellipse, a figure-eight, a clover leaf, etc.). Moreover, tiling two-dimensional projections of a treatment volume may be optimized for Rapid Arc type treatments that rapidly deliver precise intensity modulated radiation therapy (IMRT). As depicted inFIG.8, according to some embodiments, a computer system805generates or accesses a patient treatment plan for providing radiotherapy using a radiotherapy treatment system800. The patient treatment plan may include one or more pre-defined shapes associated with a treatment weight or magnitude. Based on the treatment plan (e.g., the shapes and weights), the computer system805sends one or more instructions to a power unit810of the radiotherapy treatment system800for controlling steering coils815of the radiotherapy treatment system800to generate electron beam paths according to the patient treatment plan. The power unit810may cause the steering coils815to shape the electron beam to produce the beam paths by varying a voltage or current of the control signals supplied by the power unit810to the steering coils815. The pre-shaped output beam is applied to a target820(e.g., a tungsten plate or wedge) that produces high-yield x-rays, and the resultant output x-ray distribution825is applied to a patient for performing radiotherapy on a target region thereof. With regard toFIG.9, an exemplary sequence of computer-implemented steps900for automatically generating a 2D periodic beam distribution to produce a treatment volume of x-rays using a radiotherapy system is depicted according to embodiments of the present invention. At step905, an electron beam is generated and emitted from an electron emission device, and the electron beam is steered onto a predetermined target at step910, for example, according to a treatment plan. At step915, the electron beam is dynamically scanned across the target in a 2D periodic path to produce a 2D periodic distribution of x-rays. At step920, a resultant treatment volume of the x-rays is produced by shaping the 2D periodic distribution of x-rays using a beam shaping device. The resultant treatment volume generated at step920can provide higher dosages in a shorter period of time compared to existing techniques, and can extend the lifetime of the x-ray target by distributing heat across the target surface. With regard toFIG.10, an exemplary sequence of computer-implemented steps1000for automatically producing a 2D periodic distribution of x-rays using a radiotherapy system is depicted according to embodiments of the present invention. At step1005, one or more shapes (e.g., spherical harmonic shapes) and corresponding weights for treating a target region are determined using a computer system. The target region may be determined according to a treatment plan generated based on a computed tomography (CT) scan, for example. At step1010, one or more control signals representing the shapes and weights are transmitted from the computer system to a power management unit. 
Thereafter, at step1015, the power management unit dynamically adjusts a current or voltage applied to the steering coils responsive to the control signals to produce x-rays (e.g., a 2D periodic distribution of x-rays) corresponding to the shapes and the weights. At step1020, a resultant treatment volume of the x-rays is generated by shaping the distribution of x-rays using a beam shaping device. The resultant treatment volume generated by step1020can provide higher dosages in a shorter period of time compared to existing techniques, and can extend the lifetime of the x-ray target by distributing heat across the target surface. Advantageously, embodiments according to the invention can be implemented without moving parts (e.g., without moving the x-ray target). However, a 2D periodic beam distribution can be achieved by moving the x-ray target with respect to the electron beam. Moving the electron beam with respect to the target reduces target heating and increases beam output. FIG.11shows a block diagram of an example of a computing system1100upon which one or more various embodiments described herein may be implemented in accordance with various embodiments of the present disclosure. The computer system1100may include a cloud-based computer system, a local computer system, or a hybrid computer system that includes both local and remote devices for providing radiotherapy using a 2D periodic distribution of x-rays. In a basic configuration, the computer system1100includes at least one processing unit1102and memory1104. This basic configuration is illustrated inFIG.11by the dashed line1106. The computer system1100may also have additional features and/or functionality. For example, the computer system1100may also include additional storage (e.g., removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape. Such additional storage is illustrated inFIG.11by removable storage1108and non-removable storage1120. The computer system1100may also contain communications connection(s)1122that allow the device to communicate with other devices, e.g., in a networked environment using logical connections to one or more remote computers. Furthermore, the computer system1100may also include input device(s)1124such as, but not limited to, a voice input device, touch input device, keyboard, mouse, pen, touch input display device, etc. In addition, the computer system1100may also include output device(s)1126such as, but not limited to, a display device, speakers, printer, etc. In the example ofFIG.11, the memory1104includes computer-readable instructions, data structures, program modules, and the like associated with one or more various embodiments1150in accordance with the present disclosure. However, the embodiment(s)1150may instead reside in any one of the computer storage media used by the computer system1100, or may be distributed over some combination of the computer storage media, or may be distributed over some combination of networked computers, but is not limited to such. The computer system1100may be configured to generate or access a radiotherapy treatment plan and to control one or more steering coils to produce beam paths according to the radiotherapy treatment plan. It is noted that the computer system1100may not include all of the elements illustrated byFIG.11. Moreover, the computer system1100can be implemented to include one or more elements not illustrated byFIG.11. 
It is pointed out that the computer system1100can be utilized or implemented in any manner similar to that described and/or shown by the present disclosure, but is not limited to such. Embodiments of the present invention are thus described. While the present invention has been described in particular embodiments, it should be appreciated that the present invention should not be construed as limited by such embodiments, but rather construed according to the following claims.
21,391
11857806
DESCRIPTION OF EXEMPLARY EMBODIMENTS OF THE INVENTION Optically stimulated luminescence (OSL) is one of the main types of passive dosimetry. Other types include thermo-luminescence, film, and track-etch dosimetry. Optical stimulation represents a way to retrieve stored energy from materials. OSL materials emit light when stimulated by photons of light; that is, OSL materials store energy but give off light when they are optically stimulated. In 2010 Ross S. Fontenot and research colleagues began investigating an organic compound known as europium tetrakis dibenzoylmethide triethylammonium (EuD4TEA). See W. A. Hollerman, R. S. Fontenot, K. N. Bhat, M. D. Aggarwal, C. J. Guidry, and K. M. Nguyen, “Comparison of triboluminescent emission yields for 27 luminescent materials,”Optical Materials34(9), 1517-1521 (2012), hereby incorporated herein by reference; see also, R. S. Fontenot, K. N. Bhat, W. A. Hollerman, and M. D. Aggarwal, “Triboluminescent materials for smart sensors,”Materials Today,14, 292-293 (2011), hereby incorporated herein by reference. Fontenot et al. found that triboluminescence emitted from EuD4TEA is bright enough to be seen in daylight, and has 206% of the emission yield of ZnS:Mn when subjected to low energy impacts. As distinguished from OSL materials, most materials cannot store energy and in fact are damaged by ionizing radiation. It has been found that both proton radiation and heavier ion radiation reduce the luminescence emitted from both inorganic and organic materials. In 1951, Birks and Black showed experimentally that the fluorescence efficiency of anthracene bombarded by alphas varies with the total fluence as I/I0=1/(1+N/N1/2) (1), where I, I0, N, and N1/2represent the fluorescence emission intensity, initial fluorescence emission intensity, total incident particle fluence, and the half brightness fluence, respectively. J. B. Birks, “Scintillations from Organic Crystals: Specific Fluorescence and Relative Response to Different Radiations,”Proc. Phys. Soc. Sect. A64,874 (1951). The units of I and I0are related to the number of fluorescence photons interacting with the detector. When plotting the reciprocal of the light ratio (I0/I) versus proton fluence, the resulting curve is linear with the slope equal to the inverse of N1/2. The corresponding curve intercept is unity. Schulman observed an effect similar to Equation (1) when organic anthracene was exposed to gamma irradiation. J. H. Schulman, H. W. Etzel, and J. G. Allard, “Application of Luminescence Changes in Organic Solids to Dosimetry,”J. Appl. Phys.28, 792-795 (1957). Black later observed no efficiency degradation when the phosphor was exposed to 40 keV electrons, since they only cause ionization damage with no atomic displacements. F. A. Black, “The Decay in Fluorescence Efficiency of Organic Materials on Irradiation by Particles and Photons,”Philos. Mag. Ser.7 44, 263-267 (1953). Northrop and Simpson found that the fluorescence efficiency deteriorated in a fashion similar to that measured previously for organic phosphors. D. C. Northrop and O. Simpson, “Electronic Properties of Aromatic Hydrocarbons II: Fluorescence Transfer in Solid Solutions.” Proc. R. Soc. London.Ser. A. Math. Phys. Sci.234, 136-149 (1956). Broser and Kallmann developed a similar relationship to Equation (1) for inorganic phosphors irradiated using alpha particles. These results indicate that radiation-produced quenching centers compete with emission centers for absorbed energy. 
Von Immanuel Broser und Hartmut Kallmann, “Über die Anregung von Leuchtstoffen durch schnelle Korpuskularteilchen I (Eine neue Methode zur Registrierung und Energiemessung schwerer geladener Teilchen),” Aus dem Kaiser-Wilhelm-Institut für physikalische Chemie und Elektrochemie, Berlin-Dahlem, Z. Naturforsch, 2A, 439 (1950). For the past decade researchers have been measuring N1/2for several single crystal, polycrystalline paint, and pressed tablet forms of selected rare earth phosphors prepared at ambient temperature.FIG.1is a tabular representation of selected proton N1/2data for several phosphor materials and forms. As shown inFIG.1, the resulting N1/2values range from 2.83×1010to 4.03×1014mm−2. The term “PC Paint” refers to PPMS paint with polycrystalline phosphor. The term “Thick Paint” refers to thick PPMS paint with larger grained (size shown) polycrystalline phosphor. The term “Single Crystal” refers to a single slice of the given phosphor crystal. These phosphors emitted light by radioluminescence when excited using a 1 or 3 MeV proton beam from a small electrostatic accelerator. The data shown inFIG.1is taken from the following references: W. A. Hollerman, S. W. Allison, S. M. Goedeke, P. Boudreaux, R. Guidry, and E. Gates, “Comparison of Fluorescence Properties for Single Crystal and Polycrystalline YAG:Ce,”Nucl. Sci. IEEE Trans.50, 754-757 (2003); W. A. Hollerman, N. P. Bergeron, F. N. Womack, S. M. Goedeke, and S. W. Allison, “Changes in Half Brightness Dose Due to Preparation Pressure for YAG:Ce,”Nucl. Sci. IEEE Trans.51, 1080-1083 (2004); W. A. Hollerman, J. H. Fisher, L. R. Holland, and J. B. Czirr, “Spectroscopic Analysis of Proton-Induced Fluorescence from Yttrium Orthosilicate,”Nucl. Sci. IEEE Trans.40, 1355-1358 (1993); W. A. Hollerman, J. H. Fisher, D. Ila, G. M. Jenkins, and L. R. Holland, “Proton-Induced Fluorescence Properties of Terbium Gallium Garnet,”J. Mater. Res.10, 1861-1863 (1995); W. A. Hollerman, S. M. Goedeke, N. P. Bergeron, R. J. Moore, S. W. Allison, and L. A. Lewis, “Emission Spectra from ZnS:Mn due to Low Velocity Impacts,” inPhotonicsSp.Environ. X, edited by E. W. Taylor (SPIE, San Diego, CA, USA, 2005), 58970E-10; W. A. Hollerman, S. M. Goedeke, N. P. Bergeron, C. I. Muntele, S. W. Allison, and D. Ila, “Effects of Proton Irradiation on Triboluminescent Materials such as ZnS:Mn,”Nucl. Instruments Methods Phys. Res. Sect. B Beam Interact. with Mater. Atoms241, 578-582 (2005); W. A. Hollerman, S. M. Goedeke, R. J. Moore, L. A. Boatner, S. W. Allison, and R. S. Fontenot, “Unusual Fluorescence Emission Characteristics from Europium-Doped Lead Phosphate Glass Caused by 3 MeV Proton Irradiation,” in 2007IEEE Nucl. Sci. Symp. (IEEE, Honolulu, HI, 2007), 1368-1372; F. N. Womack, S. M. Goedeke, N. P. Bergeron, W. A. Hollerman, and S. W. Allison, “Measurement of Triboluminescence and Proton Half Brightness Dose for ZnS:Mn,”IEEE Trans. Nucl. Sci.51, 1737-1741 (2004); W. A. Hollerman, R. S. Fontenot, S. Williams, and J. Miller, “Using Luminescent Materials as the Active Element for Radiation Sensors,”Proceedings SPIE9838, inSensors and Systems for Space Applications IX, edited by K. D. Pham and G. Chen (SPIE, Baltimore, MD, USA, 19 Apr. 2016), 98380Z; W. A. Hollerman, G. A. Glass, and S. A. Allison, “Survey of Recent Research Results for New Fluor Materials,”MRS Online Proc. Libr.560, 335-341 (1999); Stephen A. 
Williams, Half Brightness Measurements of Candidate Radiation Sensors, Master's Thesis, University of Louisiana at Lafayette, August 2016. Still referring to FIG. 1, the binders used for most of the polycrystalline samples were poly(phenyl methyl) siloxane (PPMS) and PMMA. Samples labeled “PC Paint” had phosphor grain sizes that were measured to be less than 10 μm. These materials were applied to an aluminum substrate using a standard airbrush with a paint containing approximately 70% PPMS and 30% phosphor powder; this formulation was found to give the toughest and most wear resistant paint. Samples labeled “Thick Paint” also used PPMS as a binder. However, these paints were too thick and the phosphor grains were too large to be sprayed using the airbrush; therefore, these paint materials were spread on an aluminum substrate in much the same way that jam is applied to bread. Small phosphor crystal slices were mounted directly to the sample holder for measurement. Proton beam current was kept small to minimize electrical discharge. With one exception, the Birks and Black relation describes the reduction in fluorescence yield for all inorganic materials tested between 1990 and the present. In that exceptional case, the emitted radioluminescence yield from a lead phosphate glass sample doped with 6 wt. % europium increased linearly to a maximum tested fluence of about 10^15 mm^-2. W. A. Hollerman, S. M. Goedeke, R. J. Moore, L. A. Boatner, S. W. Allison, and R. S. Fontenot, “Unusual Fluorescence Emission Characteristics from Europium-Doped Lead Phosphate Glass Caused by 3 MeV Proton Irradiation,” IEEE Nuclear Science Symposium Conference Record, Honolulu, Hawaii, 1368-1372 (2007), cited hereinabove. This “de facto” implantation could have changed the material properties of the glass and hence its band structure. Overall, N1/2 appears to be a good figure of merit to evaluate and compare the degradation of emission yield when a phosphor is exposed to ionizing radiation. Seven materials are listed in FIG. 1, viz.: EuD4TEA, YAG, Y2O2S, Gd2O2S, Y2SiO5, Tb3Ga5O12, and ZnS. With the exception of EuD4TEA, all of the materials listed in FIG. 1 are relatively radiation-resistant and would take a large fluence to reduce the luminescence enough to be useful as a radiation sensor. However, organics are much more sensitive to radiation, as Schulman et al. determined when they investigated the effects of gamma rays and electrons on the photoluminescence of anthracene and naphthalene. J. H. Schulman, H. W. Etzel, and J. G. Allard, J. Appl. Phys. 28, 792-795 (1957). In fact, organics can be six orders of magnitude more sensitive than the inorganics. Due to their sensitivity, organics may be useful for low fluence or low dose applications. EuD4TEA appears to be about three orders of magnitude more sensitive than the inorganic materials shown in FIG. 1. Accordingly, the present inventors believe that EuD4TEA can be used to detect stress/impacts and ionizing radiation at the same time. Tribble disclosed that a spacecraft at 1 AU from the sun will receive a 1 MeV proton fluence of less than 10^11 mm^-2 from a large solar event. Likewise, 1 MeV proton fluences in the Earth's radiation belts and the Earth-Moon-Sun Lagrange points will be even less than the 10^11 mm^-2 value from large solar events. A. C. Tribble, The Space Environment: Implications for Spacecraft Design, Princeton University Press, Princeton, NJ (2003).
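As a rough, illustrative application of Equation (1) to the figures quoted above, the short sketch below evaluates the expected loss of emission yield at the approximately 10^11 mm^-2 large-solar-event proton fluence cited from Tribble, using the two ends of the N1/2 range in FIG. 1 (the EuD4TEA entry at 2.83×10^10 mm^-2 and the most radiation-resistant entry at 4.03×10^14 mm^-2). The numbers are taken from the text; the code itself is merely an explanatory aid.

def light_ratio(fluence, n_half):
    """Birks and Black Equation (1): I/I0 at a given total fluence (both in mm^-2)."""
    return 1.0 / (1.0 + fluence / n_half)

SOLAR_EVENT_FLUENCE = 1.0e11   # mm^-2, upper-bound 1 MeV proton fluence quoted from Tribble

half_brightness = {
    "EuD4TEA (most sensitive entry in FIG. 1)": 2.83e10,
    "most radiation-resistant entry in FIG. 1": 4.03e14,
}

for name, n_half in half_brightness.items():
    remaining = light_ratio(SOLAR_EVENT_FLUENCE, n_half)
    print(f"{name}: I/I0 = {remaining:.3f} ({100.0 * (1.0 - remaining):.2f}% loss)")

Under this relation the EuD4TEA entry loses roughly 78% of its emission yield at such a fluence, while the hardest entry loses only a few hundredths of a percent, which is consistent with the inventors' conclusion, discussed next, that the organic material is the practical candidate for a low-fluence sensor.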
The present inventors concluded therefrom that EuD4TEA, which is characterized by an N1/2 of about 2.8×10^10 mm^-2, may be a good candidate for use as a personal proton fluence sensor for astronauts in vehicles flying in near earth space. In 2013, Fontenot et al. investigated the effects of uranium on the triboluminescence of EuD4TEA. Hereby incorporated herein by reference is R. S. Fontenot, W. A. Hollerman, K. N. Bhat, and M. D. Aggarwal, “Effects of Added Uranium on the Triboluminescent Properties of Europium Dibenzoylmethide Triethylammonium,” J. Lumin. 134, 477-482 (2013). Uranyl acetate, characterized by an activity of 0.2 μCi/g, was added to the synthesis process to determine the effects of uranium on the triboluminescent properties of EuD4TEA. The amount of uranyl acetate was varied such that the ratio of uranium to europium was 0-100 mol %. After the products formed, they were tested for their triboluminescent properties. The 4 mol % uranium initially increased the triboluminescent yield over that of pure EuD4TEA by approximately 80%. The sample was found to have an emission rate that was about twice the background. However, gains in TL yield decreased with time owing to the emission of radiation from the depleted 238U in these samples. The present inventors believe it likely that ionizing radiation emitted from the decay of 238U and its corresponding daughter products caused the reduction in emission yield that was observed by Fontenot et al. as reported in J. Lumin. 134, 477-482 (2013), cited hereinabove. In fact, the reduction in fluorescence intensity upon exposure to ionizing radiation from heavy charged particles appears to be similar to what is described in the Birks and Black relation. See J. B. Birks, “Scintillations from Organic Crystals: Specific Fluorescence and Relative Response to Different Radiations,” Proc. Phys. Soc. Sect. A 64, 874 (1951); W. A. Hollerman, N. P. Bergeron, F. N. Womack, S. M. Goedeke, and S. W. Allison, “Changes in Half Brightness Dose Due to Preparation Pressure for YAG:Ce,” Nucl. Sci. IEEE Trans. 51, 1080-1083 (2004). These radiation particles break chemical bonds, thus reducing the radiative emission in doped EuD4TEA. After 120 days, results showed that the triboluminescent yield for the 4 mol % uranium-doped samples was reduced by approximately 20% from the initial value measured when the sample was synthesized. At this rate, it should take approximately 335 days for the TL yield to be reduced to half of its original value. Fontenot et al., J. Lumin. 134, 477-482 (2013), aforecited. This investigation by Fontenot et al. indicated that uranium is a very poor dopant. In the present inventors' view, Fontenot et al.'s investigation also suggested that EuD4TEA is very sensitive to radiation and would serve as a good low-level real-time radiation sensor. In this regard, the present inventors note that EuD4TEA emits a (very) bright red radioluminescence under ionizing irradiation, such as illustrated in FIG. 2. In 2016 the present inventors (in particular Hollerman, Guardala, and Fontenot) conducted radiation research including radiation measurements. The present inventors irradiated EuD4TEA with gamma rays. Their investigation indicated that EuD4TEA is insensitive to MeV-class photons to a total dose of ~30 Mrad. Furthermore, of particular importance as relates to the present invention, no radioluminescence was observed under gamma irradiation. In addition, the present inventors irradiated EuD4TEA with ionizing radiation.
As depicted by way of example inFIG.2, EuD4TEA emitted a bright red light when it was exposed to 3.42 MeV protons, which is a type of ionizing radiation. As illustrated inFIG.3, the present inventors determined that the intensity of emission exponentially decreased as the fluence was increased. In fact, it took less than two minutes for the protons to completely quench the luminescence. In contrast, when the present inventors exposed EuD4TEA to gamma rays, no luminescence was observed. Moreover, by exciting EuD4TEA with UV light during gamma irradiation, no decrease in luminescence was observed using a standard spectrometer up to ˜30 Mrad. Future testing may be conducted by the present inventors to determine whether radioluminescence is observed for electrons and neutrons. Based on their findings, the present inventors concluded that EuD4TEA can be used as a visual sensor to detect ionizing radiation. The present invention, as exemplarily embodied, is in principle a visual radiation sensor that is based on the luminescent material europium tetrakis dibenzoylmethide triethylammonium (EuD4TEA). Multifarious modes of operation of an inventive sensor are possible, depending upon the inventive embodiment. The present invention's visual sensor includes a luminescent material that emits radioluminescence in an ionizing radiation environment and produces no light for nonionizing radiation such as gammas. The term “EuD4TEA-inclusive device,” as used herein, broadly refers to a distinct physical collection, quantity, mass, body, object, or structure that includes EuD4TEA. An inventively implemented EuD4TEA-inclusive device can take any of various solid, liquid, or mesophase forms and can have any of various physical properties, e.g., soft, hard, porous, non-porous, rigid, flexible, viscous, non-viscous, gelatinous, transparent, translucent, opaque, etc. Furthermore, an inventively implemented EuD4TEA-inclusive device can partially or fully describe any of various two-dimensional or three-dimensional geometric shapes, regular or irregular or some combination thereof, e.g., circular, ellipsoidal, triangular, rectangular, pentagonal, hexagonal, parallelepiped, rectangular prismatic, pyramidal, etc. The term “EuD4TEA-exclusive device,” as used herein, broadly refers to a distinct physical collection, quantity, mass, body, object, or structure that does not include EuD4TEA. According to exemplary inventive practice, an EuD4TEA-inclusive device is made by combining EuD4TEA material with a different material, such as by incorporating, mixing, attaching, joining, adhering, casting, coating, molding, weaving, polymerizing, or otherwise associating EuD4TEA with a non-EuD4TEA entity. An EuD4TEA-inclusive device is thereby provided that can be viewed or monitored by an inventive practitioner. For instance, EuD4TEA liquid material may be coated (e.g., painted) onto a non-EuD4TEA entity. Or, an EuD4TEA-inclusive device can be made through polymerization of EuD4TEA with a different material. Or, an EuD4TEA thin film can be deposited on a substrate. Or, a solid EuD4TEA material can be adhered or otherwise physically attached to a separate entity (such as by using an epoxy adhesive). Or, EuD4TEA crystals (e.g., powder or granules) can be dissolved in a liquid substance. Or, threads containing EuD4TEA can be spun and then woven into a fabric. 
The following references, each of which is hereby incorporated herein by reference, are instructive regarding synthesis and fabrication of EuD4TEA and of articles and materials (e.g., polymeric or fibrous) that include EuD4TEA: R. S. Fontenot, K. N. Bhat, W. A. Hollerman, and M. D. Aggarwal, “Europium Tetrakis Dibenzoylmethide Triethylammonium: Synthesis, Additives, and Applications,” Chapter 7 (pages 147-235) inTriboluminescence: Theory, Synthesis, and Application, editors David O. Olawale, Okenwa O. I. Okoli, Ross S. Fontenot, and William A. Hollerman, Springer International Publishing, Cham, Switzerland (2016); Ross S. Fontenot, Stephen W. Allison, Kyle J. Lynch, William A. Hollerman, and Firouzeh Sabri, “Mechanical, Spectral, and Luminescence Properties of ZnS:Mn Doped PDMS,”Journal of Luminescence, Volume 170, Part 1, pages 194-199, February 2016 (available online 27 Oct. 2015); Ross S. Fontenot, William A. Hollerman, Kamala N. Bhat, Mohan D. Aggarwal, and Benjamin G. Penn, “Incorporating Strongly Triboluminescent Europium Dibenzoylmethide Triethylammonium into Simple Polymers,”Polymer Journal, Volume 46, pages 111-116, 2014 (published 18 Sep. 2013); U.S. Pat. No. 7,338,877 B1, August Karl Meyer et al., “Multicomponent Fiber Including a Luminescent Colorant,” issued 4 Mar. 2008. Also of interest with regard to the present invention are the following references, each of which is hereby incorporated herein by reference: William A. Hollerman, Ross S. Fontenot, Paul Darby, Nick Pugh, John Miller, “Using Exotic Materials Like Eud4tea and Mgd4tea to Monitor Damage and Radiation Exposure in Extreme Environments,”ECSarXiv, The Electrochemical Society (ECS) (9 May 2018); W. A. Hollerman, R. S. Fontenot, S. Williams, and J. Miller, “Using Luminescent Materials as the Active Element for Radiation Sensors,”Proceedings SPIE9838, inSensors and Systems for Space Applications IX, edited by K. D. Pham and G. Chen (SPIE, Baltimore, MD, USA, 19 Apr. 2016), 98380Z; Stephen A. Williams,Half-Brightness Measurements of Candidate Radiation Sensors, Master's Thesis, a Thesis Presented to the Graduate Faculty of the University of Louisiana at Lafayette in Partial Fulfillment of the Requirements for the Degree Master of Science, University of Louisiana at Lafayette, Summer 2016, published by ProQuest LLC, publication number 10163329 (2016). Exemplary inventive practice is based in part on the phenomenon, discovered and studied by the present inventors, that when EuD4TEA is exposed to charged particles such as protons, a bright red light becomes visible that is indicative of the radiation. The gammas rays produced during such an interaction will not produce light. For instance, a health physicist can incorporate EuD4TEA inside a specimen to visually see his proton or carbon beam. The health physicist can thus fine-tune an ion beam—e.g., a proton beam or a carbon beam—for difficult cancer locations. The inventive methodology can be practiced not only to see proton beams but to see any and all types of particle beams. Exemplary inventive practice provides for use of EuD4TEA as the sole luminescent (e.g., radioluminescent or photoluminescent) substance. Nevertheless, some inventive embodiments provide for use of other luminescent materials, such as manganese-doped zinc sulfide nanoparticles (ZnS:Mn), either instead of or in addition to EuD4TEA. The inventive sensor apparatus as exemplarily embodied provides a visual indication—e.g., a bright red light—that ionizing radiation is present at a particular location. 
Emphasized in the instant disclosure are medical (e.g., oncological) applications of the present invention. According to exemplary medical embodiments of the present invention, a material that includes EuD4TEA is implemented to achieve more precise directing or imaging, and hence more precise radiotherapy, of cancerous tumors. For instance, in administering proton therapy, an inventive practitioner sees only the beam of protons. That is, the inventive practitioner sees the beam of protons but does not see gammas that could also be produced during the reaction, thereby allowing the inventive practitioner to fine-tune his/her beam to direct the proton radiation at a particular spot. Since inventive practice is efficacious at destroying living tissue, medical applications such as involving cardiac ablation (e.g., to cure heart arrhythmia, such as atrial fibrillation) and orthopedic surgery are also possible. Although medical applications of the present invention are emphasized in the instant disclosure, it is appreciated by the skilled artisan who reads the instant disclosure that the present invention admits of multifarious non-medical applications. For instance, in inventive applications involving security, a bright red luminescence (radioluminescence or photoluminescence) emanating from an inventive sensor can serve as a warning, alerting personnel as to a hazardous presence of ionizing radiation. With reference toFIGS.4and5, a human patient40beset with malignancies lies on a movable treatment couch90to receive external ion beam radiation therapy. Particle accelerator50(e.g., proton accelerator, ion accelerator, linear accelerator, cyclotron, or synchrotron) emits an ion beam500(e.g., proton beam or carbon beam) that is aimed to impinge upon marker devices100, one marker device100at a time. Inventive marker devices100are situated on patient40at selected locations. The markers100are utilized by inventive practitioners (who are, for instance, health physicists or nuclear physicists) to visually perceive and align their ion beams for multifarious medical and non-medical applications. Digital imaging device60(e.g., a device including a camera, image sensor, or photodetector) images the locations of impingement of ion beam500upon patient40's skin41, thus capturing each point of luminescent impingement of ion beam500upon a marker100. Every time that ion beam500impinges upon EuD4TEA that is contained in a marker100, that point of impingement luminesces a bright red light. Imaging device60focuses upon these manifestations of luminescence. Exemplary inventive practice implements a computer70, along with a computer display and one or more peripheral devices, to facilitate inventive practice of ion beam treatment. Computer70includes a processor and memory/storage, both volatile and non-volatile, and is connected to particle accelerator50, imaging device60, and equipment movement apparatus80. According to exemplary inventive embodiments, computer70acts as a processor-controller to control and receive signals or data from accelerator50, imaging device60, and movement apparatus80. The term “movement apparatus,” as used herein in the context of inventive practice, broadly refers to any of various mechanical and electro-mechanical devices that are known in the pertinent arts and that may be used to impart movability to accelerator50, imaging device60, and/or treatment couch90. 
For instance, movement apparatus 80 may include a medical gantry, which houses or supports accelerator 50 and provides movability for accelerator 50. The medical gantry may include a mechanism that encircles treatment couch 90 about the longitudinal axis of couch 90, with the patient 40 lying upon couch 90 in a longitudinal-axial direction. The gantry may serve to adjust the position of accelerator 50 both lengthwise along, and circumferentially around, couch 90's axis. As other examples, movement apparatus 80 may include devices such as gantries, cranes, tripods, dollies, etc., to provide movability for imaging device 60 or for treatment couch 90. Accelerator 50 and/or imaging device 60 and/or treatment couch 90 may be attributed with movability in six degrees of freedom. Inventive practice may provide for utilization of a conventional treatment couch that is electromechanically attributed with six degrees of freedom, e.g., up-and-down, sideways, and longitudinal-axial movability. According to some inventive embodiments, movement apparatus 80 represents a unit that is capable of imparting synchronous and/or separate movability to accelerator 50, imaging device 60, and couch 90. An ordinarily skilled artisan who reads the instant disclosure will be familiar with known systems and methods, in general, for selectively moving and configuring radiation delivery and/or luminescence-related imaging and/or patient reclining, in the context of administering radiation therapy in accordance with the present invention. Computer 70 has algorithmic software, resident in its memory, for controlling activation/inactivation, radiation transmission, movement, and positioning of accelerator 50 and/or imaging device 60 and/or treatment couch 90. Of particular note, according to exemplary inventive practice, computer 70 controls delivery and intensity of ion beam 500 with respect to patient 40. Patient 40 may be male or female, and may be a human or a dog or other animal, in keeping with the principles of the present invention. Hence, skin 41 may be human skin or animal skin, depending on the nature of the patient 40. Each marker 100 is implemented whereby an ion beam 500 is aimed at and passes through the marker 100 and a portion of the human 40 so that ion beam 500 precisely impinges upon the cancerous tumor 1000 that is located below the marker 100 and interior to the human 40. Otherwise expressed, ion beam 500 is transmitted through marker 100 and hits the malignant target 1000. As shown in FIG. 4, patient 40 is wearing no clothing above his waist, his head is shaved, and he has six markers 100 placed (e.g., adhered) directly on his skin, viz., three markers 100 on his head and three markers 100 on his torso. According to exemplary inventive practice, each marker 100 is an EuD4TEA-inclusive marker 100IN, and is situated at a different location on patient 40's skin 41. Depending on the inventive embodiment, markers 100IN may be embodied, for instance, as a patch, sticker, applique, gel, marking, etc., and may be transparent, translucent, or opaque. Patient 40's lower body is covered by a radiation-protective (e.g., lead) shield 800. Each EuD4TEA-inclusive marker 100IN is made so that EuD4TEA material is incorporated therein. According to some inventive embodiments, at least one EuD4TEA-inclusive marker 100IN is embodied as a solid structure. According to other inventive embodiments, at least one EuD4TEA-inclusive marker 100IN is embodied as a gel (e.g., a gelatin or other type of gelatinous or mesophase substance), which may be applied to human skin 41.
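Purely by way of a non-limiting software illustration, the processor-controller role of computer 70 described above (receiving images from imaging device 60 and commanding accelerator 50 and movement apparatus 80) might be organized around a feedback step such as the hypothetical Python sketch below. The class and function names, the red-channel threshold, and the pixel-to-millimeter scale are all assumptions introduced for the example, not elements of the disclosure.

from dataclasses import dataclass

import numpy as np

@dataclass
class BeamAdjustment:
    dx_mm: float
    dy_mm: float

def find_red_spot(rgb_image: np.ndarray, threshold: int = 200):
    """Return the centroid (row, col) of bright red pixels, or None if the beam
    is not visibly impinging on an EuD4TEA-inclusive marker."""
    red, green, blue = rgb_image[..., 0], rgb_image[..., 1], rgb_image[..., 2]
    mask = (red > threshold) & (green < 100) & (blue < 100)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def alignment_step(rgb_image, target_rc, mm_per_pixel=0.5):
    """One iteration of the hypothetical feedback loop: compare the luminescent
    spot against the intended marker location and return a gantry correction."""
    spot = find_red_spot(rgb_image)
    if spot is None:
        return None  # beam is off the marker entirely; a coarse search would be needed
    d_row, d_col = target_rc[0] - spot[0], target_rc[1] - spot[1]
    return BeamAdjustment(dx_mm=d_col * mm_per_pixel, dy_mm=d_row * mm_per_pixel)

In such a scheme the bright red luminescence from an EuD4TEA-inclusive marker 100 is the only signal required; when no red spot is detected, the beam is known not to be impinging on the marker.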
As another example, inventive practice is possible whereby an EuD4TEA-exclusive marker 100EX and an EuD4TEA-inclusive marker 100IN are collocated atop human skin 41. Exemplary inventive practice involving direct association of EuD4TEA-inclusive markers 100IN to skin 41 implements each EuD4TEA-inclusive marker 100IN either alone or in collocational combination with an EuD4TEA-exclusive marker 100EX. As correspondingly shown in FIG. 4 and FIG. 5, six markers 100 are placed upon the skin 41 of patient 40 at various locations of the head and upper body, and a radiation-protective (e.g., lead) shield 800 is used to cover the lower body. According to inventive practice exemplified by either FIG. 4 or FIG. 5, marker 100 may be variously embodied, depending on the inventive embodiment; for instance, marker 100 may be a patch, sticker, applique, gel, marking, etc., and may be transparent, translucent, or opaque. Similarly, depending on the inventive embodiment, covering device 300 may be transparent, translucent, or opaque. In contrast to the hatless, shirtless patient 40 shown in FIG. 4, the patient 40 shown in FIG. 5 is wearing two covering devices 300, viz., a head covering (e.g., hat, cap, or helmet) 300H and a torso covering (e.g., vest or other garment) 300V. Head covering 300H may be, for instance, a skullcap or closely fitted hat. As distinguished from FIG. 4, FIG. 5 depicts three markers 100 placed on head covering 300H and three markers 100 placed on torso covering 300V. Each inventive covering 300 includes two components, viz., at least one marker 100 and a covering form 200. As used herein in the context of inventive practice, the term “covering form” refers to the basic structure or structural framework 200 of a covering 300. In the inventive example shown in FIG. 6, inventive head covering 300H includes a head covering form 200H and three markers 100. In the inventive example shown in FIG. 7, inventive torso covering 300V includes a torso covering form 200V and three markers 100. According to exemplary inventive practice, a covering form 200 is made of a plastic or composite or other solid material (e.g., a transparent polymeric material) and defines the three-dimensional geometric shape of inventive covering 300, thus constituting the major structural component of inventive covering 300. In essence, covering form 200 represents the predominant structural mass and shape of inventive covering 300, unenhanced by one or more markers 100. Hence, hat form 200H is essentially inventive hat 300H, but without any markers 100 associated therewith. Similarly, vest form 200V is essentially inventive vest 300V, but without any markers 100 associated therewith. In exemplary practice of the present invention, a given marker 100 may be embodied as either an EuD4TEA-inclusive marker 100IN or an EuD4TEA-exclusive marker 100EX. When a marker 100 is implemented so as to affix directly to human skin 41, the marker 100 will generally be embodied as an EuD4TEA-inclusive marker 100IN. In contradistinction, when a marker 100 is implemented as part of an inventive covering 300 such as head covering 300H or torso covering 300V, the marker 100 may be embodied as either an EuD4TEA-inclusive marker 100IN or an EuD4TEA-exclusive marker 100EX. The characteristic of a marker 100 as either an EuD4TEA-inclusive marker 100IN or an EuD4TEA-exclusive marker 100EX may depend on the characteristic of its associated covering 300 as either an EuD4TEA-inclusive covering 300IN or an EuD4TEA-exclusive covering 300EX.
Some inventive embodiments provide for one or more EuD4TEA-exclusive markers 100EX in association with an EuD4TEA-inclusive covering form 200IN. Some other inventive embodiments provide for one or more EuD4TEA-inclusive markers 100IN in association with an EuD4TEA-exclusive covering form 200EX. According to exemplary inventive practice, the contrastive distinction in terms of EuD4TEA-containment and EuD4TEA-noncontainment serves to facilitate the inventive practitioner's effort to aim ion beam 500 with pinpoint accuracy in the direction of the targeted tumor 1000. According to some inventive embodiments, the visual contrast between an EuD4TEA-inclusive marker 100IN and human skin 41 will similarly assist the inventive practitioner in directing the ion beam 500. Exemplary inventive practice involves use of at least one EuD4TEA-inclusive device which, depending on the inventive embodiment, includes at least one EuD4TEA-inclusive marker 100IN and/or at least one EuD4TEA-exclusive marker 100EX. An inventive EuD4TEA-inclusive device may be embodied, for instance, primarily as an EuD4TEA-inclusive marker 100IN, or primarily as an EuD4TEA-inclusive covering 300. An inventive EuD4TEA-inclusive covering 300 may be embodied as an EuD4TEA-inclusive head covering 300H or as an EuD4TEA-inclusive body (e.g., torso) covering 300V. An EuD4TEA-inclusive head covering 300H may be, e.g., a kind of hat, cap, or helmet. An EuD4TEA-inclusive body covering 300V may be, e.g., a kind of vest, smock, apron, cast, or sleeve, serving to cover, e.g., an arm, a leg, the torso, or one or more areas thereof. FIGS. 8, 9, and 11 through 15 illustrate the positioning of a marker 100 in super-positional relationship, and in co-extensive and/or super-extensive relationship, with respect to a targeted tumor 1000 inside the head or body of a human 40. According to the view of FIGS. 8, 9, and 11 through 15, tumors 1000 are understood to be located in or below the skin 41 (e.g., in body organ or tissue 42) and to be seen in see-through fashion. Tumor 1000 is characterized by a tumor border 1001. According to exemplary inventive practice, the target zone 31 of inventive marker 100 extends to or beyond the perimeter of tumor 1000, so that the visible luminescence of an ion beam 500 passing through inventive marker 100 completely encompasses tumor 1000. Otherwise expressed, target zone 31 is at least coextensive with tumor 1000, wherein target zone delineation 33 circumscribes or surrounds tumor border 1001. Ion beam 500 passes through the entire area of target zone 31, thereby entirely enveloping the tumor 1000 situate beneath target zone 31. For instance, in completely encompassing tumor 1000, ion beam 500 may extend slightly or somewhat beyond tumor 1000 so as to be proximate the side surfaces and far end surfaces of tumor 1000, such as shown in FIGS. 8 and 9. As depicted in FIGS. 4 through 7 and 11 through 15, an inventive marker 100 can have any of a variety of shapes, sizes, and compositions. Markers 100 may be constituted as solid coverings having selected physical characteristics, such as small patches applicable at selected locations anywhere on the human body. A solid EuD4TEA-inclusive marker 100 may be “self-marked,” for instance by an EuD4TEA-exclusive marking (e.g., an “X” or an “O”) thereupon. Conversely, a solid EuD4TEA-exclusive marker 100 may be “self-marked,” for instance by an EuD4TEA-inclusive marking thereupon.
Whether EuD4TEA-inclusive or EuD4TEA-exclusive, a marker 100 may be devised to have practically any shape, including but not limited to curved, straight, rectilinear, curvilinear, polygonal (e.g., triangular, rectangular, pentagonal, hexagonal, etc.), cylindrical, annular, “X”-shaped, “O”-shaped, round (e.g., circular, oval, elliptical, etc.), or some combination thereof. An ion beam 500 that is directed toward the area of an EuD4TEA-inclusive marker 100 that is placed on human skin 41 will yield visible light only when ion beam 500 impinges upon the EuD4TEA-inclusive marker 100 itself, the entire EuD4TEA-inclusive marker 100 thus luminescing. The ion beam is transmitted through the marker and a portion of the human head or body, finally hitting the malignant target inside the human head or body. According to some inventive embodiments, a gel marker 100 is applied as a small spot on skin 41 with pinpoint accuracy. An ion beam 500 may be directed to the entirety of a gel spot marker 100, which is configured in size and shape to encompass the entire area of a malignancy 1000. As exemplarily embodied, an inventive marker 100 has an outer perimeter, referred to herein as either a target zone delineation 33 or a vicinity zone delineation 34, depending upon the inventive embodiment. If, for example, marker 100 is configured as completely EuD4TEA-inclusive, such as shown in FIGS. 12 through 14, the outer perimeter of marker 100 is target zone delineation 33. According to such inventive embodiments, marker 100 has a target zone 31 but does not have a vicinity zone 32. According to some embodiments of the present invention, a marker 100 is characterized by a target zone 31 and a vicinity zone 32. As illustrated in FIGS. 11 and 15, a target zone delineation 33 delimits a target zone 31, and a vicinity zone delineation 34 delimits a vicinity zone 32. For instance, it may be considered that in a two-zone marker 100, target zone 31 represents an EuD4TEA-inclusive central area of marker 100, and vicinity zone 32 represents an EuD4TEA-exclusive peripheral area of marker 100; marker 100 is configured as having an EuD4TEA-inclusive central region 31 and an EuD4TEA-exclusive peripheral region 32. Alternatively, it may be considered that in a two-zone marker 100, target zone 31 represents an EuD4TEA-exclusive central area of marker 100, and vicinity zone 32 represents an EuD4TEA-inclusive peripheral area of marker 100; marker 100 is configured as having an EuD4TEA-exclusive central region 31 and an EuD4TEA-inclusive peripheral region 32. By way of example, a continuous EuD4TEA-inclusive gelatinous marker 100 may be collocated beneath a toroidal or annular EuD4TEA-exclusive solid marker 100. The exterior marker 100 does not contain any EuD4TEA. According to this combination of an outer annular solid EuD4TEA-exclusive marker 100 component and an inner gelatinous EuD4TEA-inclusive marker 100 component that is round or oval or irregularly shaped, ion beam 500 is aimed at the EuD4TEA-inclusive interior gelatinous marker 100 component, which is deposited on human skin 41 in circumscriptive geometric relationship to tumor 1000. As another inventive example, a continuous EuD4TEA-exclusive solid marker 100 may be collocated atop and surrounded by a continuous EuD4TEA-inclusive gelatinous marker 100 deposited on human skin 41.
According to this combination of an inner solid EuD4TEA-exclusive component and an outer gelatinous EuD4TEA-inclusive component, ion beam 500 may be aimed at the EuD4TEA-exclusive solid centroid component encompassing tumor 1000, somewhat in a manner of hitting a non-luminescent bullseye encircled by a luminescent ring. By way of further inventive example, a transparent gelatinous EuD4TEA-inclusive material may be placed on a larger area encompassing the smaller area of the malignancy. The target area is within and a subset of the entire gel-covered area. A marking (e.g., an “X” or an “O”) is placed on the skin underneath the EuD4TEA-inclusive gel at the precise target location. The beam remains visible in the gel-covered area and is directed by the inventive practitioner to “hit the spot,” i.e., the marked target (e.g., an “X” or an “O”). The direction of the beam-induced luminescence is adjusted until it hits the spot, e.g., the center of the X-marking or the O-marking. Human skin (or animal skin) may be marked immediately beneath a transparent EuD4TEA-inclusive gel. For instance, EuD4TEA-inclusive gelatinous marker 100 may be applied directly to human skin 41 (e.g., directly deposited as a thin layer on human skin 41) and directly over an EuD4TEA-exclusive graphic-marking marker 100 (such as an “X” or an “O”), which for instance may be drawn upon or adhered to a small area of human skin 41. FIG. 10 conveys in a tabular presentation that, according to numerous variations of exemplary inventive practice, marker 100 may be: EuD4TEA-inclusive or EuD4TEA-exclusive; embodied as a solid marker 100 or a gelatinous marker 100; and implemented in direct association with a patient 40's skin 41 or in combination with a covering form 200. A covering form 200 may be an EuD4TEA-inclusive covering form 200 or an EuD4TEA-exclusive covering form 200. Similarly, a medical phantom form 250 may be an EuD4TEA-inclusive medical phantom form 250 or an EuD4TEA-exclusive medical phantom form 250. According to usual inventive practice of inventive medical phantoms 350, medical phantom form 250 is transparent so that the luminescent light is visible all the way through the medical phantom form 250 until reaching the site of the facsimile tumor 1000R. Inventive practice may involve use of an EuD4TEA-exclusive marker 100 in contrastive combination with an EuD4TEA-inclusive marker 100, or with an EuD4TEA-inclusive covering form 200, or with an EuD4TEA-inclusive phantom form 250. However, as indicated in FIG. 10, inventive practice will usually not involve use of an EuD4TEA-exclusive marker 100 in such a way that there is no proximate EuD4TEA-inclusive material to afford contrastive visibility or discernment to the EuD4TEA-exclusive marker 100. Similarly, inventive practice will usually not involve use of an EuD4TEA-inclusive marker 100 in such a way that there is no proximate EuD4TEA-exclusive material to afford contrastive visibility or discernment to the EuD4TEA-inclusive marker 100. A solid covering form 200 may be transparent or nontransparent (e.g., translucent or opaque), and may be rigid (e.g., firm) or flexible (e.g., resilient or elastic). According to usual inventive practice, a solid phantom form 250 is transparent and may be rigid or flexible. An example of a solid covering form 200 is a structure made of a solid plastic containing EuD4TEA that is, for instance, uniformly distributed throughout the structure.
As further examples, nontransparent solid covering form200may be a nontransparent solid EuD4TEA-containing plastic or a garment made of EuD4TEA-containing threads (e.g., wherein the threads are uniformly distributed throughout the fabric or cloth of the garment). An EuD4TEA-inclusive or EuD4TEA-exclusive marker100structure may be configured as a hat, cap, or helmet for covering at least a portion of the head, or as a vest, shirt, or shield for covering at least a portion of the torso, or as a wrapping, bracelet, or brace for covering at least a portion of an arm or a leg. Generally speaking, many malignancies are found in the head and/or torso, where major tissue and organs are located. An EuD4TEA-inclusive marker100(e.g., structure or gel) may be situated above a malignant target region so as to be co-extensive and/or super-extensive with respect to the malignant target region. The beam500is visible only when it hits the malignancy1000; when the beam500does not hit the malignancy1000, the beam500is not visible. If marker100is strictly co-extensive with respect to malignancy1000, the entire luminescence of the beam corresponds to the entire malignant area. If marker100is at least partly super-extensive with respect to malignancy1000, beam500is visible not only in the malignant area1000but also in the vicinity of the malignant area1000. Ion beam500may be precisely aimed at a malignant point in a manner akin to adjusting the direction of a rangefinder or rifle using a laser beam to indicate where a laser beam hits a target point. FIGS.16and17illustrate, by way of example of inventive practice, two different inventive embodiments of a medical phantom350, which includes a medical phantom form250and at least one tumor reproduction1000R located in the interior of medical phantom350. Medical phantom form250is a replica of a person or portion of a person, e.g., a person's head or body or body part.FIG.16depicts an inventive human head phantom350H having phantom head form250H and tumor reproduction1000R.FIG.17depicts an inventive human torso phantom350V having phantom torso form250V and tumor reproduction1000R. The three-dimensional phantom form250may include three-dimensional representations of anatomical features such as ribs, scapulae, and clavicles. Phantom form250and tumor reproduction1000R serve to simulate or duplicate, to scale, the actual human or human part and the tumor situate therein of the patient being treated. A phantom form250may be made of, for instance, a solid polymeric (e.g., plastic) material or a composite material. An inventive medical phantom350can be fabricated, for example, whereby some EuD4TEA is incorporated (e.g. doped) into PDMS (polydimethylsiloxane). An inventive phantom form250includes EuD4TEA but does not include any tumor reproduction1000R. The EuD4TEA-containing PDMS medical phantoms350may be molded or casted, for instance, as duplicating real-life shapes of human or animal patients. In the fabrication process, the tumor replica(s)1000R is/are placed inside phantom350with requisite precision in correspondence to the locations of the actual tumor(s)1000inside person40. In accordance with frequent practice of the present invention, inventive practice of covering form200and inventive practice of phantom form250may involve selection of same or similar materials. Like an exemplary transparent solid covering form200, an exemplary transparent solid phantom form250is a structure made of a transparent solid plastic containing EuD4TEA uniformly throughout the structure. 
An inventive medical phantom 350 may represent practically any transparent solid device of selected size and shape and selected firmness/rigidity or flexibility. For instance, a transparent EuD4TEA-containing phantom form 250 structure can be configured as a head or a torso. According to exemplary inventive practice involving one or more inventive medical phantoms 350, a simulation of radiation delivery using an inventive phantom 350 is performed in order to plan a subsequent actual delivery of the radiation. Plural inventive simulations may be performed in order to plan plural inventive deliveries of radiation. For example, an inventive medical phantom 350 may be situated on a treatment couch 90 precisely where the patient 40 or corresponding portion thereof will subsequently be situated. The medical physicist or radiologist can adjust ion beam 500 with respect to the geometric and radiative characteristics of the ion beam. For instance, the direction and intensity of a beam may be adjusted in accordance with the visibility of the beam in the three-dimensional space within the phantom structure. The brighter the luminescence (red light), the more intense the beam. It is desirable to concentrate the most intense ion radiation on the malignancy. The most intense beams can be adjusted in strength and direction to maximize radiation delivery in comportment with the exact locations of the malignancies in the three-dimensional space inside the phantom. In terms of visual contrast between EuD4TEA-inclusive material and EuD4TEA-exclusive material, there are at least two EuD4TEA-related modes of inventive practice involving a medical phantom 350. According to a first mode of inventive phantom practice, phantom form 250 is transparent and EuD4TEA-inclusive, and interior tumor reproduction 1000R is EuD4TEA-exclusive (and may be either transparent or non-transparent). EuD4TEA-exclusive tumor reproduction 1000R is contained in an EuD4TEA-inclusive matrix, viz., EuD4TEA-inclusive phantom form 250. According to a second mode of inventive phantom practice, phantom form 250 is transparent and EuD4TEA-exclusive, and interior tumor reproduction 1000R is transparent and EuD4TEA-inclusive. EuD4TEA-inclusive tumor reproduction 1000R is contained in an EuD4TEA-exclusive matrix, viz., EuD4TEA-exclusive phantom form 250. The present invention's first mode of phantom structure is often a preferable mode of practice. According to exemplary inventive practice of the first phantom mode, the transmission of ion beam 500 is directed linearly through the interior of phantom 350, with tumor reproduction 1000R in the direct path of ion beam 500. The geometric path of the ion beam 500 continues in a straight line commencing from accelerator 50 and proceeding through and beyond the inventive phantom 350. While traveling through a portion of the interior of phantom form 250, ion beam 500 is visible until reaching tumor reproduction (replica) 1000R. Ion beam 500 appears as a “beacon” of visible light inside phantom form 250, thereby conveying where the ion particle radiation is going and where the ion particle radiation is not going. Since the phantom form 250 is EuD4TEA-inclusive and the tumor reproduction 1000R is EuD4TEA-exclusive, a visible ion beam 500 ceases to be visible when it impinges upon tumor reproduction 1000R.
Depending on the intensity of the ion beam500after having exited (completely passed through) the tumor reproduction1000R, the ion beam500may become visible again (e.g., brightly visible, moderately visible, or slightly visible) or may remain invisible. The intensity of the ion beam500is proportional to the EuD4TEA-caused visibility of the ion beam500. The more intense is the ion beam, the brighter is the ion beam. The brighter is the ion beam, the more intense is the ion beam. Decreased brightness of the ion beam implies decreased intensity of the ion beam. Decreased intensity of the ion beam implies decreased brightness of the ion beam. The inventive practitioner adjusts the beam to maximize the intensity of the radiation at the exact location of tumor reproduction1000R, and to minimize the intensity of the radiation along the path of the beam other than this exact location of tumor reproduction1000R. A paramount goal is to maximize the benefit of the radiation treatment in terms of defeating cancerous tissue, and to minimize the detriment of the radiation treatment in terms of damaging healthy tissue. According to exemplary inventive practice of the second phantom mode, the ion beam500transmission is similarly directed through the interior of phantom form250, with tumor reproduction1000R in the direct path of ion beam500. While traveling through a portion of the interior of phantom form250, ion beam500is invisible until reaching tumor reproduction1000R. When ion beam500impinges upon tumor reproduction1000R, ion beam500appears inside phantom form250as a region of visible light corresponding to and coincident with tumor reproduction1000R. Since phantom form250is EuD4TEA-exclusive and tumor reproduction1000R is EuD4TEA-inclusive, ion beam500is invisible and becomes visible when it impinges upon tumor reproduction1000R. Inventive practice of medical phantoms350is capable of optimizing the radiation delivery to the real human patient40by experimentally defining, in advance of the radiation delivery, parameters including the patient40's entry point (location) E of ion beam500, the direction Ø of ion beam500, the depth penetration p of ion beam500, and the intensity of ion beam500. For instance, the beam entry point E, the beam direction Ø, the depth penetration p, and the location (e.g., center or centroid) of the tumor reproduction1000R may each be at least partially described in three dimensions, such as in three-dimensional Cartesian space. For instance, ion beam depth penetration p may be at least partially described in terms of the distance traveled by the beam between the human skin41surface and the tumor1000, or the distance traveled by the beam between the human skin41surface and the endpoint of a beam segment that encompasses the tumor1000(such as shown inFIGS.8and9). The empirical data that is inventively acquired using one or more phantoms350may thus be corresponded and translated to the ensuing actual delivery of the ion particle radiation to the patient. Beam entry point E is the location at which ion beam500intersects the surface of the patient's skin41, when ion beam500enters patient40. According to some inventive embodiments, beam entry point E is predetermined (preselected), and the remaining parameters are then experimentally determined. For instance, a marker150may be initially placed on an inventive phantom350, and then the inventive test simulation may be conducted. Marker150represents the entry point E of ion beam500. 
The simulative determination of optimal parameters of the radiation delivery thus presupposes this location E of the phantom marker150. According to other inventive embodiments, beam entry point E is among the parameters that are experimentally determined. According to exemplary inventive practice implementing inventive phantoms350, the entry point E of the phantom350correlates to the marker100of the human40. If such inventive practice involves preselection of the entry point E, a marker150may be used during the simulative testing to indicate the entry point E, which in turn correlates to the marker100of the human40. A proton beam (e.g., 250 MeV) is invisible to the naked eye. As such, according to conventional practice of ion beam therapy, theoretical calculations must be performed (e.g., by a medical physicist) to determine the location and direction of an ion beam inside a patient's body. Medical physicists currently base their calculations on a Bragg pattern. As compared with current oncological practice, the present invention is significantly advantageous in its ability to administer pinpoint delivery of radiation. In major contradistinction to conventional practice, a practitioner of the present invention can visually “see” an ion beam500through utilization of an inventive medical phantom350, which contains EuD4TEA material. The inventive practitioner (e.g., medical physicist) can fine-tune the ion beam location and orientation/trajectory based on the visible light emitted by the EuD4TEA. According to exemplary inventive practice, when an ionization beam such as proton or carbon enters an inventive phantom, the beam distributes its energy in a Bragg pattern. This deposited energy interacts with the EuD4TEA material inside the inventive phantom350, causing it to produce a visible red light. The amount of light produced is directly proportional to the amount of energy deposited. Since inventive phantom form250is transparent, an inventive practitioner (e.g., a medical physicist) may use this visible red light to fine-tune ion beam500such that the brightest spot of ion beam500is located on the cancerous entity (e.g., tumor)1000. The path of ion beam500extends in a straight geometric line from accelerator50to and beyond tumor1000and continues outside the patient's head or body40. Similarly, the path of ion beam500extends in a straight geometric line from accelerator50to and beyond tumor replica1000R and continues outside phantom350. Variations in the intensity of beam500may manifest in EuD4TEA-inclusive phantom350at every point along the path of beam500, including before tumor1000impingement, during tumor1000impingement, and after tumor1000impingement. As a general rule for inventively administering radiation therapy to a living being, the less intense the beam radiation before and after hitting tumor1000, the better. Tumor1000is the only location of the transmitting medium that is intended to be radiated. In fact, all of the non-malignant portions of the transmitting medium should not be radiated at all. As a practical matter in some inventive applications, a non-malignant portion of the transmitting medium cannot be entirely free of radiation when radiation is administered. In such cases, the inventive practitioner will usually strive, at least, to radiate the non-malignant portion as minimally as possible. Multifarious embodiments and applications of the present invention are possible. 
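To make the brightness-guided fine-tuning described above more concrete, the following sketch illustrates, under stated assumptions, how a planned beam could be checked numerically against a tumor replica inside an EuD4TEA-inclusive phantom 350. A Gaussian-shaped depth-brightness curve is used only as a crude stand-in for the Bragg pattern (a real profile would come from measurement or a particle-transport calculation), and the depths and widths are arbitrary example values.

import numpy as np

def assumed_depth_brightness(depth_mm, peak_depth_mm, peak_width_mm=4.0):
    """Stand-in for the red-light brightness along the beam path inside the phantom.
    A real profile would follow the measured Bragg pattern for the chosen energy."""
    return np.exp(-0.5 * ((depth_mm - peak_depth_mm) / peak_width_mm) ** 2)

def peak_hits_replica(peak_depth_mm, replica_interval_mm, depths=np.linspace(0, 200, 2001)):
    """Return True if the brightest simulated point lies within the tumor-replica
    depth interval measured from the entry point E."""
    brightness = assumed_depth_brightness(depths, peak_depth_mm)
    brightest_depth = depths[np.argmax(brightness)]
    lo, hi = replica_interval_mm
    return lo <= brightest_depth <= hi

# Example: replica occupies depths 92-108 mm below the entry point E.
print(peak_hits_replica(peak_depth_mm=100.0, replica_interval_mm=(92.0, 108.0)))  # True
print(peak_hits_replica(peak_depth_mm=70.0,  replica_interval_mm=(92.0, 108.0)))  # False

The same test, repeated while varying the assumed peak depth or entry point E, mimics the fine-tuning that the inventive practitioner performs visually with the phantom.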
Uniquely and with great effectiveness, the present invention avails itself of a distinctive capability of EuD4TEA. In medical contexts such as cancer treatment involving radiation therapy, exemplary practice of the present invention uses EuD4TEA-inclusive material as a beacon for proton therapy and other types of ion beam therapy. An entire working system according to the present invention may include, for example, an accelerator (e.g., cyclotron) and various feedback and control systems, which are employed based on signals received from an EuD4TEA beacon that guides as to where the ion beam is going and where the ion beam is not going. The present invention, which is disclosed herein, is not to be limited by the embodiments described or illustrated herein, which are given by way of example and not of limitation. Other embodiments of the present invention will be apparent to those skilled in the art from a consideration of the instant disclosure or from practice of the present invention. Various omissions, modifications, and changes to the principles disclosed herein may be made by one skilled in the art without departing from the true scope and spirit of the present invention, which is indicated by the following claims.
56,287
11857807
DETAILED DESCRIPTION Various embodiments hereof provide approaches to planning treatments involving a microbubble-enhanced ultrasound procedure for targeted drug delivery, radiation therapy or any other applicable therapeutic methods within one or more tissue regions that include target tumor tissue, a target BBB region in the vicinity of the target tumor tissue, and in some embodiments, non-target tissue. Treatment planning often has the dual goals of achieving the desired treatment effect in the target tumor tissue and target BBB region (e.g., tumor ablation and BBB disruption, respectively), while at the same time avoiding damage to non-target tissue.FIG.1is a flow chart illustrating an exemplary treatment-planning approach100in accordance with various embodiments. As shown, treatment planning may begin, in step102, with acquiring images of the patient's anatomy within a region of interest using an imaging device. The images may be 3D images or a set of 2D image slices suitable for reconstructing 3D images of the anatomic region of interest. The imaging device may be, for example, a magnetic resonance imaging (MRI) device, a computer tomography (CT) device, a positron emission tomography (PET) device, a single-photon emission computed tomography (SPECT) device, or an ultrasonography device. In step104, a target tumor region and a target BBB region in the vicinity of the target tumor for which treatment is to be planned are identified automatically using suitable image-processing techniques or manually by a user; the target tumor and BBB regions may be defined as collections of 3D voxels. In some treatment scenarios, the target tumor consists of multiple discontiguous regions; for example, a cancer patient may be afflicted with multiple tumors or metastases that are to be treated individually. As a result, multiple BBB regions corresponding to the discontiguous tumor regions may be disrupted sequentially or substantially simultaneously. Even if the target tumor is a single, contiguous tissue region, it may span a large volume such that more than one target BBB region (disjunctive or overlapping) is necessary for effective treatment. For example, this may advantageously allow better control over ultrasound beam properties for each of the target BBB regions. In various embodiments, the target BBB region is divided in a manner that allows treatment planning for different BBB regions to be performed sequentially, possibly taking the effect of disruption of one BBB region into account when planning treatment for subsequent BBB regions, but without incurring the risk of revisiting a treatment procedure for the BBB region for which treatment planning was previously deemed complete. Once the target BBB region is selected for treatment planning, ultrasound parameter values (e.g., amplitudes, frequencies, phases and/or directions associated with the transducer elements, or time intervals between consecutive series of sonications) are computed so that a focal zone is created at the target BBB region (in step106). This step generally applies a physical model and takes into account the geometry as well as the position and orientation of the ultrasound transducer relative to the target BBB region. In addition, anatomic characteristics (e.g., the type, property, structure, thickness, density, etc.) 
and/or material characteristics (e.g., the energy absorption of the tissue at the employed frequency or the speed of sound) of the intervening tissue located on the beam path between the transducer and the target BBB region may be included in the physical model in order to predict and correct for beam aberrations resulting therefrom. In one implementation, the anatomic characteristics of the intervening tissue are acquired using the imaging device. For example, based on the acquired images of the anatomic region of interest, a tissue model characterizing the material characteristics of the target and/or non-target regions may be established. The tissue model may take the form of a 3D table of cells corresponding to the voxels representing the target and/or non-target tissue; the values of the cells represent characteristics of the tissue, such as the speed of sound, that are relevant to beam aberrations when traversing the tissue. The voxels are obtained tomographically by the imaging device and the type of tissue that each voxel represents can be determined automatically by conventional tissue-analysis software. Using the determined tissue types and a lookup table of tissue parameters (e.g., speed of sound by type of tissue), the cells of the tissue model may be populated. Further detail regarding creation of a tissue model that identifies the speed of sound, heat sensitivity and/or thermal energy tolerance of various tissues may be found in U.S. Patent Publication No. 2012/0029396, the entire disclosure of which is hereby incorporated by reference. Accordingly, based on the anatomic and/or material characteristics of the target/non-target tissue, the physical model may predict ultrasound beam paths, ultrasound energy delivered to the target BBB region and/or non-target regions, the conversion of ultrasound energy or pressure into heat and/or tissue displacement at the target BBB region and/or non-target regions, and/or the propagation of the induced effects through the tissue. Typically, the simulation may take the form of (or include) differential equations. For example, the physical model may include the Pennes model and a bioheat equation to simulate heat transfer in tissue. Approaches to simulating the sonications and their effects on the tissue are provided, for example, in U.S. Patent Publication No. 2015/0359603, the entire disclosure of which is hereby incorporated by reference. In an optional step108, microbubbles having selected characteristics are computationally introduced to the defined target BBB region; the microbubble characteristics may include an agent type, a size distribution, a concentration, an administration profile (e.g., a dose and an administration timing) and/or an associated location of the site where the microbubbles are computationally administered. At a relatively low acoustic power (e.g., 1-2 Watts above the microbubble-generation threshold), the generated microbubbles tend to undergo oscillation with compression and rarefaction that are equal in magnitude and thus the microbubbles generally remain unruptured (i.e., a “stable cavitation”). At a higher acoustic power (e.g., more than 10 Watts above the microbubble-generation threshold), the microbubbles undergo rarefaction that is greater than compression, which may cause inertial (or transient) cavitation of the microbubbles in which the microbubbles in the liquid rapidly collapse. The microbubble cavitation, in turn, may result in transient disruption of the tissue in the target BBB region. 
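The tissue model described above, namely a 3D table of cells mirroring the imaged voxels and populated from a lookup table of per-tissue parameters, can be sketched in a few lines of Python. The tissue labels, parameter values, and array shape below are placeholders chosen for illustration; in practice the labels would come from the tissue-analysis software and the parameter values from published acoustic property tables.

import numpy as np

# Hypothetical per-tissue acoustic parameters: speed of sound (m/s) and
# attenuation (dB/cm/MHz). Values are illustrative placeholders only.
TISSUE_TABLE = {
    "water": {"speed_of_sound": 1482.0, "attenuation": 0.0022},
    "brain": {"speed_of_sound": 1546.0, "attenuation": 0.6},
    "skull": {"speed_of_sound": 2800.0, "attenuation": 7.0},
    "tumor": {"speed_of_sound": 1550.0, "attenuation": 0.8},
}

def build_tissue_model(label_volume: np.ndarray, parameter: str) -> np.ndarray:
    """Populate a 3D cell table with one acoustic parameter per voxel,
    given a volume of tissue-type labels produced by segmentation."""
    model = np.empty(label_volume.shape, dtype=float)
    for tissue, params in TISSUE_TABLE.items():
        model[label_volume == tissue] = params[parameter]
    return model

# Toy 3x3x3 volume standing in for a segmented image volume.
labels = np.full((3, 3, 3), "brain", dtype=object)
labels[0, :, :] = "skull"
labels[1, 1, 1] = "tumor"
speed_map = build_tissue_model(labels, "speed_of_sound")
print(speed_map[1, 1, 1], speed_map[0, 0, 0])   # 1550.0 2800.0

The resulting per-voxel map of, e.g., speed of sound is what the physical model consumes when predicting beam aberrations along the path from the transducer to the target BBB region.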
In various embodiments, the microbubble characteristics are empirically determined based on retrospective study of the patients experiencing a microbubble-enhanced ultrasound procedure. For example, the retrospective study may establish relationships between the microbubble response (e.g., a temporal acoustic effect of the microbubbles after each ultrasound sonication pulse and/or a cumulative effect of the microbubbles over a single sonication or multiple sonications) and microbubbles having various characteristics at given ultrasound settings. Microbubble characteristics having the desired microbubble response for disrupting the target BBB region may then be selected. Additionally or alternatively, the microbubble characteristics may be selected using the physical model. For example, the size distribution of the microbubbles may be selected such that a significant fraction (e.g., more than 50%, 90%, 95%, or 99% or more) of the microbubbles have a radius below that corresponding to a resonance frequency equal to the applied ultrasound frequency (so that the microbubble resonance frequency exceeds the applied ultrasound frequency). This may maximize microbubble response to the applied ultrasound at the target BBB region relative to the microbubble response within the healthy tissue surrounding the target BBB region, as well as tissues along the path between the transducer and the target BBB region. As a result, microbubbles at the non-target region are unresponsive to the relatively low acoustic field to avoid tissue damage, whereas microbubbles at the target region (where the acoustic field is relatively high due to the focused beam) may oscillate and/or collapse, thereby causing tissue disruption effect. Approaches to determining and selecting a desired size distribution of microbubbles are provided, for example, in U.S. Patent Application entitled “Ultrasound Frequency and Microbubble Size Optimization in Microbubble-Enhanced Ultrasound Treatment” filed on even date herewith, the contents of which are incorporated herein by reference. In step110, a series of sonications is then computationally applied to the microbubbles at the target BBB region in accordance with the determined ultrasound parameter values. In step112, the simulation may predict an acoustic response from the microbubbles based on the applied ultrasound parameter values, the geometry of the ultrasound transducer and its position and orientation relative to the microbubbles, the anatomic/material characteristics of the intervening tissue and the characteristics of the microbubbles. For example, the acoustic response may include an acoustic response level representing the temporal acoustic effect of the microbubbles after each sonication pulse and/or an acoustic response dose that represents the cumulative effect of the microbubbles over a single sonication or multiple sonications. In some embodiments, the treatment-planning simulation computationally predicts creation of additional microbubbles induced by the applied acoustic energy and the injected microbubbles. Therefore, the acoustic response from the microbubbles may be predicted from a combination of the injected microbubbles and microbubbles additionally created in the target BBB region during sonications. In step114, the parameter values (e.g., power of energy) of the applied ultrasound waves/pulses and/or the characteristics of the injected microbubbles are computationally adjusted until a microbubble cavitation event occurs at the target BBB region. 
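The size-selection rule discussed above, under which most microbubbles are smaller than the radius that resonates at the applied frequency so that their resonance frequencies lie above it, can be checked numerically. The sketch below uses the classical Minnaert expression for an uncoated gas bubble purely as a stand-in resonance model (the disclosure does not prescribe a particular model, and shelled contrast agents deviate from it), together with an assumed lognormal size distribution; all parameter values are illustrative.

import numpy as np

def minnaert_resonance_radius(freq_hz, gamma=1.4, p0=101_325.0, rho=1000.0):
    """Radius (m) of an uncoated gas bubble whose Minnaert resonance equals freq_hz.
    Used here only as a stand-in model; coated microbubbles behave differently."""
    return np.sqrt(3.0 * gamma * p0 / rho) / (2.0 * np.pi * freq_hz)

def fraction_below_resonant_radius(freq_hz, median_radius_m, sigma_ln=0.4, n=200_000, seed=0):
    """Fraction of an assumed lognormal radius distribution whose resonance
    frequency exceeds the applied frequency (i.e., radius below the resonant radius)."""
    rng = np.random.default_rng(seed)
    radii = rng.lognormal(mean=np.log(median_radius_m), sigma=sigma_ln, size=n)
    return float(np.mean(radii < minnaert_resonance_radius(freq_hz)))

# Example: 1 MHz drive frequency and a 1.5 um median bubble radius.
print(round(minnaert_resonance_radius(1.0e6) * 1e6, 2), "um resonant radius")
print(fraction_below_resonant_radius(1.0e6, median_radius_m=1.5e-6))

For a 1 MHz drive the stand-in model puts the resonant radius near 3 μm, so a distribution whose median radius sits well below that keeps the large majority of bubbles above resonance frequency, in the spirit of the fractions of 90% or more mentioned above.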
For example, increasing the acoustic power may generally induce microbubble cavitation. But because increasing the acoustic power may also cause damage to the intervening tissue and/or tissue surrounding the target BBB region, in some embodiments, the acoustic power may have an upper boundary. Once the boundary is reached, the simulation may increase, for example, the microbubble concentration and/or microbubble size (such that the resonance frequency of microbubbles differs less from the ultrasound frequency) in order to cause the cavitation, instead of increasing acoustic power; this avoids undesired damage to the non-target tissue. In step116, the tissue disruption effect of the target BBB region and/or non-target regions resulting from microbubble cavitation is computationally predicted based at least in part on the established tissue model of the target BBB region and/or non-target regions. The disruption effect may include the volumetric size of the disrupted BBB region and the estimated degree of disruption, and in one implementation, is captured with a suitable biological parameter (e.g., a vessel size in the target BBB region, the perfusion rate in the target BBB region, an opening size or degree of the target BBB region, the rate at which molecules pass through the BBB region (e.g., the tissue permeability rate, and/or the size of the molecules that are to pass through the BBB region) associated with the target BBB region. In one embodiment, the disruption effect is predicted based on a retrospective study of the patients who have undergone ultrasound-induced cavitation prior to clinical treatment. For example, before clinical treatment, an MRI contrast agent having substantially the same molecular weight (or other size metric) as the therapeutic agent to be injected for tumor treatment may be introduced into the target BBB region after the cavitation event occurs. By monitoring the way that the MRI contrast agent penetrates and diffuses in the target BBB region, it can be determined whether and to what extent the target BBB region has been opened to the therapeutic agent to be injected. The relationship between the cavitation effect and tissue disruption effect may then be established and incorporated into the physical model underlying the simulation. This relationship may empirically improve as the number of patients and/or performed treatments increases. Approaches to empirically establishing the relationship between the microbubble cavitation and tissue disruption effect are provided, for example, in U.S. Patent Application entitled “Cavitation-Enhanced Targeted Drug Delivery and Dosing” filed on even date herewith, the contents of which are incorporated herein by reference. In some embodiments, the treatment-planning simulation also predicts the tissue disruption effect of the non-target regions using similar approaches. In step118, the computed tissue disruption effect of the target BBB region and/or non-target regions is then compared against a target objective (such as the desired target value of the biological parameter described above and/or the safety threshold associated with the non-target region). If the computation deviates from the desired target objective by more than a tolerable amount (e.g., 10%), the simulated treatment procedure is adjusted (step120). The adjustment may be implemented in two approaches. 
In the first approach, the treatment plan is rolled back, and steps 106-120 are iteratively performed until the simulated tissue disruption effect achieves the target objective. In another approach, the treatment plan is extended to include further sonications—i.e., new settings of the treatment profile parameters (e.g., ultrasound parameters and/or characteristics of additional microbubbles) are introduced to treat the target BBB region; again, the treatment plan may be extended until the simulated tissue disruption effect achieves the target objective (step 122). As a result, the 4D treatment plan allows various treatment profile parameters (e.g., sonication properties and/or the microbubble characteristics) to be dynamically adjusted for efficiently and safely treating the 3D target BBB region as a function of time. Adjustment of the treatment profile parameters generally involves adjusting the microbubble characteristics and/or the ultrasound transducer parameter settings. In one embodiment, the administration rate, dosage, concentration, and/or timing of the administration of microbubbles is computationally tailored to optimize the treatment efficiency (e.g., transiently disrupting tissue in the target BBB region to the target degree so as to create a therapeutic effect or to allow a therapeutic agent to penetrate therethrough) and/or safety (e.g., limiting damage to the non-target tissue). Generally, the optimal microbubble concentration depends on the desired acoustic power—a higher concentration of microbubbles is typically preferred to permit use of a lower acoustic power to achieve microbubble cavitation. When, for example, the tissue surrounding the target BBB region is sensitive to acoustic energy, low-power sonications are employed to avoid damage to the surrounding tissue. In this case, the microbubble concentration may be increased to ensure that the low-power sonications still induce sufficient cavitation events for disrupting the target BBB tissue. Additionally or alternatively, the size of the microbubbles may be increased in subsequent dose(s) such that the resonance frequency thereof differs less from the selected optimal ultrasound frequency; this may cause more microbubble collapse at the target BBB region. In some embodiments, a constant microbubble concentration at the target BBB region is desired so that the tissue disruption is steady rather than varying in magnitude over time. The concentration of injected microbubbles in the patient's bloodstream varies over time in accordance with well-understood principles of pharmacokinetics, rising to a peak level following administration and then falling; any administration profile, in other words, results in a predictable concentration profile. Hence, the treatment-planning simulation may generate a steady concentration level by computationally simulating injection of the microbubbles at a constant rate and waiting for them to diffuse and reach a steady state, with continued injection to maintain the steady-state condition. Alternatively, the microbubbles may be computationally injected at a relatively higher rate for initiating the treatment; a relatively lower injection rate may then be used during the course of simulated treatment. In other embodiments, it is desired to increase the microbubble concentration as the treatment proceeds (so as to reduce the sonication power for safety purposes); in this case, the treatment-planning simulation may continuously or discretely increase the microbubble injection rate over time.
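A minimal sketch of how a constant-rate injection approaches a steady-state circulating concentration, assuming a simple one-compartment pharmacokinetic model with first-order elimination (the model and all parameter values are illustrative assumptions, not the disclosed simulation):

```python
import numpy as np

def infusion_concentration(t_min, rate_ml_per_min, clearance_ml_per_min,
                           volume_ml, bubbles_per_ml):
    """One-compartment model (an assumption for illustration): constant-rate
    infusion with first-order elimination. Returns the circulating
    microbubble concentration (bubbles/mL) over time in minutes."""
    k_e = clearance_ml_per_min / volume_ml                     # elimination rate (1/min)
    c_ss = rate_ml_per_min * bubbles_per_ml / clearance_ml_per_min   # steady state
    return c_ss * (1.0 - np.exp(-k_e * np.asarray(t_min)))

# Example: concentration rises toward steady state; continued infusion holds it there.
t = np.linspace(0, 30, 301)          # minutes
c = infusion_concentration(t, rate_ml_per_min=0.1, clearance_ml_per_min=500.0,
                           volume_ml=5_000.0, bubbles_per_ml=1e9)
```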
Regardless of which administration profile is used, when determining the microbubble administration profile, the acoustic power (e.g., temporal acoustic power) and/or acoustic energy (including cumulative power and acoustic effects during the entire sonication procedure) may be taken into account to avoid undesired damage to the target and/or non-target tissues. Similarly, the treatment-planning simulation may dynamically adjust the acoustic power and/or acoustic energy based on the predicted microbubble concentrations available during the sonication procedure for disrupting the target BBB tissue. This is critical particularly at the beginning and end of microbubble administration when the microbubble concentration is changing (i.e., not at a steady state). In some embodiments, the acoustic power/energy is computationally increased in response to a reduced microbubble concentration. Increasing the acoustic power/energy above a threshold may beneficially cause generation of the microbubbles in the target BBB region, thereby compensating for the reduced microbubble concentration. In addition, adjustment of the microbubble administration profile, the acoustic power/energy profile, beam shape profile, and/or a combination thereof may facilitate the disruption rate of the target BBB region. In some embodiments, a desired disruption rate of the target BBB region as a function of time is determined based on the anatomic/material characteristics of the target BBB region and/or the non-target regions for optimizing the treatment efficiency and/or safety. During simulation of the ultrasound treatment, the microbubble administration profile and/or the acoustic power/energy profile may be adjusted until the desired disruption rate is achieved and maintained. Controlling the BBB disruption rate is important because, as explained above, an excessive rate can produce a safety hazard, whereas an insufficient rate reduces efficiency and can also compromise safety, since the duration of treatment can itself pose risks to the patient. In various embodiments, the acoustic power and/or acoustic energy is controlled between lower and upper boundaries to ensure efficient treatment and patient safety. The lower boundary corresponds to a treatment threshold (i.e., the minimum applied energy needed to induce microbubble cavitation and cause tissue disruption of the target BBB region) and the upper boundary corresponds to a safety threshold (i.e., the maximum tolerable energy that does not damage the intervening tissue and/or tissue surrounding the target BBB region). Again, the treatment-planning simulation may dynamically determine these lower and upper boundaries based on the available microbubble concentrations at the target BBB region. Additionally, the sonication profile (e.g., a time interval between different sequences of sonications) may be dynamically adjusted. For example, the time interval between two sequences of sonications may be increased to allow microbubbles to be replenished at the target BBB region before the next sonication sequence. In various embodiments, it may be desirable to disrupt multiple BBB regions corresponding to a single tumor region or multiple discontiguous tumor regions. In a preferred implementation, the multiple target BBB regions are treated sequentially (e.g., in round-robin fashion) until, for example, the tissue disruption on each region satisfies a corresponding target objective. Accordingly, the treatment planning may simulate the BBB treatment sequentially. 
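The bounded power selection described above might be sketched as follows; the threshold curves are hypothetical stand-ins for the model- or study-derived treatment and safety boundaries at a given microbubble concentration.

```python
def choose_power(concentration, treatment_threshold, safety_threshold, margin=0.1):
    """Pick an acoustic power between the treatment (lower) and safety (upper)
    boundaries for the current microbubble concentration. The two threshold
    callables are hypothetical stand-ins for curves derived from the physical
    model or retrospective study; both typically fall as concentration rises."""
    lo = treatment_threshold(concentration)
    hi = safety_threshold(concentration)
    if lo > hi:
        raise ValueError("No safe power exists at this concentration; "
                         "adjust the administration profile instead.")
    # Sit just above the treatment threshold while staying under the cap.
    return min(lo * (1.0 + margin), hi)

# Example with illustrative inverse-concentration threshold curves.
power_w = choose_power(2e5,
                       treatment_threshold=lambda c: 5.0e6 / c,
                       safety_threshold=lambda c: 2.0e7 / c)
```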
In addition, the treatment-planning simulation may take into account the effect resulting from treating one target BBB region when simulating treatment of another target BBB region. For example, when one BBB target is upstream of another, disruption of the upstream target BBB region utilizing microbubble cavitation may reduce the concentration of microbubbles available at the downstream target BBB region, thereby changing the disruption effect thereof. Accordingly, treatment planning may simulate such effects based on the amount of microbubbles required for treating the upstream target BBB region and/or previously acquired image information (such as the locations of the two BBB target regions and their locations in the bloodstream). Alternatively, the sequential treatment may take another approach, in which the treatment effect on one target BBB region has to satisfy a desired target objective before another target BBB region is simulated. But again, the effect resulting from treatment of the prior target BBB region may be taken into account when simulating treatment of the subsequent target BBB region(s). In another embodiment, the multiple target BBB regions are treated substantially simultaneously. Steps106-120described above are performed to simulate and assess the openings of the target BBB regions. Treatment planning may determine the treatment profile parameters iteratively (beginning with initialized parameter settings), using simulations of the treatment and the predicted effect thereof to adjust the parameters in successive iterations until tissue disruption of each of the target BBB regions satisfies its corresponding target objective. In some treatment scenarios, the target BBB regions may include multiple types of tissue. The tissues of each type may be grouped together, and the treatment profile parameters, including the transducer parameter settings and/or the microbubble characteristic, may be adjusted to optimally treat each type of tissue (i.e., to achieve its target objectives). Again, different groups of tissue types may be sequentially or substantially simultaneously treated in accordance with the approaches described above. In an optional step124, the treatment planning simulation may computationally inject a dose of a therapeutic agent into the target tumor region for treatment. The administration profile of the therapeutic agent may be determined based on retrospective study of patients experiencing the same therapy. Additionally or alternatively, the ultrasound procedure may be performed in combination with other therapeutic methods, such as radiation therapy. For example, after the ultrasound-induced microbubble oscillation/cavitation disrupts vascular tissue in the target region, the radiation therapy may significantly reduce the radiation dose for producing efficient treatment at the target tumor. Again, based on the simulated tissue disruption effect and retrospective study of the patients experiencing the same therapy, the treatment planning simulation may determine the radiation dose for treatment. Accordingly, treatment planning in the present invention is, generally, an iterative process that may utilize testing of the simulated treatment plan at various stages. 
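A minimal sketch of the round-robin sequencing described above, with hypothetical callables standing in for the treatment-planning simulation and the per-region objective test:

```python
def treat_round_robin(regions, simulate_sonication, objective_met, max_rounds=50):
    """Round-robin sketch: cycle over the target BBB regions, applying one
    simulated sonication step to each, until every region satisfies its
    target objective. `simulate_sonication` and `objective_met` are
    hypothetical stand-ins for the treatment-planning simulation."""
    for _ in range(max_rounds):
        pending = [r for r in regions if not objective_met(r)]
        if not pending:
            return True                  # all regions met their objectives
        for region in pending:
            simulate_sonication(region)  # may also update downstream
                                         # microbubble availability
    return False                         # plan needs revision (new parameters)
```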
The planner may go back to adjust one or more previous treatment profile parameters (e.g., ultrasound settings and/or microbubble characteristics), continue with the next sonication (often determining the subsequent treatment profile parameters from the simulation results of the precedent treatment profile parameters), or even switch the planning to an entirely new target BBB region (e.g., return to step104, which allows the physician to select a target BBB region for planning). Once treatment planning for the target BBB region(s) is complete, the treatment plan may be presented to a physician. The physician may modify the plan, e.g., by changing the treatment order of the target BBB regions or indicating a target BBB region for which treatment planning is to be repeated. When the physician finds the treatment plan acceptable, actual treatment may be commenced automatically or manually in accordance with the plan. For example, the ultrasound transducer may be operated based on the settings of the sonication parameters (e.g., amplitudes, phases, directions, and/or time intervals between two series of sonications) determined in the simulation. Thus, while 1 second is a typical sonication suspension time between two consecutive series of sonications, during treatment planning it may be determined that a sonication suspension time of, e.g., 20 seconds is necessary after application of the second series so as to allow the microbubble concentration in the target region to be replenished (i.e., for the microbubbles to be delivered from a syringe to the target BBB region in the brain); this plan may be followed during treatment (i.e., the sonications are suspended for 20 seconds after the second series of sonications). During actual treatment, the microbubbles may be introduced intravenously or, in some cases, by injection proximate to the target BBB region using an administration system. Configurations of the administration system and one or more filters for selecting a desired size distribution of microbubbles and introducing the microbubbles into the target region may be found in U.S. Patent Application No. 62/597,076, the contents of which are incorporated herein by reference. In addition, other therapeutic methods, such as radiation therapy may be performed in combination with the ultrasound treatment based on the treatment plan. Approaches to combining the ultrasound and radiation therapy are provided, for example, in U.S. patent application Ser. No. 15/637,163, filed on Jun. 29, 2017, the contents of which are incorporated herein by reference. In some embodiments, the treatment effect (e.g., the size and/or degree of tissue disruption in the target BBB region) is monitored during execution of the treatment plan (e.g., by using the imaging device) on a patient. If discrepancies between the monitored treatment effect and the previously computed treatment effect are discovered, the treatment plan may be modified. Discrepancies may arise, for example, from inaccuracies in certain parameters of the physical model underlying the simulation. Accordingly, in various embodiments, the measurements taken during actual treatment are used as feedback to adjust the parameters (e.g., by fitting the parameters to the measurements). An updated treatment plan may then be created using the adjusted parameters. 
Adjustments may be made, first, to a parameter, or set of parameters, having a particularly high associated uncertainty (and which will therefore likely need adjustment) and/or which is known to affect the computed treatment effect greatly (i.e., a parameter to which the treatment effect is very sensitive, e.g., because the treatment effect is a higher-order rather than linear function of the parameter). For example, the acoustic absorption coefficient and microbubble size distribution at the target BBB region may be good candidates for parameter adjustments. If re-computation of the treatment effect based on adjustments to the initially selected parameter(s) does not satisfactorily decrease the discrepancy between observed and target values in the course of treatment, additional parameters may be changed. In some embodiments, the model parameters are ranked according to their uncertainties and/or the model's sensitivity thereto, and this ranking facilitates selection of one or more parameters for adjustment during treatment. Approaches to monitoring the treatment effect on the target BBB region and/or non-target regions in real-time during the ultrasound procedure are provided, for example, in U.S. Patent Application No. 62/597,073, filed on Dec. 11, 2017, the contents of which are incorporated herein by reference. In some embodiments, to the extent that parameters vary as functions of other space- and/or time-dependent quantities (e.g., the tissue type, which generally varies in space, or the temperature, which may change in time), the feedback may inherently encode information about such dependencies, e.g., in the form of spatial or temporal distributions of measured quantities. Parameter adjustment may also be based, at least partially, on human input, e.g., as provided by the physician monitoring treatment. Such human intervention may be assisted by intuitive visual representations of both predictions and measurements (e.g., in the form of boundaries indicating the tissue disruption effect of the target BBB region(s)). The displayed prediction may change dynamically in response to any user manipulation of parameter values. Parameter adjustments may be bounded by pre-set limits to prevent estimated values that are not physically realistic. In various circumstances, a straightforward adjustment of the existing treatment plan (i.e., an adjustment not requiring complete re-planning) may be carried out, e.g., by propagating the adjustment of the parameter(s) through the model during treatment. For example, if the deviation between the predicted and the measured treatment effect is within a clinically tolerable range, treatment of the currently targeted BBB region may continue, while subsequent planning stages for other target BBB regions may benefit from the feedback. While measurements of tissue disruption in the target BBB region are described above, the feedback provided during execution of the treatment plan is not limited thereto, but may also include acoustic, thermal or mechanical feedback and/or feedback derived from measurements through analysis and calculations (e.g., of the accumulated thermal dose or cumulative acoustic response dose). For example, the acoustic response emanating from the microbubbles may be detected using a detection device and/or the ultrasound transducer. The detected microbubble response may then be compared against the response predicted by the treatment planning. 
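One simple way to realize the ranking described above is to score each model parameter by the product of its uncertainty and the model's sensitivity to it; the scoring rule and the example values below are assumptions for illustration.

```python
def rank_parameters(uncertainty, sensitivity):
    """Rank model parameters for adjustment by the product of their
    uncertainty and the model's sensitivity to them (a simple heuristic
    consistent with the ranking described above; the exact scoring rule
    is an assumption)."""
    scores = {name: uncertainty[name] * sensitivity[name] for name in uncertainty}
    return sorted(scores, key=scores.get, reverse=True)

# Example: adjust the highest-ranked parameter(s) first.
order = rank_parameters(
    uncertainty={"absorption_coeff": 0.30, "bubble_size": 0.25, "speed_of_sound": 0.05},
    sensitivity={"absorption_coeff": 2.0, "bubble_size": 1.5, "speed_of_sound": 0.8},
)
```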
Approaches to measuring an instantaneous acoustic response level and a cumulative acoustic response dose are provided, for example, in International Application No. PCT/US18/33815, filed on May 22, 2018, the entire disclosure of which is incorporated herein by reference. In addition, approaches to configuring the transducer array for detecting the acoustic signals from the microbubbles are provided, for example, in U.S. Patent Application No. 62/681,282, filed on Jun. 6, 2018, the contents of which are incorporated herein by reference. Further, feedback received during treatment may include anatomical information and, importantly, information about any changes relative to the patient's anatomy as it existed at the time treatment was planned. Often, significant changes result from unavoidable patient motion during the treatment. Motion-tracking approaches may be employed to detect deformations and positional changes of relevant target or non-target regions, and facilitate adjustments to the treatment plan (e.g., via image-registration approaches) to compensate for such changes. Further, as movements and other changes are generally expected to occur during treatment (within certain limits), they may be taken into account by strategically planning the treatment, e.g., by specifying the order in which various regions are treated in a way that expected changes do not interfere with treatment or substantially increase treatment risk. Approaches to registering images acquired using two or more imaging systems are provided, for example, in U.S. Pat. No. 9,934,570, and approaches to tracking the motion of a treatment target or other objects of interest in an anatomical region of interest in real time during a treatment procedure are provided, for example, in U.S. Patent Publication No. 2017/0358095; the contents of these documents are incorporated herein by reference. FIG.2Aschematically depicts how the BBB prevents a therapeutic agent from reaching a patient's brain tissue. In various embodiments, upon acquisition of the images of the patient's brain (in step102), a treatment planner may communicate with the imaging device to upload the images. As set forth in the flow chart ofFIG.1, the treatment planner may identify one or more target BBB regions202. Optionally, the planner may computationally inject microbubbles204to the target BBB region(s)202during treatment simulation. Referring toFIG.2B, the planner may then computationally apply a series of sonications206to the microbubbles at the target BBB region(s). Based on the settings of the treatment profile parameters (e.g., microbubble characteristics and/or ultrasound parameter values) and the anatomic/material properties of intervening tissue located on the beam path between the transducer and the target BBB region, the planner may predict the acoustic response from the microbubbles204. Subsequently, the planner may predict the degree of tissue disruption in the target BBB regions based on the anatomic/material properties thereof and the predicted microbubble response. If the microbubble-enhancing ultrasound treatment is performed in combination with other therapeutic methods, such as targeted drug delivery, the treatment planner may computationally inject a dose of a therapeutic agent208and predict the response. Again, once treatment planning for the target BBB region(s) is complete, the actual treatment may be carried out in accordance with the plan. 
FIG. 3A schematically illustrates an exemplary system 300 for planning and executing focused ultrasound treatment as described above. The system 300 includes an ultrasound transducer 302 comprising a one-, two- or three-dimensional arrangement of transducer elements 304, which may, e.g., be piezoelectric ceramic elements or piezo-composite elements. The transducer 302 may be curved (as shown) or planar, and may form a single surface, or include multiple discontiguous and, optionally, independently movable segments. The transducer elements 304 may be individually controllable, i.e., each element may be capable of emitting ultrasound waves at amplitudes and/or phases that are independent of the amplitudes and/or phases of the other transducer elements 304. Alternatively, the elements 304 may be grouped, and each group may be controlled separately. Collectively, the transducer elements 304 form a "phased array" capable of steering the ultrasound beam in a desired direction, and moving it during a treatment session based on electronic control signals provided by a beam former 306. The beam former 306 typically includes electronic control circuitry including amplifier and phase delay circuits for the transducer elements 304. It may split a radio-frequency (RF) input signal, typically in the range from 0.1 MHz to 10 MHz, to provide a plurality of channels for driving the individual transducer elements 304 (or groups thereof) at the same frequency, but at different amplitudes and phases so that they collectively produce a focused ultrasound beam 308. The system 300 may also include other treatment apparatus 309, such as an administration device for administering a therapeutic agent to the target tumor regions or a radiation device. The therapeutic agent may include any drug that is suitable for treating a tumor. For example, for treating glioblastoma (GBM), the drug may include or consist of, e.g., one or more of Busulfan, Thiotepa, CCNU (lomustine), BCNU (carmustine), ACNU (nimustine), Temozolomide, Methotrexate, Topotecan, Cisplatin, Etoposide, Irinotecan/SN-38, Carboplatin, Doxorubicin, Vinblastine, Vincristine, Procarbazine, Paclitaxel, Fotemustine, Ifosfamide/4-Hydroxyifosfamide/aldoifosfamide, Bevacizumab, 5-Fluorouracil, Bleomycin, Hydroxyurea, Docetaxel, Cytarabine (cytosine arabinoside, ara-C)/ara-U, etc. In addition, for treating GBM, those skilled in the art can select a drug and a BBB opening regime optimized to enhance drug absorption across the BBB within patient safety constraints. In this regard, it is known that the BBB is actually already disrupted in the core of many tumors, allowing partial penetration of antitumor drugs; but the BBB is widely intact around the "brain adjacent to tumor" (BAT) region where invasive/escaping GBM cells can be found, and which cause tumor recurrence. Overcoming the BBB for better drug delivery within the tumor core and the BAT can be accomplished using ultrasound as described herein. The drugs employed have various degrees of toxicity and various penetration percentages through the BBB. An ideal drug has high cytotoxicity to the tumor and no BBB penetration (so that its absorption and cytotoxic effects can be confined to regions where the BBB is disrupted), low neurotoxicity (to avoid damage to the nervous system), and tolerable systemic toxicity (e.g., below a threshold) at the prescribed doses. The drug may be administered intravenously or, in some cases, by injection proximate to the tumor region.
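As an illustration of the focusing performed by a beam former such as the beam former 306, the following sketch computes generic delay-and-sum time delays for a phased array in a homogeneous medium; it is a textbook approximation rather than the disclosed beam former, and the aberration corrections derived from the tissue model would be applied on top of it.

```python
import numpy as np

def focusing_delays(element_xyz, focus_xyz, speed_of_sound=1540.0):
    """Per-element time delays (s) that focus a phased array at a point.

    Standard delay-and-sum focusing: the element farthest from the focus
    fires first (zero delay) and nearer elements are delayed so that all
    wavefronts arrive at the focus simultaneously. Assumes a homogeneous
    medium with the given speed of sound.
    """
    pts = np.asarray(element_xyz, dtype=float)
    d = np.linalg.norm(pts - np.asarray(focus_xyz, dtype=float), axis=1)
    return (d.max() - d) / speed_of_sound

# Example: four elements on a 30 mm aperture focusing 80 mm away on axis.
elements = [(-0.015, 0, 0), (-0.005, 0, 0), (0.005, 0, 0), (0.015, 0, 0)]
delays = focusing_delays(elements, (0.0, 0.0, 0.08))
phase_lags = 2 * np.pi * 0.65e6 * delays   # equivalent phase lags at 0.65 MHz
```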
In various embodiments, the system300further includes an imaging device (e.g., an MRI apparatus or other imaging device)310that images (e.g., tomographically) a region of interest in the patient both prior to treatment for the purpose of treatment planning and during treatment for the purpose of guiding the ultrasound beam and monitoring treatment progress. In addition, the system300includes a computational facility312, in communication with the beam former306and the imaging device310, that facilitates treatment planning and adjustment. The computational facility312may be implemented in any suitable combination of hardware, software, firmware, or hardwiring; in the illustrated embodiment, it is provided by a suitably programmed general-purpose computer. The computer may include a central processing unit (CPU)314and system memory316, as well as, typically, one or more non-volatile mass storage devices318(such as one or more hard disks and/or optical storage units). The computer312further includes a bidirectional system bus320over which the CPU314, memory316, and storage devices318communicate with each other and with internal or external input/output devices, such as traditional user interface components322(including, e.g., a screen, a keyboard, and a mouse) as well as the beam former306and the imaging device310. The system memory316contains instructions, conceptually illustrated as a group of modules, that control the operation of CPU314and its interaction with the other hardware components. An operating system324directs the execution of low-level, basic system functions such as memory allocation, file management and operation of mass storage devices318. At a higher level, one or more service applications provide the computational functionality required for treatment planning and execution. For example, as illustrated, the system may include an image-processing module326for displaying, analyzing, and annotating images received from the imaging device310and a transducer control module328for computing the relative phases and amplitudes of the transducer elements304. Further, the system includes a treatment planner330that determines the sequence, locations, and treatment profile parameters of a series of sonications based on the processed images and user input; the resulting treatment plan may be used by the transducer controller328to determine the phase and amplitude settings and/or by the administration device309to determine the microbubble administration profile. Referring toFIG.3B, the treatment planner330may, itself, include a number of separate, but intercommunicating modules for performing the simulation steps and functions described above. 
For example, the treatment planner 330 may include an image-analysis module 332 for processing and analyzing image data received from the imaging device 310 and, based thereon, determining a target tumor region and/or a target BBB region in the images, an ultrasound-application module 334 for computing the settings of ultrasound parameters for generating a focal zone at the target BBB region, a microbubble-injection module 336 for determining desired microbubble characteristics (e.g., based on retrospective study of the patients experiencing microbubble-enhanced ultrasound procedure and/or using a physical model), an acoustic-response-prediction module 338 for predicting the acoustic response from the microbubbles based on the microbubble characteristics and ultrasound settings, and a tissue-response-prediction module 340 for predicting the tissue disruption effect at the target BBB region and/or non-target region based on the predicted microbubble response and/or anatomic/material properties of the target/non-target tissue. In one implementation, the treatment planner 330 further includes an agent-injection module 342 for determining the administration profile of a therapeutic agent for treating the target tumor tissue. Alternatively, the treatment planner 330 may include a radiation-application module 344 for applying a radiation dose to the target tumor tissue. The various modules utilize the techniques described above and may be programmed in any suitable programming language, including, without limitation, high-level languages such as C, C++, C#, Ada, Basic, Cobra, Fortran, Java, Lisp, Perl, Python, Ruby, or Object Pascal, or low-level assembly languages; in some embodiments, different modules are programmed in different languages. As will be readily understood by a person of skill in the art, the computational functionality required to carry out treatment-planning methods in accordance herewith may be organized (in software modules or otherwise) in many different ways, and the depicted embodiment in FIGS. 3A and 3B is, therefore, not to be regarded as limiting. In general, the terms and expressions employed herein are used as terms and expressions of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described or portions thereof. In addition, having described certain embodiments of the invention, it will be apparent to those of ordinary skill in the art that other embodiments incorporating the concepts disclosed herein may be used without departing from the spirit and scope of the invention. Accordingly, the described embodiments are to be considered in all respects as only illustrative and not restrictive.
DETAILED DESCRIPTION OF THE PRESENT DISCLOSURE Treatment of cardiac arrhythmias and other diseases using an external radiation source for ablation of cardiac tissue may require taking into account motion (e.g., of dynamic tissues such as the heart or lungs) in order to ensure that the radiation is delivered to the appropriate location in the anatomy at the appropriate times and in the appropriate quantities. In some embodiments, catheters may be used to provide electrical information, or images, or location information of the heart during a treatment procedure. In various embodiments, focused photonic therapy can be accomplished without catheters, where imaging may be acquired prior to a radiation therapy treatment or images may be acquired during a treatment in order to provide guidance for the external radiation therapy system to accurately deliver therapy to the desired location in a highly focused and predictable manner. Appropriate real-time adjustment to cardiac, respiratory, and translational movement may be accomplished via imaging with phase contouring and gating of cardiac movement in the images to the energy source. External ablation and external mapping for correlation with ablation enable correlating map and ablation efficacy so as to minimize collateral damage, as will be further discussed. "Gating," sometimes referred to as triggering, is the process where a radiation treatment or an imaging system will deliver therapy or acquire images inside only a specified time window corresponding to a particular event or signal. For cardiac-gated radiation therapy, a radiation beam may only be turned on to treat a subject when the heart is in a particular phase of the cardiac cycle in order to ensure that the area being treated is in the same location for each treatment dose, or fraction. "Contouring" is the process of identifying and selecting a region or specific anatomy in an image. At a basic level, contouring may be outlining an organ in an image or a series of images to enable rapid identification of the organ. In a radiation therapy system, contouring may be used to follow critical targets through a treatment cycle or fraction, minimizing or otherwise reducing the effect of motion on beam delivery. For some clinical applications, the use of in-the-body catheters, either endocardial or epicardial, has not been sufficient to provide satisfactory results in all cases. Here, the use of multiple catheters, such as in both the endocardial and epicardial spaces, simultaneously or sequentially, has been tried with suboptimal success for ablation of mid-myocardial tissue. Implementations of the present disclosure not only can provide the spatial resolution in the beating heart needed to target cardiac and noncardiac tissue, but additionally phase contouring of the epi- and endocardial surfaces separately allows for the targeting of specific regions, including hitherto inaccessible regions of cardiac tissue. In example implementations, an internal electrode, an injected electrode, a catheter-like element, or an injected source of catheter or electrode-like particles, including magnetic and ionizable micro and nanoparticles at the target site, may optionally be used to maximize the effects of external radiation. Injectable electrodes may be used so that the particulate beam can more reliably stimulate the area of injection, for example, in the skeletal muscle or a subcutaneous patch, and, in addition, to focus thermal injury preferentially to desired structures rather than the noninjected site.
Small/nanoparticulate ionizable and possibly metallic injectates can be used for a similar purpose, such as by injection into the arterial circulation, to focus energy delivery into the myocardium rather than surrounding structures and the blood pool where coagulation may occur. That is, although the disclosed approaches can be used, for example, for in-heart catheterless ablation of targeted tissue, some implementations of the disclosure may use, for example, adjunctive catheters with circuitry and electromagnetic navigation tied into the energy delivery source to maximize cardiac registration and local therapy for some applications, such as for remodeling neural tissue that may be in close proximity to sensitive vasculature or conduction tissue. Percutaneous catheters are presently the standard standalone method for cardiac mapping and ablation (see Asirvatham S J. Advances in catheter ablation: a burning trail! Indian Heart Journal. 2011, 63(4):379-385. Suleiman M, Brady P A, Asirvatham S J, Friedman P A, Munger T M. The noncoronary cusp as a site for successful ablation of accessory pathways: Electrogram characteristics in three cases. J Cardiovasc Electrophysiol. 2010, 22:203-209). In example implementations, the value of the combined use of percutaneous, pericardial, subdural, per venous, and per subcutaneous placement of electrodes for sensing, stimulation, and focusing energy delivery lies in the simultaneous and concurrent use of external beam radiation at the time of stimulation and mapping. Thus, total energy delivered may be optimized by exact knowledge of termination of arrhythmia and detailed three-dimensional electroanatomic maps, which are in real time tagged via the synchronized electrical trigger between these two systems so as to deliver energy at the exact site and for the optimal duration to treat the pathological substrate. (Background may be found in: Del Carpio Munoz F; Buescher T L; Asirvatham S J. Three-dimensional mapping of cardiac arrhythmias: what do the colors really mean? Circ Arrhythm Electrophysiol. 2010 December; 3(6):e6-e11.) As will now be further discussed, when the target (such as the heart) is in motion, contouring may be used to follow critical targets through a treatment cycle, minimizing or otherwise reducing the effect of motion on beam delivery. Cyclical patterns of motion may be used to aid the targeting of cardiac tissue to be ablated, and avoidance of critical surrounding tissue to be left untreated. Also to be discussed are identifying and minimizing the entrance effect of leading edge Bragg peaks, minimizing risk to organs from particle delivery, using phase tools for phase analysis, establishing the acute endpoint of hadron therapy delivery, and identifying non-cardiac targets for particle therapy (including, but not limited to, seizures, left atrial appendage (LAA) occlusion, treatment of renal artery nerves causing hypertension, and creating antibodies and other molecular targets that can be activated using a particle beam to enhance effects with tissue activation instead of just tissue destruction). Referring to FIG. 20, one configuration for compensation of cardiac motion to ensure precise targeting may take the form of using anatomic landmarks during imaging. In one example, 10 landmarks may be tracked for a heart, including 5 in the left atrium and 5 in the left ventricle.
The left ventricle, left atrium, and left atrial appendage may be segmented at each phase of the cardiac cycle using a 3D volume segmentation tool, such as in the Analyze 12.0 software, and time-volume curves may be computed. Ten anatomic landmarks distributed across the left ventricle and the left atrium may be identified across phases of the cardiac cycle. In the left ventricle, endocardial locations near the anterior papillary muscle (APM) 2000, posterior papillary muscle (PPM) 2010, left ventricular apex (LVA) 2020, mitral valve on left side (LVMV) 2030, and left aortic valve (LVAV) 2040 may be identified; in the left atrium, endocardial locations near the mitral valve (LAMV) 2050, the left atrial appendage (LAA) 2060, left superior pulmonary vein (LSPV) 2070, right superior pulmonary vein (RSPV) 2080, and inferior pulmonary vein (IPV) 2090 may be identified as shown in FIG. 20. Landmarks may be distributed across the chambers and may be chosen such that they could be reliably identified across all phases of the cardiac cycle. Motion trajectories may be computed using curve smoothing followed by a 3D curve spline fitting algorithm. In addition, the maximum displacement in each of the x, y, and z directions may be computed for each landmark. Referring to FIGS. 21A, 21B, and 21C, plots of 3D curve trajectories for the 10 tracked anatomic landmarks shown in FIG. 20 are depicted for 3 example hearts with the left ventricular landmarks 2010 and the left atrial landmarks 2020 shown. Referring to FIGS. 22A, 22B, 22C, and 22D, a close-up view of the 3D trajectories from FIGS. 21A, 21B, and 21C is shown for 4 individual landmarks. FIG. 22A depicts an APM, FIG. 22B depicts a PPM, FIG. 22C depicts a LAA, and FIG. 22D depicts an IPV. The figures indicate that a significant variation in motion trajectories exists across the various anatomic landmarks. In some configurations, the left ventricular landmarks demonstrate a larger magnitude of motion than those in the left atrium. In some configurations, 3D cardiac motion across the left atrium and left ventricle of the heart may be quantified using multi-phase computed tomography datasets. Since there is the possibility for significant variation in 3D motion trajectories across different anatomic locations, detailed motion models are necessary for precise targeting of cardiac structures in external beam ablation therapy. In one example in the left atrium, total displacement was on the order of 5 to 6 mm in each of the x, y, and z dimensions. Left atrial thickness may range from 1.9 to 3.1 mm. Cardiac motion will need to be at least partially compensated in order for an external beam ablation approach to accurately target the left atrium. While the left ventricle is thicker, ranging from 0.9 to 1.5 cm between end systole and end diastole, its motion displacement may also be larger, such as ranging from approximately 7 mm in the x direction and z direction to almost 10 mm in the y direction. Motion compensation may also be needed in the left ventricle in order to avoid collateral damage to surrounding tissue. Motion analysis may be valuable for quantification of cardiac motion as well as serving as a ground truth dataset for the validation of computational motion models. In various embodiments, phase difference structural contouring provides optimal targeting with the particle beam, minimizes collateral damage, and serves as feedback for energy delivery. In certain embodiments, the phase contouring itself may be done in two steps.
In the first step, pre-procedural imaging (CT, MRI, PET, etc.) and ultrasound are used to tag specific tissues or structures based on their imaging, refraction, diffraction, and scatter characteristics, along with their movement. This provides for tissue identification and labeling. Thus, rather than imaging an organ per se, specific structures with imaging and motion characteristics are identified. If we refer to such characterized structures as tissue time domains (TTDs), then these TTDs may be in a specific organ, across organs, or just part of a specific organ. In certain iterations, multiple imaging sources, including those listed above, may be used to achieve successful phase contouring. First, sequential images may be obtained throughout a cardiac cycle and tagged to phases of the electrocardiogram such as the p-wave when present, peak QRS complex in multiple leads, and QRS and t-wave in multiple leads. Specific cardiac structures with unique and differentiating movement with the cardiac cycle may then be tracked by a motion-sensing algorithm. To do this, the aortic, pulmonary, mitral, and tricuspid valve tip and endocardial base, endocardial apex, epicardial coronary artery and veins, epicardial base, epicardial apex, pulmonary veins, tip and base of atrial appendage, lateral and medial extents of vena cavae, and coronary sinus ostium may be labeled and movement tracked through the cardiac cycle. Machine learning may then be facilitated by the algorithm by inputting multiple cardiac cycles where the electrocardiogram is used as a reference and changes from one cardiac cycle to the next are either used to reject a particular cardiac cycle or to correct for the labeled moving part to differentiate it from noise or artifact. This complete endo, epi, and valve tissue contouring can provide precise input and real-time tracking of the photonic beam and other external beam source to allow effective energy delivery to the targeted tissue. Manual tissue labeling as well as automated tissue labeling may be used as part of this process (see FIGS. 11A, 11B). Clinicians may manually label tissue based on its appearance, movement (valve versus myocardium), and tissue characteristics (e.g., reflectivity on ultrasound, absorption characteristics on MRI or CT scan, etc.). In various instances, therefore, the first step involves imaging and tissue identification including labeling. The second step may involve analysis through a cardiac cycle from beat to beat using an index beat as a template and correcting for outliers (not fitting the contour or movement from the index beat or beats). The disclosed approaches enable tagging similar movement of the particle beam delivery tool in a manner superior to simply tagging with an electrical event alone such as the EKG or whole organ movement. This is at least in part due to the fact that whole organs can have complex movement including twisting, translational movement, and transferred movement, along with random movement such as a ruptured chord. Such TTD contouring can also minimize collateral damage since dramatically different contouring would be seen, for example, in the lungs, the ascending aorta, or the esophagus. TTD contouring also helps identify abnormal and arrhythmogenic tissue by assessing differences in contours within a specific chamber myocardium despite similar electrical activation, and conversely, similar contouring with diverse electrical activation patterns as evidenced by EKG vector analysis.
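A minimal sketch of the second step, the beat-to-beat comparison against an index-beat template with outlier rejection; the RMS metric and the 2 mm tolerance are illustrative assumptions rather than the disclosed algorithm.

```python
import numpy as np

def accept_beat(beat_xyz, template_xyz, tol_mm=2.0):
    """Compare one cardiac cycle's labeled landmark positions (N x 3, mm)
    against the index-beat template. Reject the cycle as noise, artifact,
    or an outlier if the RMS deviation exceeds the tolerance."""
    diff = np.asarray(beat_xyz) - np.asarray(template_xyz)
    rms = np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))
    return rms <= tol_mm

def build_template(accepted_beats):
    """Average the accepted cycles into an updated tracking template."""
    return np.mean(np.asarray(accepted_beats), axis=0)
```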
In various embodiments, contouring can provide important feedback information for titrating energy delivery. In other words, a contour identified at baseline and deemed arrhythmogenic would change based on differences in registered time and spatial points as a result of successful energy delivery. When such differences exceed preset parameters (such as by 50%), energy delivery may be automatically stopped. Tracking of the contour may be done by any one or a combination of observable recorded parameters, including the electrocardiographic vector, ultrasound-based distance of a particular structure to the surface of the body, computerized tomograms, and impedance changes measured with integrated circuitry and a vest of specifically spaced and circumferential electrodes around the organ of interest for energy delivery. These signals are digital, and following appropriate filtering, are fed into the circuitry that allows energy delivery in the accelerator and the accelerator's own focusing mechanism (direction and depth). When, for example, contour change in movement in one direction in three-dimensional space is noted, automatic shifting of the focus and depth of the energy beam is accomplished. There may be a learning period with simulated energy delivery over several cardiac cycles prior to actual treatment, with a self-learning algorithm applied when errors in the simulated point of energy delivery/focus have been detected when compared to the real-time position of the cardiac and other organ contour. With or without phase contouring, in various embodiments, simultaneous single or multiparticle energy delivery may be used so as to maximize and optimize or otherwise enhance Bragg peak effects of each, and in turn minimize or otherwise reduce unwanted entrance effects and dispersion of lesions. These effects along with phase differences may be accentuated with additional administered agents such as contrast microbubbles, calcium chloride infusion, varying infusion rates and salinity of sodium chloride infusion, and skin and superficial emollients, as well as implanted devices that may be gels or pericardial emollients. Such additions may improve visualization, accentuate TTD differences, and create secondary electrical effects that in turn may ablate tissue as a result of activation of the primary particle beam. These agents may also be inhaled so as to better define lung contours to avoid collateral damage when the heart is the target or maximize differences between tumor and normal tissue when lung tissues are the target. Regarding enhancing Bragg peak effects and reducing unwanted entrance effects, the specificity of the Bragg peak, along with the exactness of the corrected and finalized cardiac contour, allows graded single and multisite energy delivery. Low-dose delivery can be used to induce perturbations in the cardiac contour such as by inducing ectopic (extra) beats. These induced beats' contour will be different with respect to the template obtained over several beats and specific for a region of stimulation. For example, electrocardiographic leads II, III, and aVF will be positive when the test single particle is delivered to the cardiac outflow tracts, etc. A second energy beam or multiparticle energy beam may then be utilized to again test for site of application. The resulting change in contour and electrocardiography will then be matched against templates, and if the areas and volumes described differ by less than 5% or a similar value, for example, then both beams are considered to be guided to a similar location, and additive particulate delivery at low dose may be used so as to further minimize entry effects and collateral damage.
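The two feedback rules just described, automatic termination on contour change and template matching of two test beams, might be sketched as follows; the specific metrics are assumptions, while the 50% and 5% figures are the examples given above.

```python
def stop_energy_delivery(baseline_metric, current_metric, threshold=0.50):
    """Halt delivery automatically once the arrhythmogenic contour metric has
    changed from baseline by more than a preset fraction (e.g., 50%)."""
    change = abs(current_metric - baseline_metric) / abs(baseline_metric)
    return change > threshold

def beams_colocated(volume_a_mm3, volume_b_mm3, tolerance=0.05):
    """Treat two test beams as aimed at the same site when the contour/volume
    responses they evoke differ by less than ~5% (the example value above)."""
    return abs(volume_a_mm3 - volume_b_mm3) / max(volume_a_mm3, volume_b_mm3) <= tolerance
```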
Further, one beam may be used to stimulate the tissue while the other to ablate with the inability to stimulate from the first beam being used as an endpoint for energy delivery from the second. In some instances, despite the Bragg peak based specificity for site of energy delivery paired with the disclosed tagged contouring described in this document, extreme proximity with sensitive structures may preclude safe energy delivery in certain implementations. In such cases, a stimulating beam is first employed to allow titration of energy delivery and to know an endpoint when the tissue being targeted has been ablated. Similarly, injected or implanted temperature/impedance or thermal map detecting sensors are placed within, at, near, or in a visualizable vantage position for a sensitive structure such as the esophagus or coronary artery. Multiparticle beams are focused on the structure of interest when one beam at a given angle with anticipated depth, etc., creates a penumbra lesion where one of the above mentioned sensors detects potential collateral damage yet based on the stimulatory beam, the site requiring ablation has not yet been completely ameliorated. Then, lower energy with two or more beams focused on that structure is used, and the process repeated at lower and lower energies and more and more multiparticle beam sources until the penumbra volume for thermal injury is minimized and successful ablation has occurred. An example process is depicted inFIG.19. Notably, the above test and eventual delivery and contouring may include contouring of thoracic structures during respiratory movement, the esophagus during peristalsis, major blood vessels during systole and diastole, and cross, sagittal, coronal, and long axis imaging (CT, MIBG, MRI, etc.) views through the cardiorespiratory cycles may be continuously validated against each other and composites against an initially established template with any change beyond a manually changeable acceptable error such as 5-10% at noncritical sites, for example, renal autonomics or 0.1-1% near the cardiac epicardial arteries, etc. To maximize or otherwise enhance efficiency, a new set of tools that include table, armrests, bellows, and intravascular or intra-viscus mirroring reflecting or focusing tools may be used in certain implementations. Existing tools to house patients when being treated surgically or interventionally may not be suitable for certain implementations of the disclosed therapy techniques. Pivot points, angles of movement, and relative position such as if the arms to the head or the body are fixed allowing free movement at varying and programmable positions would not only be ergonomically ideal for static patient therapy delivery modalities but may allow a programmed body phase contouring that negates the effect of a particular tissue's time dependent contouring and thus create a relatively static piece of targeted pathological tissue. The techniques and approaches discussed above are also applicable to “static” organs. For example, to treat seizures, pulsatility of the brain per se may be minimal, but the abrupt phase change in pulsatility for the brain's blood vessels, particularly the arteries, would be important to define to prevent damage to these structures when treating brain tumors or seizure substrate/foci. 
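A sketch of the iterative multi-beam titration described above, with hypothetical callables standing in for the stimulating-beam ablation check, the protective sensors, and the delivery system:

```python
def titrate_multibeam(deliver_fraction, sensors_ok, target_ablated,
                      energy_per_beam, min_energy, step=0.8,
                      max_beams=6, max_fractions=50):
    """Deliver low-dose fractions; whenever the protective sensors flag
    possible collateral injury while the stimulating beam shows the target
    is not yet ablated, add another beam direction and lower the per-beam
    energy, so the summed dose at the focus stays therapeutic while each
    entrance path carries less. All callables are hypothetical stand-ins."""
    n_beams = 1
    for _ in range(max_fractions):
        if target_ablated():            # failure to stimulate = endpoint
            break
        deliver_fraction(n_beams, energy_per_beam)
        if not sensors_ok() and n_beams < max_beams:
            n_beams += 1
            energy_per_beam = max(energy_per_beam * step, min_energy)
    return n_beams, energy_per_beam
```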
Similarly, for renal denervation or other denervation, the artery and vein will serve as localizing phase contours to indicate where the nerves are located, and energy delivery is kept to the periphery of the pulsating contour to avoid intraluminal damage. It is noted that static organs are not entirely static; for example, internal brain structures pulsate with a different vector loop than the external subdural structures because of cerebrospinal fluid (CSF) flow. Phase contouring using either an external electroencephalogram signal, carotid pulse wave, or cardiac electrogram, or combinations of these, may be used to create a multi-cycle contour of different brain parts that serves as an electronic trigger to move, in real time, the beam source to enhance temporal resolution for a given spatial resolution, and enhance the spatial resolution for a given time point. In other embodiments, a unique diagnostic and stimulatory system based on particle delivery patterns may be implemented. The mechanism for ablation and destruction of pathological substrate inherent in the iterations and embodiments described above occurs as a result of the local effects of particle bombardment and transference of energy, including to thermal energy. When done in specific pulsed sequences, stimulation rather than destruction would occur, serving as a diagnostic tool akin to an intravascular electrophysiology study or intracranial epilepsy induction, with simultaneous delivery potentially from a bifurcated but focal source of two different particle delivery patterns. Stimulation may continue to occur as destruction is planned, with failure to stimulate at a particular output or frequency serving as an endpoint indicating discontinued tissue viability and thus the absence of need for continued destructive particle therapy. Specific patterns for specific patients and clinical applications may be required in certain implementations. The input may be from the beating heart or equivalent contour, the specific location of the arrhythmogenic substrate, and critical structures that have been imaged and tagged throughout the cycle and that need to be avoided to prevent collateral damage. Further, in various embodiments, particle beam therapy patterns and algorithmic delivery may be used to promote tissue revascularization, iontophoresis-like tissue uptake of chemical agents including drugs, and delivery of biological therapies such as vector-based biological agents. Combined biological and cell therapy delivery tools that may be intra-body, along with extracorporeal beam therapy, are also envisioned to promote and maintain tissue uptake of the biological agent. Example 1: Atrioventricular Ablation In one non-limiting example, a study was performed that demonstrates the superior results delivered by the disclosed systems and methods that were not achieved using conventional practices. This study sought to ablate the atrioventricular junction completely noninvasively, using a single-fraction, image-guided application of photon beams in an intact porcine model. The study showed that intensity-modulated radiation therapy can be relatively precisely focused to the atrioventricular junction to noninvasively achieve complete atrioventricular block despite cardiac and respiratory motion. Complete atrioventricular block can be achieved with relatively small x-ray doses, with increasing dose increasing lesion size.
Methods/Study Design: Ten domestic healthy pigs (Sus scrofa domestica) of either sex were included at 10 weeks of age and randomized to irradiation of the atrioventricular junction with doses of 25, 40, 50, and 55 Gy. Anesthesia and Monitoring During Surgical Procedures: Anesthesia was induced using an IM dose of Telazol (4.4 mg/kg), ketamine (2.2 mg/kg), and xylazine (2.2 mg/kg). After intubation, animals were ventilated on 1% to 3% isoflurane and monitored using 4 surface ECG electrodes, invasive blood pressure, temperature, and SpO2. Sedation and Positioning During Computed Tomographic Imaging and Photon Irradiation: During cardiac imaging and photon beam irradiation, animals were sedated using a continuous IV drip of propofol (10 mg/mL; 0.25-0.30 mg·kg−1·min−1) without additional paralytic use. Animals were immobilized using a vacuum cushion (BodyFIX BlueBAG; Elekta AB, Stockholm, Sweden) to ensure a stable, reproducible position for computed tomographic (CT) imaging and radiation therapy delivery. The CT reference point (CT laser system) was marked on the skin and on the cushion. Specific Methods: Specific methods, including the electrophysiological study and treatment planning CT acquisition, were conducted as recently described for carbon ion (12C) beam ablation (Lehmann H I et al., Feasibility study on cardiac arrhythmia ablation using high-energy heavy ion beams. Sci Rep. 2016; 6:38895. doi: 10.1038/srep38895). Baseline Study and Electrophysiological Evaluation: The surgical field was shaved and prepped with povidone-iodine solution. A cut-down with subsequent vessel preparation for placement of introducer sheaths in the left/right external jugular vein and right/left femoral arteries and veins was performed. For intracardiac echocardiography, a 10F 5.5 to 10 MHz probe was used (Acuson; Cypress, Mountain View, CA). A 7F decapolar catheter was placed in the coronary sinus. Catheterization was performed under biplane fluoroscopic guidance. Electroanatomical mapping was performed (Carto XP, Biosense Webster, Inc, Diamond Bar, CA). A Navistar or Navistar-Thermocool mapping catheter was used (Biosense Webster). For each chamber, ≈2200 points were sampled, and a fill-threshold <15 mm was considered adequate to reflect a high-density map. Bipolar signals were recorded between the distal electrode pairs. Signals were displayed and recorded using a digital amplifying and recording system (CardioLab Electrophysiology Recording System, GE Healthcare). Left ventricular function was assessed using left ventricular ventriculography and intracardiac echocardiography. Intracardiac fiducials were implanted at the coronary sinus ostium, right atrial appendage, and left atrial appendage for biplane x-ray and cone beam CT positioning before irradiation (Quick Clip 2; 8×2 mm; Olympus, Shinjuku, Japan). Pacemaker Implantation: All animals underwent pacemaker implantation at the end of the baseline electrophysiological evaluation. After removal of the sheath from the external jugular vein, two 7F active fixation pacing leads were introduced through 2 small incisions in the vessel wall. Atrial leads were placed in the right atrial appendage, and right ventricular leads were placed in the right ventricular apex. Leads were tunneled and connected to a pacemaker unit placed in a subcutaneous postauricular pocket (Medtronic, Inc, Minneapolis, MN). 
Treatment Planning CT Acquisition: Cardiac-gated native and contrast-enhanced CT scans were acquired for photon beam treatment planning on a 64-row Siemens Somatom Definition Flash scanner (Siemens Healthcare, Forchheim, Germany). Contrast-enhanced scans were obtained after injection of 50 mL contrast agent (4 mL/s; 8-10 seconds delay; Omnipaque 350 mg I/mL; GE Healthcare) through a cannula in a branch of the caudal auricular vein. All scans were acquired at expiration using a pause of the respirator. Ten cardiac phases with 1 mm voxel and slice spacing were reconstructed with an enhanced field of view of 400 mm for skin-to-skin images to be used for radiotherapy planning. Contouring and IMRT Treatment Planning: A sphere of 5 mm diameter was contoured as the atrioventricular junction ablation lesion on all 10 cardiac phases. The average contour position was subsequently transferred into the phase-averaged CT scan that was used for all subsequent treatment planning steps. Organs at risk for beam delivery were contoured on the averaged CT as well. All treatment planning was conducted using Eclipse (Varian Medical, Palo Alto, CA) treatment planning software. Cardiac motion was incorporated by anisotropic expansion of the target (±1 mm left-right, ±4 mm superior-inferior, and ±4 mm anterior-posterior). In addition, a margin of ±4 mm was added for positional uncertainty and residual respiratory motion. All treatment plans were computed using 2 or 3 arcs. Dose restrictions from single-fraction x-ray deliveries were used for treatment plan computation; restrictions to the coronary arteries were included in the dose optimization process. Animal Repositioning and Photon Irradiation of the Atrioventricular Junction: At the time of treatment, animals were initially aligned in the BodyFIX bag using an in-room laser system and skin markings. Subsequently, the isocenter position was refined using matching of bony anatomy in 2 digitally reconstructed radiographs derived from the CT scan compared with 2 orthogonal in-room x-ray images. The match was finalized using the position of the coronary sinus (CS) ostium fiducial clip on in-room (cone beam) CT, conducted during expiration and inherently averaged over the cardiac cycle. Beam delivery of 6 MV photons was gated to expiration and was performed using a linear accelerator (True Beam; Varian Medical). Follow-Up After Irradiation: Animals were followed for several weeks after irradiation. Device interrogations were performed after 4, 8, and 12 weeks and at termination of follow-up, where the animals also underwent a procedure identical to the one conducted at baseline as described above. Animals were euthanized through induction of ventricular fibrillation directly followed by exsanguination. Pathological Examination: Heart, lungs, trachea, phrenic nerves, and esophagus were removed en bloc with the pericardium intact. Triphenyltetrazolium chloride (Sigma Aldrich, St Louis, MO) was used to delineate the ablation lesions. Gross pathological findings were assessed, and all macroscopically visible lesion dimensions were measured on the endocardial surface in the nonfixed tissue. Lesion volumes were calculated as described for infarcted tissue. Histological Examination: For histological analysis, samples were fixed in 10% formaldehyde and processed. After fixation, samples were wax embedded and cut with a microtome. Cut sections (5 μm) were stained with hematoxylin and eosin and Masson trichrome and evaluated using light microscopy. 
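The anisotropic target expansion described in the treatment-planning paragraph above (motion margins of ±1 mm left-right and ±4 mm superior-inferior and anterior-posterior, plus a ±4 mm margin for positional uncertainty and residual respiratory motion) can be illustrated with a short sketch. The voxel grid and target mask below are hypothetical; the listing only shows one straightforward way to dilate a contour mask by different margins along different axes.

```python
# Illustrative sketch (hypothetical grid and target) of anisotropic margin expansion of a
# binary target mask, in the spirit of the motion + setup margins described above.
import numpy as np
from scipy.ndimage import binary_dilation

VOXEL_MM = np.array([1.0, 1.0, 1.0])          # hypothetical isotropic 1 mm voxels

def ellipsoid_structure(margins_mm, voxel_mm=VOXEL_MM):
    """Binary ellipsoid footprint with per-axis semi-axes equal to the margins."""
    r = np.maximum(np.round(np.asarray(margins_mm) / voxel_mm).astype(int), 1)
    z, y, x = np.ogrid[-r[0]:r[0] + 1, -r[1]:r[1] + 1, -r[2]:r[2] + 1]
    return (z / r[0]) ** 2 + (y / r[1]) ** 2 + (x / r[2]) ** 2 <= 1.0

def expand_target(mask, motion_mm, setup_mm):
    """Expand the target mask by anisotropic motion margins, then an isotropic setup margin."""
    itv = binary_dilation(mask, structure=ellipsoid_structure(motion_mm))
    return binary_dilation(itv, structure=ellipsoid_structure(np.full(3, float(setup_mm))))

if __name__ == "__main__":
    target = np.zeros((40, 40, 40), dtype=bool)
    target[18:23, 18:23, 18:23] = True        # hypothetical ~5 mm target
    # axis order here: (superior-inferior, anterior-posterior, left-right)
    ptv = expand_target(target, motion_mm=(4.0, 4.0, 1.0), setup_mm=4.0)
    print("target voxels:", target.sum(), "expanded voxels:", ptv.sum())
```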
Statistical Analysis: All statistical analyses were performed using SPSS 18. Baseline characteristics in Table 1 are depicted as mean±SD. Treatment planning data in Table 2 are presented per individual case. Spearman correlation was used for bivariate correlations between the administered dose, the lesion area in electroanatomical mapping, and the calculated lesion volume. Isodose lines were correlated with electroanatomical lesion findings and macroscopic and microscopic lesion outcomes. Median time to complete atrioventricular block was estimated using the Kaplan-Meier estimation model, treating the animal that died prematurely as a censored observation. A P value <0.05 was used as the cutoff to indicate statistical significance. 
TABLE 1: Baseline and Follow-Up Characteristics of All 10 Animals Included Into the Analysis. Values are given per group in the order: All Pigs (n = 10) | Sham Control (n = 3) | AVJ 25 Gy (n = 2) | AVJ 40 Gy (n = 2) | AVJ 50 Gy (n = 1) | AVJ 55 Gy (n = 2).
Mean weight at imaging, kg: 32.02 ± 3.6 | 32.5 ± 4.6 | 31 ± 3 | 34 ± 2 | 28 | 30.4 ± 0.4
Mean weight at irradiation, kg: 32.5 ± 3.8 | — | 32 ± 4 | 33 ± 2 | 29 | 31.4 ± 0.4
Mean duration of follow-up, d: 124.8 ± 30.8 | 18.7 ± 5.6 | 111 | 125 ± 0 | 82 | 138 ± 13
Mean time from CT to irradiation, d: 4.3 ± 1.6 | — | 6 ± 1 | 5 ± 0 | 3 | 2.5 ± 0.5
Target contour diameter (CTV), cm: 0.5 | — | 0.5 | 0.5 | 0.5 | 0.5
Volume receiving target dose, mL: 2.5 ± 0.5 | — | 2.8 ± 0.2 | 2.0 ± 0.4 | 1.9 | 2.8 ± 0.1
Setup time (first image to beam), min: 33.0 ± 11.7 | — | 36.0 ± 15.8 | 24.3 ± 0.9 | 49.4 | 30.6 ± 1.8
Irradiation time (beam on to beam off): 17.2 ± 6.3 | — | 9.9 ± 0.5 | 14.7 ± 2.0 | 19.9 | 25.7 ± 0.3
Total procedure time: 50.2 ± 13.5 | — | 45.9 ± 16.2 | 39.0 ± 2.9 | 69.3 | 56.3 ± 2.1
TABLE 2: Resulting Mean Doses to Organs at Risk From Treatment Planning for Atrioventricular Junction Ablation. Doses are stated for all organs at risk. Only the coronary arteries had to be included into the beam and dose optimization process. Included are only treated, but not sham, animals. In the 50 Gy case, a less strict threshold was applied for protection of the coronary arteries from dose. LCA indicates the contour encasing the left anterior descending and the circumflex coronary arteries; RCA, right coronary artery.
Case No. | Dose, Gy | Maximum Dose in Target, Gy | LCA, Gy | RCA, Gy | Trachea, Gy | Skin, Gy | Esophagus, Gy
1 | 55 | 60.7 | 6.8 | 6.0 | 14.1 | 13.4 | 12.6
2 | 55 | 60.4 | 7.1 | 6.5 | 15.3 | 12.7 | 11.4
3 | 50 | 53.5 | 9.5 | 9.0 | 9.2 | 9.7 | 7.7
4 | 40 | 45.7 | 4.7 | 4.3 | 11.1 | 10.3 | 9.6
5 | 40 | 44.8 | 4.6 | 4.0 | 13.0 | 8.3 | 9.8
6 | 25 | 28.9 | 2.7 | 2.3 | 7.0 | 4.8 | 5.2
7 | 25 | 29.2 | 2.7 | 2.3 | 7.9 | 5.8 | 6.3
Results/General Characteristics: Out of 10 animals, 2 animals were treated with a prescription dose of 55 Gy, 1 animal received 50 Gy, 2 animals received 40 Gy, and 2 animals were treated with 25 Gy. General characteristics of all animals are shown in Table 1. The mean animal weight at baseline was 31.7±2.7 kg. The mean follow-up duration was 120.7±7 days. The mean weight gain during the course of the follow-up was 61.1±5.2 kg. The mean left ventricular ejection fraction at baseline was 70±5%. Contouring and Treatment-Planning Outcomes: FIG. 1 depicts contouring outcomes used for subsequent treatment plan computation, including the target as well as cardiac and surrounding risk structures. The atrioventricular junction ablation lesion was contoured in the superior portion of the triangle of Koch. The mean volume receiving the prescription dose for atrioventricular junction ablation was 2.5±0.5 mL (including blood; Table 1) after target motion and tissue deformation were included. The maximal point doses per individual case to the coronary arteries, esophagus, trachea, and skin are depicted in Table 2. FIG. 2 shows 3 actual treatment-planning outcomes for delivery of 55, 40, and 25 Gy to the atrioventricular junction in 3 planes. 
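As a concrete illustration of the bivariate correlation and time-to-event analyses described in the Statistical Analysis paragraph above, the following listing computes a Spearman correlation with scipy and a minimal Kaplan-Meier median by hand. The doses follow Table 2; the lesion volumes and times to block are hypothetical placeholders, not the study data.

```python
# Illustrative sketch of the statistical analyses described above. The doses follow
# Table 2; lesion volumes and times to block are hypothetical placeholders.
import numpy as np
from scipy.stats import spearmanr

dose_gy = np.array([55, 55, 50, 40, 40, 25, 25], dtype=float)
lesion_volume_ml = np.array([5.8, 4.4, 4.0, 3.5, 2.6, 2.7, 2.5])   # hypothetical

rho, p = spearmanr(dose_gy, lesion_volume_ml)
print(f"Spearman rs={rho:.3f}, P={p:.3f}")

def km_median(times_weeks, event_observed):
    """Minimal Kaplan-Meier median: smallest time where survival drops to <= 0.5."""
    order = np.argsort(times_weeks)
    t, e = np.asarray(times_weeks)[order], np.asarray(event_observed)[order]
    at_risk, surv = len(t), 1.0
    for ti, ei in zip(t, e):
        if ei:                       # event (complete AV block) observed at time ti
            surv *= (at_risk - 1) / at_risk
            if surv <= 0.5:
                return ti
        at_risk -= 1                 # censored animals leave the risk set without an event
    return None

# hypothetical weeks-to-block data; the animal that died prematurely is censored (False)
weeks = [9, 10, 11, 11.2, 12, 13, 6]
observed = [True, True, True, True, True, True, False]
print("KM median time to AV block (weeks):", km_median(weeks, observed))
```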
Restriction of the maximal allowed point dose to the coronary arteries led to a dose distribution that did not have perfect conformity with the target volume, producing relatively high doses anterior to the target volume. Photon Beam Delivery: The mean irradiation time for all groups was 14.3±2.8 minutes (Table 1). Beam delivery for all animals was gated to the expiration phase of the respiratory cycle with a mean duty cycle of 60%. Electrophysiology and Outcomes After Irradiation: Complete atrioventricular block developed in 6 out of 7 treated animals (86%), with a median time to block of 11.2 weeks (SE: 0.490) post-irradiation; 1 animal (25 Gy) died prematurely of a device-related infection and could not be evaluated in a similar fashion. For in vivo characterization of the lesion size that led to atrioventricular block, electroanatomical mapping was conducted. Results of electroanatomical mapping are shown in FIG. 3. The size of the endocardial surface area without electrogram positively correlated with the administered dose (rs=0.971; P=0.001; FIGS. 3 and 4). Complete atrioventricular block was persistent in all animals; in the animal treated with 25 Gy, block occurred during mapping of the atrioventricular junction at the follow-up study. Macroscopic Lesion Outcomes and Correlation to Dose: The positive correlation of macroscopic lesion outcomes with the mapped area and the administered target dose is shown in FIG. 4. Bivariate analysis revealed a positive correlation (rs=0.971; P=0.001) between the calculated macroscopic lesion volume and the administered dose. An exemplary macroscopic lesion, consisting of macroscopically visible fibrosis in the right atrial target region, is shown in FIG. 5A. In addition, isodose line extension led to lesion development in the septal left atrium (FIG. 5B). The mean right atrial lesion volume on pathological analysis for all dose groups was 3.8±1.1 mL. The mean right atrial lesion volume in the 55 Gy group was 5.1±2.9 mL. The mean right atrial lesion volume in the 40 Gy group was 3.0±1.0 mL and in the 25 Gy group was 2.6 mL. In the 55 and 40 Gy animals, concordant with the treatment-planning outcomes, lesions extended anteriorly into the right ventricle and interventricular septum. The mean maximal width of lesion extension into right ventricular myocardium was 17.2±9.1 mm. Lesion Histology/Target Histology: Target tissue analyzed after 3 months of follow-up revealed dense fibrosis, present in the target tissue in all animals of all dose groups (FIGS. 5C and 5D). Similarly, and consistent with macroscopic pathology, fibrosis extended anterior to the contoured area into the interventricular septum in all 3 dose groups. Short-Term Toxicity: No collateral damage was observed in the esophagus, trachea, or other organs at risk. The myocardium of the coronary sinus was also spared in all cases. Coronary arteries did not show a reaction within 3 months of follow-up. No radiation-induced side effects were observed during 4 months of follow-up. The left ventricular ejection fraction did not change during follow-up between sham and irradiated animals (Table 2). Discussion/Main Findings: In this study, we ablated the atrioventricular junction catheter-free using a 6 MV photon beam. Doses of 25 to 55 Gy created lesions that subsequently led to complete atrioventricular conduction block. Point doses to the coronary arteries were optimized to stay <10 Gy, and accordingly, ablation lesions were not fully target conformal. 
Lesion volumes positively correlated with isodose line spread around the target volume and increased with the administered target dose, despite the use of the same targeting margins in each dose group. Targeted tissue revealed dense fibrosis. Fibrosis was not present in the myocardium of the beam entry channels; however, histology revealed evidence of cardiomyocyte apoptosis in these areas. External Photon Beam Radiation for Catheter-Free Ablation: In these chronic, intact-animal studies, photon beams could be appropriately focused for atrioventricular node ablation. Similar to our data with carbon ions (12C), reliable ablation was achieved with 40 Gy. This study illustrates the biophysics of photon beams; the ultimate lesion size will depend on the irradiated target volumes, that is, the target dose and the optimization constraints that shape the dose distribution. Previous studies using the CyberKnife photon accelerator indicated that a dose as low as 25 Gy of photons may create an electrophysiological effect. The data presented here support this finding for the volume irradiated in this study, in which 25 Gy caused a lesion. The time frame for development of atrioventricular block in this study was similar to that in the CyberKnife studies and faster than what we have observed with 12C beams. Irradiation of a Moving Target With External Photon Beams: Even though photon beams are robust in the presence of target motion, to guarantee dose delivery in the presence of contractile target motion, the approach used in this study was to expand the target volume to cover the whole amplitude of contractile motion, a method used for the treatment of mobile tumors in radiation oncology. This conservative approach was chosen to ensure full coverage of the target with the prescription dose, thus allowing investigation of the dose required to achieve the desired ablation effect in the respective target volume. Other techniques, discussed below in the context of other implementations, allow, for example, gating of the photon beam to the ECG to decrease the required irradiation margin size. Respiratory motion could already be mitigated with acceptable efficiency by gating the beam to the expiration phase of the respiratory cycle. Photon Beams Versus Particle Beam Sources: This study illustrates how sparing of risk structures (e.g., the coronary arteries) is possible using photon beams, but also how this leads to higher doses at another location, explaining the observed anterior lesion extension into the interventricular septum. In this study, the volume irradiated with high and low doses of photons was larger than that in our study using 12C particle beams. This translated into not only a greater lesion size but also greater involvement of myocardium located in the beam entry channels. This is because of the different physical properties of these 2 energy sources and the chosen beam arrangements. In photon beam radiation therapy, multiple beam angles are used to concentrate dose in the target region where the beams overlap and to distribute the entry and exit dose of each beam, leading to a larger myocardial volume receiving low-dose radiation. For the plans in this study, each arc comprised 178 distinct photon beams. Longer follow-up times after irradiation will reveal the long-term effects on lesion creation and of exposure of these larger myocardial volumes, in comparison to the different forms of particle therapy (H+, 12C, 4He). 
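Gating the beam to the expiration phase, as discussed above, amounts to enabling the beam only while a respiratory surrogate sits within a predefined amplitude window and then reporting the resulting duty cycle. The following listing is purely illustrative; the synthetic respiration trace and threshold are hypothetical.

```python
# Illustrative sketch of expiration-phase gating of beam delivery based on a respiratory
# amplitude surrogate. The trace and gating threshold are hypothetical placeholders.
import numpy as np

fs = 50.0                                   # surrogate sampling rate, Hz (hypothetical)
t = np.arange(0, 60, 1 / fs)                # one minute of signal
respiration = np.sin(2 * np.pi * t / 4.0)   # hypothetical 4 s breathing cycle

def expiration_gate(signal, threshold=-0.3):
    """Beam-on mask: True while the surrogate is below the expiration threshold."""
    return signal < threshold

gate = expiration_gate(respiration)
print(f"beam-on duty cycle: {gate.mean():.0%}")   # tune the threshold toward e.g. ~60%
```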
Clinical Implications: Adjusting for the differences in anatomy and in the position of risk structures between the porcine heart and the human heart, and adjusting doses, which depend on the finally irradiated myocardial volume and the irradiated myocardial location, the implementation used in this study is applicable, for example, to cardiac arrhythmia ablation in humans. Arrhythmia ablation without the use of catheters has pertinent clinical implications. After performing these initial atrioventricular node ablation studies, we successfully conducted deliveries for pulmonary vein isolation and ventricular myocardial irradiation in the nonarrhythmic animal model. The success rate of catheter ablation in both diseases is still limited, driving our investigations with photon and particle beam therapies. The physical properties of photon beams could make these beams an attractive energy source for ablation whenever larger, deeply situated myocardial volumes are treated that do not require extremely sharp energy fall-off and that cannot be reached from either the endocardial or the epicardial surface. This is the first systematic study using several doses of external photon beam therapy for atrioventricular node ablation in intact animals. For the target volume used here, doses as low as 25 Gy caused electrophysiological and structural myocardial ablation effects. Doses ≥40 Gy created reliable ablation with interruption of cardiac impulse propagation. As discussed above, this study illustrates certain implementations in certain embodiments and does not limit other implementations of these and other embodiments. Example 2: Treatment of Cardiac Arrhythmias In another non-limiting example, another study was performed that demonstrates the extension of 4D treatment dose reconstructions to cardiac motion for ion beam ablation of cardiac arrhythmias in an animal model. Materials and Methods/Animal cohort: The animal numbering is identical in both publications. An overview of the animal cohort is given in Table 3. Animals received carbon ion beam treatment to three different target areas: (1) the atrioventricular junction (AV), (2) the left ventricular free wall (LV), and (3) the junction of the left atrium and the pulmonary veins (PV). For the AV, different target doses were used to study dose-effect relations. For the purpose of this study, the targets differ mainly in size and position, leading to different nearby OARs and to slightly different motion.
TABLE 3: Animal cohort with target (AV: atrioventricular node, LV: left ventricle, PV: pulmonary vein isolation) and dose groups used for the ion-beam ablation study at GSI. The pigs included in the dose reconstruction analysis are marked bold-faced.
animal | target | dose [Gy] | TV [cm3] | PTV [cm3]
1 | AV | 55 | 0.1 | 1.8
2 | AV | 55 | 0.1 | 1.7
3 | AV | 55 | 0.1 | 1.7
4 | AV | 40 | 0.1 | 1.7
5 | AV | 40 | 0.1 | 1.8
6 | AV | 40 | 0.1 | 1.8
7 | AV | 25 | 0.1 | 1.7
8 | AV | 25 | 0.1 | 1.7
12 | PV | 40 | 1.3 | 16.1
13 | PV | 40 | 0.9 | 11.1
14 | PV | 30 | 1.0 | 12.6
15 | LV | 40 | 2.1 | —
16 | LV | 40 | 2.3 | —
17 | LV | 40 | 2.4 | —
Treatment planning and delivery: Briefly, both imaging and irradiation were performed using a custom-built immobilization device and enforced breath-holds of up to 60 sec to suppress respiratory motion. CT data for treatment planning were acquired for all animals using a Siemens Biograph mCT (Siemens Healthcare, Erlangen, Germany). 
For each animal, a surface-ECG-triggered, contrast-enhanced (CE) 4D-CT and a non-contrast-enhanced 4D-CT were acquired. While internal cardiac motion was visible only on the CE 4D-CT, the native CT was used to calculate ion stopping power. For each scan, 10 equally distributed 4D-CT phases of the cardiac cycle were reconstructed and used as a basis for treatment planning. Cardiac motion was assessed using deformable image registration (DIR) of the CE 4D-CT with Plastimatch (Shackleford et al., 2010) to obtain the deformation vector fields (see Table 3 for details). The vector fields were used in conjunction with the native CT to compute 4D-doses using correct estimates for both motion and beam ranges. Targets and OARs were delineated and propagated to all 4D-CT phases. Margins were added to the targets, and subsequently a range-considering ITV (see Graeff C et al., 2012, Motion mitigation in intensity modulated particle therapy by internal target volumes covering range changes, Med. Phys. 39, 6004-13) was computed to form the planning target volume (PTV). For all targets, two laterally opposing fields were used. Plan optimization was performed on the resulting PTV and the native 4D-CT 0% phase, but dose evaluation used 4D-dose calculation under several simulated motion scenarios. Treatments were delivered at the fixed horizontal beam line of GSI, Darmstadt. The beam was gated such that it was delivered only during enforced breath-holds of up to 60 sec. During these breath-holds, irradiation was carried out over the whole cardiac cycle. All plans were rescanned to mitigate interplay, following an inhomogeneous slice-by-slice scheme with 15 rescans in the slice of highest energy and 1 rescan in the lowest. The rationale for this scheme was a reduction in the irradiation duration of around 60% while still achieving adequate 4D-target coverage in treatment planning. ECG signal and beam delivery sequence events: A scheme of the data acquisition system and the acquired signals is given in FIG. 6. We implemented a real-time data acquisition system (DAQ) to simultaneously acquire the surface ECG signal of the animals and the synchronized beam delivery sequence (BDS) using a set of signals provided by the control system. The BDS constitutes the temporal structure of the beam delivery, i.e., the time points at which the beam is switched on or off or at which the irradiation of individual raster points is completed (see FIG. 7). Data acquisition of all signals was performed at a sampling rate of 1 kHz using a Beckhoff EtherCAT system (Beckhoff Automation, Verl, Germany). Delivered treatment plans: The GSI therapy control system (TCS) provides acquisition of the actually delivered beam parameters applied per pencil beam. In detail, these are: (i) the actual lateral pencil beam positions in two dimensions (x, y), as controlled by the position feedback of the beam monitoring system; and (ii) the actually delivered particle number (N), as measured with the ionization chambers of the beam monitoring system, including daily calibration factors. We incorporated these measured data from the GSI treatment records into as-delivered treatment plans that enter our 4D calculations in place of the nominal treatment plans (see also FIG. 6). Due to incomplete treatment records after recovery from an interlock during delivery, 4D-dose reconstructions could not be performed for 3 out of 14 irradiated animals (see Table 3). 
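The substitution of measured per-raster-point parameters (lateral positions x, y and delivered particle number N) for the nominal plan values, as described above, can be sketched as follows. The record layout is hypothetical and does not represent the actual GSI treatment-record format.

```python
# Illustrative sketch (hypothetical record layout) of substituting measured raster-point
# data (x, y positions and particle numbers N) for the nominal plan values, so that the
# 4D-dose calculation uses the plan as actually delivered.
from dataclasses import dataclass, replace
from typing import List, Tuple

@dataclass(frozen=True)
class RasterPoint:
    energy_mev_u: float      # beam energy of the iso-energy slice
    x_mm: float              # lateral scan position
    y_mm: float
    particles: float         # planned or delivered particle number N
    t_done_ms: float = 0.0   # completion time from the beam delivery sequence (BDS)

def as_delivered(nominal: List[RasterPoint],
                 measured_xy: List[Tuple[float, float]],
                 measured_n: List[float],
                 done_times_ms: List[float]) -> List[RasterPoint]:
    """Replace nominal (x, y, N) with measured values and attach BDS timestamps."""
    assert len(nominal) == len(measured_xy) == len(measured_n) == len(done_times_ms)
    return [replace(rp, x_mm=xy[0], y_mm=xy[1], particles=n, t_done_ms=t)
            for rp, xy, n, t in zip(nominal, measured_xy, measured_n, done_times_ms)]

if __name__ == "__main__":
    nominal = [RasterPoint(220.0, 0.0, 0.0, 1.0e6), RasterPoint(220.0, 2.0, 0.0, 1.2e6)]
    delivered = as_delivered(nominal, [(0.1, -0.05), (2.05, 0.02)],
                             [1.02e6, 1.18e6], [12.0, 19.5])
    print(delivered[0])
```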
4D-dose reconstruction interface: A custom-developed graphical user interface (GUI) was implemented using Python and the PyQt framework to provide an intuitive platform to guide the user through the 4D-dose reconstruction stages (see FIG. 1). The GUI served as a database to manage the acquired ECG and BDS data as well as animal-specific treatment planning data. Further signal processing steps were performed by triggering external programs and keeping track of the results. Moreover, the GUI was used to generate and organize the required input and steering files for 4D-dose calculation with TRiP4D. Resulting 4D-dose distributions were fed back into the GUI's database and could be passed to external visualization software for further analysis. In the following sections, the ECG signal processing and 4D-dose calculation steps are described in detail. R-wave detection algorithm: The ECG signal recorded during irradiation was used as a motion surrogate to map phases of the ECG cycle to the corresponding 4D-CT phase. To this end, the R-waves of the surface ECG (see FIG. 1) were detected with a non-real-time signal processing algorithm based on the method described by Pan and Tompkins (see Pan J and Tompkins W J 1985 A real-time QRS detection algorithm IEEE Trans Biomed Eng 32 230-6). The algorithm was implemented in an in-house C program as follows:
1. Band-pass filtering in the frequency range of 8-55 Hz; this filter was implemented using a Fast Fourier Transform.
2. Signal differentiation using a five-point derivative (Pan and Tompkins, 1985) and pointwise squaring to enhance the R-waves and increase the signal-to-noise ratio.
3. Temporal averaging of the differentiated and squared signal over 120 samples, corresponding to 120 ms at our sampling rate of 1 kHz.
4. Maximum search in the filtered ECG signal within a time window defined by discriminating the time-averaged signal against a fixed threshold, defined as the mean value of the time-averaged signal plus 0.5 times its root-mean-square. The largest local maximum within each window was identified as an R-wave candidate.
5. To avoid potential erroneous detection or oversensing of T and P waves, a subsequent R-wave selection step was executed, comparing the running mean of the R-R distance (RRmean) over the last 8 detected R-waves against the R-R distances (RR) between the current candidate (CND) and its predecessor (PRE) and successor (SUC), respectively. If the distance from PRE to SUC was <1.5 RRmean, either SUC or CND was rejected, depending on which RR was in better agreement with RRmean.
In agreement with the algorithm used by the CT scanner during acquisition, motion states were then distributed over the R-R distances in 10 equidistant steps and identified with the corresponding 4D-CT phases, as illustrated in FIG. 7. 4D-dose calculation: 4D-dose calculation was performed with the 4D treatment simulation functionality of TRiP4D. Details have been published elsewhere. Some of the steps uniquely applied in this study are briefly introduced here:
1. Mapping of each raster point to the respective 4D-CT phase based on the pre-processed ECG signal and the temporally correlated BDS, as illustrated in FIG. 2. The mapping results in a 4D set of treatment plans, each containing the raster points delivered in the respective 4D-CT phase.
2. 4D physical dose calculation based on the 4D treatment plan. Contributions to each dose voxel are accumulated on the reference 4D-CT phase by transforming the dose grid using the DIR vector fields and considering the changing densities of the 4D-CT. 
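The R-wave detection steps listed above can be illustrated with a compact numpy sketch. It is a simplified illustration in the spirit of the Pan and Tompkins approach rather than the in-house C implementation; the synthetic ECG and the omission of the T/P-wave rejection step (step 5) are simplifications for demonstration only.

```python
# Simplified illustration of the R-wave detection chain described above
# (band-pass 8-55 Hz, five-point derivative, squaring, 120 ms moving average,
# threshold = mean + 0.5 * rms, window maximum). Synthetic ECG for demonstration only;
# the T/P-wave rejection step (step 5 above) is omitted for brevity.
import numpy as np

FS = 1000  # sampling rate in Hz, matching the 1 kHz acquisition described above

def bandpass_fft(x, lo=8.0, hi=55.0, fs=FS):
    """Crude FFT band-pass: zero all spectral bins outside [lo, hi] Hz."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spec, n=len(x))

def detect_r_waves(ecg, fs=FS):
    filt = bandpass_fft(ecg, fs=fs)
    deriv = np.convolve(filt, [1, 2, 0, -2, -1], mode="same")            # approx. derivative
    feature = np.convolve(deriv ** 2, np.ones(120) / 120, mode="same")   # 120-sample average
    thr = feature.mean() + 0.5 * np.sqrt(np.mean(feature ** 2))          # mean + 0.5 * rms
    mask = np.r_[0, (feature > thr).astype(int), 0]
    edges = np.flatnonzero(np.diff(mask))
    # one R-wave candidate per above-threshold window: the maximum of the filtered signal
    return np.array([s + int(np.argmax(filt[s:e])) for s, e in zip(edges[::2], edges[1::2])])

if __name__ == "__main__":
    t = np.arange(0, 5, 1 / FS)
    ecg = 0.05 * np.sin(2 * np.pi * 1.0 * t)                             # baseline wander
    for beat in np.arange(0.4, 5, 0.8):                                  # synthetic R-spikes
        ecg += np.exp(-((t - beat) ** 2) / (2 * 0.01 ** 2))
    print("detected R-wave times (s):", detect_r_waves(ecg) / FS)
```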
Dose reconstruction was performed individually for each field. Total treatment dose distributions were subsequently formed by direct summation of the physical dose for both fields. Data analysis: For each total dose distribution, the mean dose (D̄) delivered to the TV, the volume receiving at least 95% of the planned dose (V95), and the homogeneity index HI = D5 - D95 were assessed. D5 and D95 denote the dose received by 5% and 95% of the volume, respectively. D95 was also analyzed independently to determine the quality of dose coverage. For each OAR we report the mean dose (D̄) and the maximum point dose (Dmax). Results: 4D-dose reconstructions were performed with TRiP4D via the custom-developed GUI, allowing efficient signal processing and data preparation. First calculations were conducted for single fields within about 30 min after treatment for some of the animals and allowed preliminary dose quality assurance. Results presented here were obtained from the final calculations conducted offline. Observed cardiac motion from deformable image registration in the 4D-CTs was below 5 mm for all animals and targets, in line with motion described in humans. Average amplitudes were 3.8 (range: 2.2-4.8) mm, 2.9 (1.8-3.9) mm, and 2.8 (1.8-4.4) mm for the AV, PV, and LV target groups, respectively. Total irradiation times per field, including respiratory gating, were 9-21 min. Target coverage: FIG. 8 shows the results for the reconstructed D95 values of the TV and PTV, respectively. FIG. 9 shows dose cuts for reconstructed and planned 4D-doses for animals from all target groups. TV D95 values were >95% for all but one animal. The lower value for animal #8 was caused by technical problems leading to misdelivered raster points for part of the TV. For LV and PV targets, planned and reconstructed TV D95 were comparable, while for AV targets larger variation was observed. Deviations for PTV D95 were larger than for TV D95, in particular for animals #2, #6, and #8 irradiated at the AV. It should be noted that the planned 4D-dose already showed a reduced D95 for these animals in the PTV (data not shown). Table 4 lists reconstructed V95, HI, and D̄ for the TV and PTV volumes for all animals. With the exception of pig #8, all animals exhibited a TV V95 of about 100%. PTV coverage was slightly reduced for most animals, and to a larger extent for several animals of the AV group. HI values in the PTV exhibit larger variability, indicating increased dose inhomogeneity compared to the TV, in particular for AV targets. The TV and PTV volumes both show a systematic increase in D̄ of around 5% with respect to the static dose reconstruction, which lies at about 100% of the planned dose.
TABLE 4: 4D-dose reconstruction results. Listed are the volumes receiving at least 95% of the planned dose (V95), the homogeneity index (HI), and the mean dose (D̄) in the TV and PTV volumes, respectively. Asterisks mark identical values for PTV and TV results, due to the fact that no additional PTV margins were added for LV targets.
animal | target | TV V95 [%] | TV HI [%] | TV D̄ [%] | PTV V95 [%] | PTV HI [%] | PTV D̄ [%]
1 | AV | 100.0 | 4.7 | 109.0 | 100.0 | 7.0 | 107.0
2 | AV | 100.0 | 14.8 | 111.0 | 81.2 | 23.1 | 106.0
3 | AV | 100.0 | 9.6 | 104.0 | 94.8 | 17.6 | 103.0
5 | AV | 100.0 | 5.6 | 107.0 | 99.9 | 9.7 | 105.0
6 | AV | 98.0 | 5.6 | 97.8 | 64.1 | 16.0 | 95.6
8 | AV | 68.2 | 14.0 | 97.0 | 63.1 | 28.5 | 95.8
12 | PV | 100.0 | 9.8 | 104.0 | 95.0 | 12.9 | 102.0
14 | PV | 100.0 | 7.6 | 105.0 | 98.3 | 10.4 | 103.0
15 | LV | 100.0 | 8.4 | 105.0 | 100.0* | 8.4* | 105.0*
16 | LV | 100.0 | 8.0 | 104.0 | 100.0* | 8.0* | 104.0*
17 | LV | 100.0 | 11.5 | 105.0 | 100.0* | 11.5* | 105.0*
Organs at risk: OAR exposure in comparison to the planned dose is reported in FIG. 10, relative to the planning dose constraints. 
The median difference was 0.1%, and the standard deviation was 4.5%. The two outliers receiving a lower dose were the ascending aorta in animals #3 and #8, where it was in close vicinity to the target. The single over-dosed OAR was the LCA in animal #12, where the maximum point dose constraint of 30 Gy had already been violated in treatment planning. Discussion/Overview: In this study, we developed and successfully applied a 4D-dose reconstruction technique based on measured beam delivery sequences and, for the first time, for cardiac motion detected via a surface ECG surrogate. In contrast to previous applications of 4D-dose reconstruction, the workflow was improved so that preliminary data could be evaluated shortly after the irradiation. This permitted additional QA with respect to the irradiation of subsequently treated animals. The reconstructed 4D-dose distributions showed acceptable target coverage (D95) for most of the treated animals, especially for LV free-wall and PV targets. The considerably smaller AV target volume showed reduced coverage of the PTV in some animals and, in retrospective data analysis, also increased dose inhomogeneity throughout the TV and PTV volumes. This indicates that the applied rescanning approach could not fully mitigate interplay effects for the extremely small volumes and would have to be modified, for instance by increasing the number of rescans, to provide increased robustness. To a smaller degree, PV targets irradiated with IMPT also showed a residual impact of interplay (see Table 4). Importance of respiratory motion suppression: Planning for and compensating respiratory and cardiac motion is important for reducing side effects. In contrast to respiratory motion, where internal-external correlation mismatches and baseline drifts can be sources of substantial uncertainties, the impact of ECG variability on the reconstructed 4D-dose can be expected to be much smaller. Because of their shared physiological origin, the ECG and cardiac motion are highly correlated during normal sinus rhythm. Therefore, with changes in heart rate covered by R-peak detection, the ECG could form an adequate surrogate for cardiac motion. Other methods to obtain a surrogate for heart motion can also be used, such as continuous-wave radar to detect the heart rate and phase; advantages of this approach are that it does not require any instrumentation to be in contact with the patient's skin, and it results in absolute motion amplitudes. This approach can improve cardiac irradiations, as it would make cardiac amplitudes available during irradiation. In another approach, heart and/or respiratory motion signals can be derived intrinsically from the raw data information at the CT reconstruction stage. This method could be combined with a surrogate available during irradiation to identify motion phases online. 4D-dose reconstruction: Online 4D-dose calculation is improved by implementing a GUI to optimize the 4D workflow. Using this GUI we could substantially accelerate the 4D-dose reconstruction workflow and obtain dose reconstruction results within minutes or hours instead of days. By reducing current limitations, such as manual data transfer, and by further accelerating data processing and dose calculation, 4D-dose reconstructions can be performed immediately after treatment to obtain results within a few minutes. 
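The coverage metrics referenced throughout the results above (D95, D5, V95, HI, and the mean dose) reduce to simple percentile operations over the dose values within a structure. The following Python listing is purely illustrative; the dose array is a random placeholder and does not correspond to any reported data.

```python
# Illustrative computation of the dose metrics defined above (D95, D5, V95, HI, mean dose)
# for the dose values inside a structure. The dose array below is a random placeholder.
import numpy as np

def dvh_metrics(dose_in_volume, planned_dose):
    d = np.asarray(dose_in_volume, dtype=float)
    d95 = np.percentile(d, 5)             # dose received by at least 95% of the volume
    d5 = np.percentile(d, 95)             # dose received by at least 5% of the volume
    return {
        "mean_dose": d.mean(),
        "D95": d95,
        "V95_percent": 100.0 * np.mean(d >= 0.95 * planned_dose),
        "HI": d5 - d95,                    # homogeneity index HI = D5 - D95
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    planned = 40.0                                       # Gy, hypothetical prescription
    dose = rng.normal(1.05 * planned, 1.5, size=5000)    # placeholder voxel doses in the TV
    print({k: round(v, 2) for k, v in dvh_metrics(dose, planned).items()})
```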
The improvements of our method are by no means limited to cardiac treatments but can readily be applied, for example, to the treatment of cancer patients as well as to 4D phantom measurements, e.g., for plan verification. In the implementation of this study, the reconstruction workflow uses the acquisition and processing of a surface ECG signal instead of a breathing trace. R-wave detection enabled determination of the respective ECG phase, which could be correlated to the cardiac 4D-CT phase (see FIG. 7). Since our 4D treatment planning system (4DTPS) is capable of using the signal phase to generate a 4D treatment plan, no adjustments were required on the TPS side. However, if both respiratory and cardiac motion are present, a more general approach may be used. If a 4D-CT is acquired such that it provides all N cardiac phases for each of the M respiratory phases, i.e., it has K = N×M phases, the current 4D-CT phase can be determined using the combination of a respiratory and an ECG surrogate and mapping the two-dimensional phase index (n, m) to a one-dimensional one: (n, m)→k = 1 . . . K. In this way, 4D-dose reconstruction for mixed organ motion can be performed for a K-phase 4D-CT without changes to the 4D-dose calculation algorithm in our TPS. However, it should be noted that image registration is required for mapping all K phases to a single reference phase in certain implementations. Such an approach could be used either to treat free-breathing patients or to include breath-hold variability in a simulation study or dose reconstruction, provided that appropriate images are available. Improved image guidance for more precise dose reconstruction: In certain configurations, application of cone-beam CT (CBCT) or online MRI could substantially reduce positioning uncertainties due to improved soft tissue contrast. In other configurations, online MRI could offer both excellent soft tissue identification and possibly also time-resolved targeting options, provided that MR image formation can be achieved at sufficient speed, image quality, and resolution. This study thus demonstrates surface-ECG-based 4D-dose reconstruction for scanned ion beam treatment of electrophysiological target sites in the beating heart in a setting similar to clinical patient treatments. Estimation of the 4D delivered dose can help ensure safe treatment of cardiac structures and is a helpful tool for dose verification. Beyond treatment of cardiac arrhythmia, ion beam treatment of moving targets in radiotherapy of cancer will benefit from these improvements as well. As suggested, focused photonic therapy could be used in Langendorff preparations and in situ to ablate the AV node without the use of catheters. This has been extended further to AV nodes, atrial tissue, and ventricular myocardium in intact pigs, with hadron therapy delivered in pencil beam formats to destroy arrhythmogenic tissue without using catheters. It is noted, however, that the linear accelerator based system is not restricted to the above applications. This approach could be used in such diverse applications as targeting of renal arteries to treat hypertension, treatment of seizures, occlusion of cardiac holes, noninvasive treatment of gastrointestinal maladies, modulation of nerve fibers, etc. 
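The combined cardiac/respiratory phase bookkeeping described above is a simple index mapping. The following illustrative listing assumes 0-based phase indices and a hypothetical number of respiratory phases.

```python
# Illustrative sketch of mapping a (cardiac, respiratory) phase pair to a single 4D-CT
# phase index k for a K = N x M phase data set, as described above. Indices are 0-based.
N_CARDIAC = 10      # cardiac phases per respiratory phase (as in the 10-phase 4D-CTs above)
M_RESP = 8          # hypothetical number of respiratory phases

def combined_phase(cardiac_phase: int, resp_phase: int,
                   n_cardiac: int = N_CARDIAC, m_resp: int = M_RESP) -> int:
    """Map (n, m) -> k with k in [0, n_cardiac * m_resp)."""
    if not (0 <= cardiac_phase < n_cardiac and 0 <= resp_phase < m_resp):
        raise ValueError("phase index out of range")
    return resp_phase * n_cardiac + cardiac_phase

def split_phase(k: int, n_cardiac: int = N_CARDIAC) -> tuple:
    """Inverse mapping k -> (cardiac_phase, resp_phase)."""
    return k % n_cardiac, k // n_cardiac

if __name__ == "__main__":
    k = combined_phase(cardiac_phase=3, resp_phase=5)
    print(k, split_phase(k))    # 53, (3, 5)
```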
For noncardiac applications, contouring may be site specific. For example, for the perirenal nerves, contouring of contractile motion may not be needed, but intraabdominal respiratory gating, descending aorta pulsations, other arterial pulsations, and ureteric and renal pelvis peristalsis may need to be accounted for. Multiple, independently motile organs and structures may exist in close proximity. For the cardiac ventricular chambers and systemic arterial structures such as the aorta, descending aorta, iliac artery, carotid artery, etc., the electrocardiogram may be used as a trigger, with variable time offsets that increase with distance from the ventricles (a greater time delay to the iliac artery compared to the ascending aorta, etc.), to identify and track the movement of these structures. Contouring and modeling with the known geometry of the cylindrical aorta versus the hemispherical aortic sinus of Valsalva, etc., may still be reliably approximated from knowledge of the onset of systole, whose surrogate is the beginning of the QRS complex. On the other hand, venous, smooth muscle, and palatal muscle movement is not reliably predicted based on the electrocardiogram. For these, modification of both the method for tracking movement and the linear accelerator may be required for effective therapy. For instance, a simpler, smaller linear accelerator without sophisticated tracking and contouring may be used for structures such as the perinephric autonomic plexuses and nerves, since movement of the kidney and its related vessels, other than the renal artery (which can be tracked like other arteries based on the cardiac cycle), is minimal. However, in some instances, adequate knowledge of random skeletal muscle movement, as well as of the peristaltic movement seen in smooth muscle, including in the ureter and gastrointestinal tract, may be essential for successful treatment of pathology around these structures. Here, a modification includes a vest or girdle placed on the patient to track impedance and mechanical movements in real time, with the ECG-based contouring and tracking of large vessels subtracted from these overall changes in three-dimensional impedance and mechanical movement. Based on this, sinuous peristaltic movement may be distinguished and related in depth to a structure in the region of projection known to produce such motility. Similarly, stimulating beams may be used to stimulate skeletal muscle and/or smooth muscle, with the resulting change in motion then attributed to a particular structure with its own uniquely identified pattern of impedance change. Further modification of the linear accelerator to include an adjunctive, adjoined, or integrated ultrasound beam delivery device, so as to mechanically stimulate or move sensitive structures, wherein the ultrasound beam is synchronized to particle beam delivery, is an essential part of some applications of the present disclosure. For example, a hiatal hernia may juxtapose itself and gastric/intestinal contents through the foramen of Bochdalek or the foramina of Morgagni, which are not uncommon gaps in the diaphragm that normally separates the heart from these structures capable of peristalsis; failure to recognize and differentiate this mobility may result in serious complications when delivering energy intended for arrhythmogenic cardiac substrate rather than for the intestinal structures. 
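The vest-based tracking described above amounts to removing the cardiac-cycle-locked component of a surface impedance signal so that slower, non-cardiac components such as peristalsis remain. The listing below is purely illustrative; the signal model, sampling rate, and R-wave times are synthetic placeholders for the ensemble-average subtraction idea.

```python
# Illustrative sketch: subtract the cardiac-cycle-locked (ECG-gated) component of a
# surface impedance trace to expose slower, non-cardiac motion such as peristalsis.
# The signal, R-wave times, and sampling rate are synthetic placeholders.
import numpy as np

FS = 100.0
t = np.arange(0, 30, 1 / FS)
rr = 0.8                                          # hypothetical fixed R-R interval, s
cardiac = 0.5 * np.sin(2 * np.pi * t / rr)        # cycle-locked vessel component
peristalsis = 0.3 * np.sin(2 * np.pi * t / 12.0)  # slow peristaltic component
impedance = cardiac + peristalsis

def remove_cardiac_component(signal, r_times, fs=FS):
    """Ensemble-average the signal over cardiac cycles and subtract that template."""
    beat_len = int(round(np.mean(np.diff(r_times)) * fs))
    starts = (np.asarray(r_times) * fs).astype(int)
    beats = [signal[s:s + beat_len] for s in starts if s + beat_len <= len(signal)]
    template = np.mean(beats, axis=0)
    residual = signal.copy()
    for s in starts:
        n = min(beat_len, len(signal) - s)
        residual[s:s + n] -= template[:n]
    return residual

r_times = np.arange(0, 30, rr)
residual = remove_cardiac_component(impedance, r_times)
print("fraction of signal power removed:", round(1 - np.var(residual) / np.var(impedance), 2))
```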
The present invention has been described in terms of one or more preferred embodiments, and it should be appreciated that many equivalents, alternatives, variations, additions, and modifications, aside from those expressly stated, and apart from combining the different features of the foregoing embodiments in varying ways, can be made and are within the scope of the invention. In the above description, a number of specific details, examples, and scenarios are set forth in order to provide a better understanding of the present disclosure. These examples and scenarios are provided for illustration, and are not intended to limit the disclosure in any way. The true scope of the invention will be defined by the claims included in this and any later-filed patent applications in the same family. Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation. References in the specification to an “embodiment,” an “example,” a “version,” an “implementation,” a “configuration,” an “instance,” an “iteration,” etc., indicate that the embodiment, example, version, etc. described may include one or more particular features, structures, or characteristics, but not every embodiment, example, version, etc. necessarily incorporates the particular features, structures, or characteristics. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is believed to be within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly indicated. The computerized functionality described above may be implemented in hardware, firmware, software, single integrated devices, multiple devices in wired or wireless communication, or any combination thereof. Computerized functions may be implemented as instructions stored using one or more machine-readable media, which may be read and executed by one or more processors. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine. For example, a machine-readable medium may include any suitable form of volatile or non-volatile memory. In the drawings, specific arrangements or orderings of schematic elements may be shown for ease of description. However, the specific ordering or arrangement of such elements is not meant to imply that a particular order or sequence of processing, or separation of processes, is required in all embodiments. Further, some connections or relationships between elements may be simplified or not shown in the drawings so as not to obscure the disclosure. This disclosure is to be considered as exemplary and not restrictive in character, and all changes and modifications that come within the spirit of the disclosure are desired to be protected.
68,726
11857809
DETAILED DESCRIPTION OF THE INVENTION The invention provides compositions featuring TRP-4 polypeptides and polynucleotides, methods for expressing such polypeptides and polynucleotides in a cell type of interest, and methods for inducing the activation of the TRP-4 polypeptide in neurons and other cell types using ultrasound. The invention is based, at least in part, on the discovery that misexpression of TRP-4, a pore-forming subunit of a mechanotransduction channel, sensitizes cells to an ultrasound stimulus resulting in calcium influx and motor outputs. Accordingly, this approach can be used to alter cellular functions in vivo. Accordingly, the invention provides polynucleotides encoding a TRP4 polypeptide, expression vectors comprising such polynucleotides, cells expressing a recombinant TRP4 polypeptide, and methods for stimulating such cells with ultrasound. Ultrasound Ultrasound is well suited for stimulating neuron populations as it focuses easily through intact thin bone and deep tissue (K. Hynynen and F. A. Jolesz,Ultrasound Med Biol24 (2), 275 (1998)) to volumes of just a few cubic millimeters (G. T. Clement and K. Hynynen,Phys Med Biol47 (8), 1219 (2002)). The non-invasive nature of ultrasound stimulation is particularly significant for manipulating vertebrate neurons including those in humans, as it eliminates the need for surgery to insert light fibers (required for some current optogenetic methods). Also, the small focal volume of the ultrasound wave compares well with light that is scattered by multiple layers of brain tissue (S. I. Al-Juboori, A. Dondzillo, E. A. Stubblefield et al.,PLoS ONE8 (7), e67626 (2013)). Moreover, ultrasound has been previously used to manipulate deep nerve structures in human hands and reduce chronic pain (W. D. O'Brien, Jr.,Prog Biophys Mol Biol93 (1-3), 212 (2007); L. R. Gavrilov, G. V. Gersuni, O. B. Ilyinsky et al.,Prog Brain Res43, 279 (1976)). The invention provides for novel non-invasive compositions for the expression of TRP4 in cells, and methods to stimulate cells expressing TRP4 using low-intensity ultrasound stimulation. Cellular Compositions Comprising Recombinant TRP-4 The invention provides cells comprising a recombinant nucleic acid molecule encoding a TRP-4 polypeptide. In one embodiment, the invention provides a cardiac muscle cell comprising a TRP-4 polynucleotide under the control of a promoter suitable for expression in a cardiac cell (e.g., NCX1 promoter). In another embodiment, the invention provides a muscle cell comprising a TRP-4 polynucleotide under the control of a promoter suitable for expression in a muscle cell (e.g., myoD promoter). In another embodiment, the invention provides an insulin secreting cell (e.g., beta islet cell) comprising a TRP-4 polynucleotide under the control of a promoter suitable for expression in an insulin-secreting cell (e.g., Pdx1 promoter). In another embodiment, the invention provides an adipocyte comprising a TRP-4 polynucleotide under the control of a promoter suitable for expression in an adipocyte (e.g., iaP2). In another embodiment, the invention provides a neuron comprising a TRP-4 polynucleotide under the control of a promoter suitable for expression in a neuron (e.g., nestin, Tuj 1 promoter), in a motor neuron (e.g., H2b promoter), in an interneuron (e.g., Islet 1 promoter), in a sensory neuron (e.g., OMP promoter, T1R, T2R promoter, rhodopsin promoter, Trp channel promoter). Such cells may be cells in vitro or in vivo. 
In particular embodiments, the cells express a mechanotransduction polypeptide that is a transient receptor potential channel-N (TRPN) polypeptide that is sensitive to ultrasound. In particular embodiments, the mechanotransduction polypeptide is TRP-4 or a functional portion or homolog thereof. In embodiments, the mechanotransduction polypeptide comprises or consists of the amino acid sequence of SEQ ID NO:1. Expression of Recombinant TRP-4 In one approach, a cell of interest (e.g., a neuron, such as a motor neuron, sensory neuron, neuron of the central nervous system, or neuronal cell line) is engineered to express a TRP-4 polynucleotide whose expression renders the cell responsive to ultrasound stimulation. Ultrasound stimulation of such cells induces cation influx. TRP-4 may be constitutively expressed, or its expression may be regulated by an inducible promoter or other control mechanism where conditions necessitate highly controlled regulation or timing of the expression of a TRP-4 protein. For example, heterologous DNA encoding a TRP4 gene to be expressed is inserted into one or more pre-selected DNA sequences. This can be accomplished by homologous recombination or by viral integration into the host cell genome. The desired gene sequence can also be incorporated into a cell, particularly into its nucleus, using a plasmid expression vector and a nuclear localization sequence. Methods for directing polynucleotides to the nucleus have been described in the art. The genetic material can be introduced using promoters that will allow the gene of interest to be positively or negatively induced using certain chemicals/drugs, to be eliminated following administration of a given drug/chemical, or to be tagged to allow induction by chemicals or expression in specific cell compartments. Calcium phosphate transfection can be used to introduce plasmid DNA containing a target gene or polynucleotide into cells and is a standard method of DNA transfer to those of skill in the art. DEAE-dextran transfection, which is also known to those of skill in the art, may be preferred over calcium phosphate transfection where transient transfection is desired, as it is often more efficient. Since the cells of the present invention are isolated cells, microinjection can be particularly effective for transferring genetic material into the cells. This method is advantageous because it provides delivery of the desired genetic material directly to the nucleus, avoiding both cytoplasmic and lysosomal degradation of the injected polynucleotide. Cells can also be genetically modified using electroporation. Liposomal delivery of DNA or RNA to genetically modify the cells can be performed using cationic liposomes, which form a stable complex with the polynucleotide. For stabilization of the liposome complex, dioleoyl phosphatidylethanolamine (DOPE) or dioleoyl phosphatidylcholine (DOPC) can be added. Commercially available reagents for liposomal transfer include Lipofectin (Life Technologies). Lipofectin, for example, is a mixture of the cationic lipid N-[1-(2,3-dioleyloxy)propyl]-N,N,N-trimethylammonium chloride and DOPE. Liposomes can carry larger pieces of DNA, can generally protect the polynucleotide from degradation, and can be targeted to specific cells or tissues. Cationic lipid-mediated gene transfer efficiency can be enhanced by incorporating purified viral or cellular envelope components, such as the purified G glycoprotein of the vesicular stomatitis virus envelope (VSV-G). 
Gene transfer techniques which have been shown effective for delivery of DNA into primary and established mammalian cell lines using lipopolyamine-coated DNA can be used to introduce target DNA into the de-differentiated cells or reprogrammed cells described herein. Naked plasmid DNA can be injected directly into a tissue comprising cells of interest. Microprojectile gene transfer can also be used to transfer genes into cells either in vitro or in vivo. The basic procedure for microprojectile gene transfer was described by J. Wolff in Gene Therapeutics (1994), page 195. Similarly, microparticle injection techniques have been described previously, and methods are known to those of skill in the art. Signal peptides can be also attached to plasmid DNA to direct the DNA to the nucleus for more efficient expression. Viral vectors are used to genetically alter cells of the present invention and their progeny. Viral vectors are used, as are the physical methods previously described, to deliver one or more polynucleotide sequences encoding TRP4, for example, into the cells. Viral vectors and methods for using them to deliver DNA to cells are well known to those of skill in the art. Examples of viral vectors that can be used to genetically alter the cells of the present invention include, but are not limited to, adenoviral vectors, adeno-associated viral vectors, retroviral vectors (including lentiviral vectors), alphaviral vectors (e. g., Sindbis vectors), and herpes virus vectors. Targeted Cell Types TRP-4 can be expressed in virtually any eukaryotic or prokaryotic cell of interest. In one embodiment, the cell is a bacterial cell or other pathogenic cell type. In another embodiment, the cell is a mammalian cell, such as an adipocyte, muscle cell, cardiac muscle cell, insulin secreting cell (e.g., beta islet cell), and neuron (e.g., motor neuron, sensory neuron, neuron of the central nervous system, and neuronal cell line). Methods of Stimulating a Neural Cell The methods provided herein are, inter alia, useful for the stimulation (activation) of cells. In particular, ultrasound stimulation induces cation influx, thereby altering cell activity. Expression of TRP-4 in a pathogen cell (bacteria) and subsequent ultrasound stimulation induces cation influx and bacterial cell killing. Ultrasound stimulation of a muscle cell expressing TRP-4 results in muscle contraction. This can be used to enhance muscle contraction or functionality in subjects in need thereof, including subjects suffering from muscle weakness, paralysis, or muscle wasting. Altering the intensity of the ultrasound modulates the extent of muscle activity. The term “neural cell” as provided herein refers to a cell of the brain or nervous system. Non-limiting examples of neural cells include neurons, glia cells, astrocytes, oligodendrocytes and microglia cells. Where a neural cell is stimulated, a function or activity (e.g., excitability) of the neural cell is modulated by modulating, for example, the expression or activity of a given gene or protein (e.g., TRP-4) within said neural cell. The change in expression or activity may be 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90% or more in comparison to a control (e.g., unstimulated cell). In certain instances, expression or activity is 1.5-fold, 2-fold, 3-fold, 4-fold, 5-fold, 10-fold or higher than the expression or activity in the absence of stimulation. 
In certain instances, expression or activity is 1.5-fold, 2-fold, 3-fold, 4-fold, 5-fold, 10-fold or lower than the expression or activity in the absence of stimulation. The neural cell may be stimulated by applying an ultrasonic wave to the neural cell. The term “applying” as provided herein is used in accordance with its plain ordinary meaning and includes the meaning of the terms contacting, introducing and exposing. An “ultrasonic wave” as provided herein is an oscillating sound pressure wave having a frequency greater than the upper limit of the human hearing range. Ultrasound (ultrasonic wave) is thus not separated from ‘normal’ (audible) sound by differences in physical properties, only by the fact that humans cannot hear it. Although this limit varies from person to person, it is approximately 20 kilohertz (20,000 hertz) in healthy, young adults. Ultrasound (ultrasonic wave) devices operate with frequencies from 20 kHz up to several gigahertz. The methods provided herein use the energy of an ultrasonic wave to stimulate a neural cell expressing an exogenous mechanotransduction protein. A mechanotransduction protein as provided herein refers to a cellular protein capable of converting a mechanical stimulus (e.g., sound, pressure, movement) into chemical activity. Cellular responses to mechanotransduction are variable and give rise to a variety of changes and sensations. In embodiments, the mechanotransduction protein is a mechanically gated ion channel, which makes it possible for sound, pressure, or movement to cause a change in the excitability of a cell (e.g., a sensory neuron). The stimulation of a mechanotransduction protein may cause mechanically sensitive ion channels to open and produce a transduction current that changes the membrane potential of a cell. In one aspect, a method of stimulating a cell is provided. The method includes (i) transfecting a cell with a recombinant vector including a nucleic acid sequence encoding an exogenous mechanotransduction polypeptide, thereby forming a transfected cell. (ii) To the transfected cell an ultrasonic wave is applied, thereby stimulating a cell. In embodiments, the mechanotransduction polypeptide is a transient receptor potential channel-N(TRPN) polypeptide or homolog thereof. In embodiments, the mechanotransduction polypeptide is TRP-4 or a functional portion or homolog thereof. In embodiments, the mechanotransduction polypeptide includes the amino acid sequence of TRP4 SEQ ID NO:1. In embodiments, the mechanotransduction polypeptide is the sequence of SEQ ID NO:1. In embodiments, the ultrasonic wave has a frequency of about 0.8 MHz to about 4 MHz. In embodiments, the ultrasonic wave has a frequency of about 1 MHz to about 3 MHz. In embodiments, the ultrasonic wave has a focal zone of about 1 cubic millimeter to about 1 cubic centimeter. In embodiments, the method further includes before the applying of step (ii) contacting the transfected neural cell with an ultrasound contrast agent. In embodiments, the ultrasound contrast agent is a microbubble. In embodiments, the microbubble has a diameter of about 1 μm to about 6 μm. In embodiments, the neural cell forms part of an organism. In embodiments, the organism is a bacterial cell or mammalian cell (e.g., human, murine, bovine, feline, canine). Methods of Treatment In another aspect, a method of treating a neurological disease in a subject in need thereof is provided. 
The method includes (i) administering to a subject a therapeutically effective amount of a recombinant nucleic acid encoding an exogenous mechanotransduction polypeptide (e.g., TRP-4). In step (ii) an ultrasonic wave is applied to the subject, resulting in a change in TRP-4 conductance, i.e., cation influx. In one embodiment, the methods treat a cardiac disease by enhancing cardiac muscle activity or neurological disease by altering neural activity in the subject. In embodiments, the neurological disease is Parkinson Disease, depression, obsessive-compulsive disorder, chronic pain, epilepsy or cervical spinal cord injury. In embodiments, the neurological disease is retinal degeneration or atrial fibrillation. In embodiments, the mechanotransduction polypeptide is a transient receptor potential channel-N(TRPN) polypeptide or homolog thereof. In embodiments, the mechanotransduction polypeptide is TRP-4 or a functional portion or homolog thereof. In embodiments, the method further includes before the applying of step (ii) administering to the subject an ultrasound contrast agent. In embodiments, the ultrasound contrast agent is a microbubble. In embodiments, the microbubble has a diameter of about 1 μm to about 6 μm, and is injected into the body (e.g., the brain) where it enhances ultrasound stimulation. EXAMPLES Reliable activation of identified neurons, particularly those in deeper brain regions remains a major challenge in neuroscience. Here, Applicants demonstrate low intensity ultrasound as a non-invasive trigger to activate neurons in the nematode,Caenorhabditis elegans. Applicants show that neuron-specific misexpression of TRP-4, the pore-forming subunit of a mechanotransduction channel, activates those cells in response to ultrasound stimuli and initiates behavior. Applicants suggest that this method can be broadly used to manipulate cellular functions in vivo. To probe the effects of ultrasound on neuronal function, Applicants chose the nematodeC. elegans, with its small nervous system consisting of just 302 neurons (J. G. White, E. Southgate, J. N. Thomson et al.,Phil. Transact. R. Soc. Lond. B314, 1 (1986)), and strong correlations between individual neurons and robust behaviors (M. de Bono and A. V. Maricq,Annu Rev Neurosci28, 451 (2005); C. I. Bargmann,WormBook,1 (2006); R. O'Hagan and M. Chalfie,Int Rev Neurobiol69, 169 (2006)). Example 1: Imaging Setup Delivers Ultrasound Waves to Animals To investigate the role of ultrasound onC. elegansbehavior, Applicants developed a novel imaging setup (FIG.1A). Low intensity ultrasound was generated from a transducer and focused onto an agar plate where animals were corralled into a small area using a copper solution (FIG.1B). Applicants' setup allowed for the ultrasound wave to be focused to a 1 mm diameter circular area at the agar surface (FIGS.8A-8C). The whole setup was placed in a large tank filled with water to facilitate uniform transduction of the ultrasound wave. Previous studies have shown that at high ultrasound intensities (>2.5 MPa) water vapor bubbles would form spontaneously and collapse rapidly, initiating shockwaves that would compromise the integrity of cell membranes (termed “cavitation”) (C. K. Holland and R. E. Apfel,J Acoust Soc Am88 (5), 2059 (1990); S. Bao, B. D. Thrall, and D. L. Miller,Ultrasound Med Biol23 (6), 953 (1997)). Applicants confirmed these results in Applicants' assay setup and also observed damage to animals at these high ultrasound intensities (data not shown). 
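The pressure regimes discussed in this example can be summarized in a simple parameter check: peak negative pressures above roughly 2.5 MPa risk spontaneous cavitation and tissue damage, whereas the behavioral assays described below use single 10 ms, 2.25 MHz pulses at peak negative pressures of 0.9 MPa or less. The sketch below only restates those quoted values; the function name, labels, and threshold constants are illustrative assumptions, not part of the disclosed apparatus.

# Illustrative sketch only: classify a proposed ultrasound pulse against the
# pressure regimes discussed in this example. Threshold values are taken from
# the text (spontaneous cavitation above ~2.5 MPa; behavioral assays run at
# 0 to 0.9 MPa); the function and parameter names are hypothetical.

CAVITATION_THRESHOLD_MPA = 2.5   # regime of inertial cavitation and membrane damage
ASSAY_MAX_PNP_MPA = 0.9          # upper bound used for behavioral stimulation

def classify_pulse(peak_negative_pressure_mpa: float,
                   frequency_mhz: float = 2.25,
                   duration_ms: float = 10.0) -> str:
    """Return a coarse label for a single ultrasound pulse."""
    if peak_negative_pressure_mpa > CAVITATION_THRESHOLD_MPA:
        return "high intensity: risk of cavitation and tissue damage"
    if peak_negative_pressure_mpa <= ASSAY_MAX_PNP_MPA:
        return "low intensity: regime used for behavioral stimulation"
    return "intermediate intensity: outside the ranges characterized here"

if __name__ == "__main__":
    for pnp in (0.47, 0.9, 1.5, 2.6):
        print(f"{pnp:.2f} MPa -> {classify_pulse(pnp)}")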
Applicants chose to focus on low intensity ultrasound to eliminate these damaging effects and found that at these intensities ultrasound had no effect on animal behavior (FIGS.1D and1E). The entire setup was placed in a large tank filled with water to facilitate uniform transduction of the ultrasound wave. Depending on solution or tissue gas concentrations, high ultrasound peak negative pressures (>2.5 MPa) can create inertial cavitation with the resulting shockwaves compromising the integrity of cell membranes. Consistently, Applicants observed that animals exposed to multiple pulses of high ultrasound pressures were unable to maintain their normal body posture (FIG.12). Therefore, Applicants chose to use low-pressure ultrasound, which does not cause these damaging effects, to stimulate animal behavior. Applicants used data from a previous study to estimate the mechanical deformation of the low intensity ultrasound wave (A. P. Brysev, A. F. Bunkin, R. V. Klopotov et al.,Opt. Spectrosc.93 (2), 282 (2002)). Applicants estimate that at this intensity, the ultrasound wave is likely to pass throughC. eleganscausing a mechanical deformation of 0.005 nm, and hypothesized that this small change is unlikely to influence cellular functions in vivo. This hypothesis is consistent with previous studies, which have shown that mechanical changes of this magnitude do not modify either neurons or non-neurons (S. Ito, H. Kume, K. Naruse et al.,Am J Respir Cell Mol Biol38 (4), 407 (2008); K. Shibasaki, N. Murayama, K. Ono et al.,J Neurosci30 (13), 4601 (2010)). Moreover, Applicants found that a single 10 ms duration ultrasound pulse of 2.25 MHz and peak negative pressures below 0.9 MPa had no effect on animal behavior. The mechanical disturbances of the fluid and tissue in the ultrasound focal zone take the form of compression and expansion deformations as well as bulk tissue distortions caused by acoustic radiation forces, but at low-pressures they were not large enough to influenceC. eleganslocomotion. Previous studies have shown that ultrasound waves can cause temperature changes in the focal zone. Applicants first estimated the temperature increase as a result of ultrasound exposure. In a previous study, a continuous 1.1 MHz ultrasound pulse with a peak negative pressure of 2.6 MPa increased the temperature of the surrounding media at the rate of 35° C./sec. Using these data, Applicants estimated that the temperature increase around the worms on the agar surface to be 0.04° C. for single ultrasound pulse at 0.9 MPa. Moreover, Applicants directly measured the magnitude of temperature change on the agar surface using a miniature thermocouple and found that an ultrasound peak negative pressure of 0.7 MPa caused a temperature increase of less than 0.1° C. This is a temperature stimulus that animals includingC. elegansare unlikely to detect. Together, these results show thatC. elegansis unlikely to respond to the temperature and mechanical changes induced by the low-pressure ultrasound wave. Example 2: Microbubbles Amplify the Mechanical Deformation of the Ultrasound Wave To amplify the ultrasound wave, Applicants included gas-filled microbubbles in Applicants' assay (FIG.1C). Previous studies have shown that the majority of the ultrasound energy propagates through water and soft tissue as a longitudinal wave with alternating compression and rarefaction phases. These two phases create pressures that are alternately higher and lower than the ambient pressure level respectively. 
Applicants designed the microbubbles to respond to the mechanical deformations induced by an ultrasound pulse. Applicants filled the microbubbles with a stabilizing mixture of perfluorohexane and air that allows the compression and rarefaction phases of the ultrasound wave to shrink and expand the microbubbles from one half to four times their original diameters in a process known as stable cavitation. This occurs at the driving frequency of the underlying ultrasound pulse. Applicants found that animals showed a dramatic response to ultrasound when surrounded by microbubbles (FIGS.1D and1F). When the ultrasound wave was focused on the head of a worm, the animal immediately initiated a backward movement (termed “reversal”) followed by a high-angled turn (labeled “omega bend”) (FIGS.1D and13). These behaviors were scored as previously described (FIG.13) (J. M. Gray, J. J. Hill, and C. I. Bargmann,Proc Natl Acad Sci USA102 (9), 3184 (2005)) and quantified as shown (FIGS.1E and1F). The animal's behavioral responses were correlated with the intensity of the ultrasound wave (FIG.1F) and the size of the microbubbles (FIG.14). Applicants suggest that microbubbles (1-3 μm in diameter, mixed size) are likely to resonate with the 2.25 MHz ultrasound pulse causing large mechanical fluctuations around the animal and in turn, reversal behavior. To probe how microbubbles transduce the ultrasound wave and modify animal behavior Applicants analyzed microbubbles labeled with fluorescent DiO (FIG.1C). Applicants found that microbubbles are evenly distributed around the animal and upon ultrasound stimulation some are destroyed, while others fuse and yet others move (FIGS.4A-4C). These results suggest that these fluctuations in microbubbles around the animal are sufficient to initiate reversal behavior. Ultrasound waves have been previously shown to cause an increase in temperature in the focal zone (C. H. Fully, R. G. Holt, and R. A. Roy,Biomedical Engineering, IEEE Transactions on57 (1), 175 (2010)). Using this dataset, Applicants estimate that the low-intensity ultrasound pulse (2.25 MHz) might cause a temperature increase of 0.04° C. on the agar surface, a stimulus that animals includingC. elegansare unlikely to detect (I. Mori, H. Sasakura, and A. Kuhara,Curr Opin Neurobiol17 (6), 712 (2007); D. A. Clark, C. V. Gabel, H. Gabel et al., J Neurosci 27 (23), 6083 (2007)). Taken together, these results suggest that mechanical distortions around the worm transduce the ultrasound stimulus and initiate behavioral changes. Example 3: TRP-4 Stretch Sensitive Ion Channels Sensitize Neurons to Ultrasound Applicants hypothesized that ultrasound is a mechanical stimulus that require specific mechanotransduction channels to transduce the signals in individual neurons. Applicants tested the ability of TRP-4, a pore forming cation-selective mechanotransduction channel (L. Kang, J. Gao, W. R. Schafer et al.,Neuron67 (3), 381 (2010); W. Li, Z. Feng, P. W. Sternberg et al., Nature 440 (7084), 684 (2006)), to transduce this ultrasound induced mechanical stimulus. This channel is specifically expressed in a fewC. elegansneurons, the four CEPs (CEPDL, CEPDR, CEPVL and CEPVR) and the two ADE (ADEL and ADER) dopaminergic neurons and the DVA and DVC interneurons (L. Kang, J. Gao, W. R. Schafer et al.,Neuron67 (3), 381 (2010); W. Li, Z. Feng, P. W. Sternberg et al., Nature 440 (7084), 684 (2006)). TRP-4 is both necessary and sufficient to generate mechanoreceptor currents in CEP neurons. 
Applicants found that animals missing TRP-4 have reduced responses to specific intensities (0.41 and 0.47 MPa peak negative pressure) of ultrasound stimulation, which suggests that this channel is required to generate reversals (FIG.2A). In contrast, trp-4 mutants do not show any significant change in their omega bend behaviors upon ultrasound stimulation (FIGS.5A-5F). At higher intensities, trp-4 mutants have similar responses compared to wildtype, which suggests that there is an alternate pathway that detects ultrasound at these intensities. Collectively, these results suggest that TRP-4 might be activated in response to ultrasound with peak negative pressure levels less than 0.5 MPa and modifies neurons involved in generating small and large reversals. To test whether ultrasound sensitivity could be conferred to additional neurons, Applicants analyzed the behavior of transgenic animals misexpressing TRP-4 in specific chemosensory neurons. Applicants initially misexpressed this channel in ASH, a well-studied polymodal nociceptive neuron (M. A. Hilliard, C. Bergamasco, S. Arbucci et al.,Embo J23 (5), 1101 (2004)), whose activation leads to reversals and omega bends (Z. V. Guo, A. C. Hart, and S. Ramanathan,Nat Methods6 (12), 891 (2009)). Consistently, Applicants found that ASH expression of TRP-4 generated a significant increase in reversals at ultrasound intensity with a peak negative pressure of 0.47 MPa (FIG.2B). Moreover, Applicants found that these ASH::trp-4 transgenics do not show any change in their omega bend responses (FIG.5), confirming that this channel specifically modifies the reversal neural circuit. Next, Applicants tested the effects of TRP-4 misexpression on function and behavior of the AWC sensory neuron. Previous results have implied that AWC activation is correlated with an increase in the animal's ability to generate reversals (S. H. Chalasani, N. Chronis, M. Tsunozaki et al.,Nature450 (7166), 63 (2007)). Applicants found that animals misexpressing TRP-4 in AWC neurons also initiated significantly more large reversals at the same ultrasound intensity of 0.47 MPa peak negative pressures, but not omega bends (FIGS.2C and5). To test whether ultrasound could directly stimulate AWC neurons, Applicants recorded the activity of these neurons in animals expressing the calcium indicator, GCaMP3 (L. Tian, S. A. Hires, T. Mao et al.,Nat Methods6 (12), 875 (2009)). Consistent with Applicants' behavioral data, Applicants found that ultrasound stimulation activated AWC neurons (FIGS.2D-2F). Also, Applicants find that AWC responses are significantly reduced in the absence of microbubbles, which suggests that the ultrasound signals need to be amplified before they can modify neuronal functions (FIGS.6A-6D). Consistent with the behavior data, Applicants observe that AWC neurons expressing TRP-4 show a significant increase in their activity a few seconds (t=12 to t=17 seconds) after the ultrasound stimulus (FIG.2F). Both wild-type AWC neurons and those misexpressing TRP-4 showed a response lasting about 2-3 seconds immediately upon exposure to a single ultrasound pulse in the presence of microbubbles. However, Applicants also observed that AWC neurons misexpressing TRP-4 show a significant increase in their activity starting at 7 seconds after ultrasound exposure (t=12 seconds inFIG.5F) and lasting for at least 5 seconds, which is not observed in wild-type neurons. 
This sustained increase in AWC calcium levels likely represents the activity of TRP-4, which could potentiate calcium entry into the neuron via other calcium channels. Interestingly, large reversals take approximately 10-20 seconds to complete, a time window where Applicants also observe sustained AWC calcium activity in the AWC::trp-4 transgenics. The sustained AWC calcium activity observed in these AWC::trp-4 transgenics is likely correlated with the increased frequency of large reversals generated by these animals after ultrasound stimulation. Taken together, these results show that TRP-4 channels are sensitive to low-pressure ultrasound, and ectopic expression of these channels in sensory neurons causes correlated changes in neuronal activity and behavior. Interestingly, FLP neurons do not respond to ultrasound (FIG.16). Microbubbles are present in all FLP recordings. Example 4: Newly Identified Roles for PVD Sensory and AIY Interneurons in Generating Behavior in the Presence of Microbubbles To test Applicants' approach of analyzing neuronal function by misexpressing TRP-4, Applicants probed the functions of poorly understood PVD neurons (FIG.3A). PVD neurons have extensive dendritic processes that are regularly spaced and non-overlapping and cover most of the animal, excluding the head and the neck (A. Albeg, C. J. Smith, M. Chatzigeorgiou et al., Mol Cell Neurosci 46 (1), 308 (2011)). Applicants find that expressing TRP-4 in PVD neurons leads to a significant decrease in their reversal responses upon ultrasound stimulation (FIG.3B). Applicants hypothesize that PVD neurons suppress reversals and misexpressing TRP-4 channels activates these neurons upon ultrasound stimulation, which in turn suppresses reversals. To test Applicants' hypothesis Applicants monitored PVD neuron activity in response to ultrasound stimulation. Applicants find that PVD neurons are more likely to be activated when the animal is moving backward than when moving forward (FIGS.7A-7C). Also, Applicants find a strong correlation between PVD activity and animal movement. In particular, Applicants find that PVD neurons reach their maximum response when the animal has stopped reversing (FIGS.3C and3D). These results suggest that expressing TRP-4 in PVD neurons activates them upon ultrasound stimulation and causes premature suppression of backward movement leading to fewer reversals. See alsoFIG.17. Applicants' studies show thatC. elegansneural circuits can be probed by combining ultrasound stimulation with microbubbles that amplify the mechanical deformations. Specifically, Applicants find that upon activation ASH and AWC sensory neurons increase in reversals, while activating PVD neurons suppresses reversals (FIG.3E). Interestingly, Applicants identify that persistent AWC neural activity might be required to drive reversal behavior providing a correlation between a distinct AWC neuronal activity pattern and whole animal behavior. Also, Applicants define a novel role for PVD neurons in suppressing reversal behavior. Taken together, these results and other studies (D. Tobin, D. Madsen, A. Kahn-Kirby et al.,Neuron35 (2), 307 (2002)) show that TRP channels can be used to manipulate neuronal functions and thus provide insight into how neural circuits transform environmental changes into behavior. Applicants then tested whether this approach can manipulate the function of an interneuron, whose processes do not contact the external cuticle of the animal. 
Applicants misexpressed TRP-4 in AIY interneurons, which are at least 25 μm from the cuticle, and analyzed the behavior of these animals upon ultrasound stimulation. Optogenetic studies have previously shown that activating AIY interneurons reduces turns. In contrast, Applicants find that AIY::trp-4 transgenics are significantly more likely to initiate high-angled omega bends upon ultrasound stimulation (two independent transgenics). It is possible that expressing TRP-4 in AIY neurons has altered that neuron's function, leading to increased turns. However, animals with genetically altered AIY function have been shown to have increased turns in a local search assay. Applicants found that these AIY::trp-4 transgenics did not show any defects in local search (FIG. 18), confirming that the AIY neurons were not altered in these animals. These data suggest that AIY can initiate different behaviors based on the type of stimulation, ultrasound or light. To confirm whether the ultrasound stimulus activates AIY interneurons, Applicants used calcium imaging. AIY neural activity is typically measured from a bulb in the AIY neurite. Consistent with previous observations, Applicants found that AIY is a noisy neuron with a number of transients during recordings (FIG. 11). Applicants collected a number of AIY recordings from wild-type animals and defined the relevant transient. Applicants counted all neurons that responded within 5.5 seconds after the ultrasound pulse as responders. Using this criterion, AIY neurons in wild-type animals did not show a significant response to the ultrasound stimulus (4/29) (FIGS. 11A and 11B). In contrast, Applicants observed that a significant number of AIY neurons in AIY::trp-4 transgenics (11/28 animals) had a positive response (FIGS. 11E and 11F). This increased proportion of AIY responders in the AIY::trp-4 transgenics suggests that the ultrasound stimulus activates AIY interneurons. These results show that mechanical deformations from the ultrasound-microbubble interaction can penetrate at least 25 μm into the worm and influence the function of AIY interneurons. Moreover, Applicants find that misexpressing TRP-4 can influence both reversal and omega bend neural circuitry, suggesting that the sonogenetic approach is broadly applicable for manipulating circuit activity. Further, these results show that AIY interneurons likely have at least three activity states, with one suppressing turns, one promoting forward turns (as revealed by optogenetic stimulation) and one increasing omega turns (as revealed by ultrasound stimulation). These studies validate the approach of using sonogenetics to reveal novel roles for both PVD and AIY neurons in modifying turn behavior. These studies show that C. elegans neural circuits can be probed by combining low-pressure ultrasound stimulation with microbubbles that amplify the mechanical deformations. Specifically, Applicants found that C. elegans are insensitive to low-pressure ultrasound but respond when surrounded by microbubbles. Applicants found that animals missing the TRP-4 mechanosensitive ion channel have significantly reduced sensitivity to the ultrasound-microbubble stimulation, indicating that mechanosensitive ion channels play an important role in the mechanism of ultrasound stimulation. Applicants also found that misexpressing the TRP-4 mechanosensitive ion channel in specific neurons modifies their neural activity upon ultrasound stimulation, resulting in altered animal behaviors.
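The text above reports responder counts (4/29 wild-type animals versus 11/28 AIY::trp-4 transgenics) without naming a statistical test. Purely as an illustration of how two such small-sample proportions can be compared, the sketch below applies Fisher's exact test from SciPy; the choice of test and the variable names are assumptions, not a description of the analysis actually performed.

# Illustrative sketch: comparing responder proportions such as the 4/29
# wild-type and 11/28 AIY::trp-4 counts reported above. The text does not
# state which statistical test was used; Fisher's exact test is shown here
# only as one conventional way to compare two small-sample proportions.
from scipy.stats import fisher_exact

wt_responders, wt_total = 4, 29
tg_responders, tg_total = 11, 28

table = [
    [tg_responders, tg_total - tg_responders],   # AIY::trp-4 transgenics
    [wt_responders, wt_total - wt_responders],   # wild-type controls
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"AIY::trp-4 responders: {tg_responders}/{tg_total} ({tg_responders / tg_total:.0%})")
print(f"wild-type responders:  {wt_responders}/{wt_total} ({wt_responders / wt_total:.0%})")
print(f"Fisher's exact test: odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")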
Specifically, misexpressing TRP-4 in ASH and AWC sensory neurons results in an increase in large reversals, while activating PVD neurons suppresses this behavior. Applicants also defined novel roles for PVD neurons in suppressing reversal behavior and AIY neurons in stimulating omega bend behavior. These novel methods provide new insights into the neural activity patterns that drive whole-animal behavior. Persistent AWC neural activity might drive reversal behavior, providing a correlation between a distinct AWC neuronal activity pattern and whole-animal behavior. Ultrasound stimulation may activate neurons with different kinetics than what has been seen using optogenetics. For example, activating AIY interneurons using light leads to an increase in forward turns, while using low-pressure ultrasound increases omega bend frequency. These studies indicate an alternative role for AIY in promoting omega bends. The stimulation of AIY interneurons demonstrates that this ultrasound technique can also be applied to deep internal neurons that do not contact the skin of the worm. Taken together, these results and other studies show that TRP channels can be used to manipulate neuronal functions and thus provide insight into how neural circuits transform environmental changes into precise behaviors. In order to target smaller groups of neurons, the resolution of the ultrasound focal zone can be made smaller than the 1 mm diameter. Frequencies above 2.25 MHz can produce sub-millimeter focal zone spot sizes. Higher frequency ultrasound waves with their smaller focal zones are better suited to targets that are closer to the body surface, as these waves do not penetrate tissues as well. One of the advantages of ultrasound is that small focal zones can be maintained noninvasively even in deep brain tissue. Outside the focal zone the peak negative pressures are significantly lower and are unlikely to result in neuron activation. This was seen on the agar plates, where only worms that were in the focal zone responded to the ultrasound and nearby worms that were outside the focal zone did not. Another advantage of ultrasound is that this focal zone can be moved arbitrarily within the tissue to stimulate different regions without any invasive procedures. With an electronically steerable ultrasound beam, multiple different targets can be noninvasively manipulated either simultaneously or in rapid succession. Moreover, the genetic targeting of the stretch sensitive ion channels to individual neurons allows for targeting well below the resolution of the ultrasound focal zone. The use of ultrasound as a non-invasive neuronal activator can be broadly applied to decode neural circuits in larger vertebrate brains with opaque skin and intact skulls. Ultrasound waves with peak negative pressures of <1 MPa have been shown to penetrate through skull and brain tissue with very little impedance or tissue damage. These results show that low-pressure ultrasound (with peak negative pressures of 0.4-0.6 MPa) specifically activates neurons expressing the TRP-4 channel. Moreover, TRP-4 channels do not have mammalian homologs; therefore, it is unlikely that expressing these channels in the mammalian brain would produce deleterious effects. This suggests that neurons in diverse model organisms misexpressing this channel can be activated by ultrasound stimulation, allowing scientists to probe their functions in influencing animal behavior.
Additionally, other mechanosensitive channels can be explored that may be more sensitive to mechanical deformations than TRP-4. Of particular interest are the bacterial MscL and MscS channels, which have different sensitivities to membrane stretch and are selective for different ions. Moreover, TRP-4 and other channels may be mutated in and around the pore region in order to change their ion selectivity as well as their sensitivity to mechanical stretch to broaden the utility of this method. Furthermore, if low-pressure ultrasound stimulation by itself does not activate TRP-4-expressing neurons, the mechanical signals can be amplified by gas-filled microbubbles. Perfluorohexane microbubbles are well-established for use as ultrasound contrast agents in vivo and can be administered intravenously to circulate throughout the vertebrate body, including the brain. They can remain active for up to 60 minutes, providing a time window in which they could be used safely to amplify the ultrasound stimulus and manipulate neural activity. Microbubbles have been shown to undergo inertial cavitation when exposed to ultrasound with peak negative pressures of 0.58 MPa and higher. Using ultrasound pressure levels lower than this will prevent damage to the brain from the microbubble-ultrasound interaction. Moreover, Applicants used a third of the number of microbubbles that has previously been used to successfully image the mouse brain, showing that the required microbubble dose would not be prohibitive for in vivo administration. These experiments show that in the presence of microbubbles the low-pressure ultrasound stimulated the deep AIY interneurons expressing TRP-4. This result enables Applicants to estimate the distances at which the mechanical deformations from the ultrasound-microbubble interaction can effectively penetrate into brain tissue from the vasculature. The C. elegans cuticle is 0.5 μm thick and the AIY interneurons are 25 μm from the cuticle, indicating that the mechanical deformations traveled at least 25.5 μm into the worm. In contrast, the mammalian blood-brain barrier is 0.2 μm thick and the average distance of a neuron from a capillary is less than 20 μm. These distances are well within the range of the sonogenetic approach. With the data presented herein, the invention provides a novel, non-invasive approach to activate genetically targeted neurons using low-pressure ultrasound stimulation. The results described herein above were obtained using the following materials and methods.
Behavioral Assay
Well-fed young adults were placed on an empty NGM agar plate and corralled into a small area using a filter paper soaked in copper solution (200 mM). A solution (15 μl) of microbubbles at a density of 3.8×10⁷/ml was added to the plate with worms. The worms were allowed to crawl around for 10 minutes before being stimulated by ultrasound. An animal was moved into the fixed ultrasound focal zone and stimulated with one pulse, and the resulting reversal and omega bend response was recorded. Reversals with fewer than two head bends were identified as small, while those with more than two were counted as large. High-angled turns that lead to a significant change in the direction of an animal's movement were identified as omega bends (FIG. 9) (J. M. Gray, J. J. Hill, and C. I. Bargmann, Proc Natl Acad Sci USA 102 (9), 3184 (2005)). Data were analyzed using SPSS software v22 (IBM, NY).
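The scoring rules in the Behavioral Assay above lend themselves to a small classifier: reversals are binned by the number of head bends, and high-angled turns are scored as omega bends. The sketch below is a hypothetical restatement of those rules; the dataclass, the 135° turn-angle cutoff, and the handling of exactly two head bends (which the text leaves unstated) are assumptions.

# Minimal sketch of the behavioral scoring rules described above. Names and
# thresholds marked below are assumptions; actual scoring was performed from
# video as described in the cited reference.
from dataclasses import dataclass

@dataclass
class StimulationEvent:
    reversed_after_pulse: bool   # did the animal initiate backward movement?
    head_bends: int              # head bends counted during the reversal
    max_turn_angle_deg: float    # largest turn angle observed after the reversal

OMEGA_ANGLE_DEG = 135.0  # assumed cutoff for a "high-angled" turn (not stated in the text)

def score_event(event):
    """Apply the scoring rules described in the Behavioral Assay section."""
    if not event.reversed_after_pulse:
        reversal = "none"
    elif event.head_bends < 2:
        reversal = "small"   # fewer than two head bends
    else:
        # More than two head bends are scored as large; treating exactly two
        # head bends as large is an assumption, since the text does not say.
        reversal = "large"
    omega_bend = event.max_turn_angle_deg >= OMEGA_ANGLE_DEG
    return {"reversal": reversal, "omega_bend": omega_bend}

if __name__ == "__main__":
    print(score_event(StimulationEvent(True, 3, 150.0)))  # large reversal with omega bend
    print(score_event(StimulationEvent(True, 1, 40.0)))   # small reversal, no omega bend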
Imaging Transgenic animals expressing GCaMP in specific neurons were corralled into a small area by filter paper soaked in copper solution (as described above). The acetylcholine agonist and paralytic, tetramisole (J. A. Lewis, C. H. Wu, J. H. Levine et al., Neuroscience 5 (6), 967 (1980)), was used at 1.3 mM to paralyze the animals to facilitate recording neural activity. These animals were surrounded by a solution of microbubbles and stimulated using ultrasound intensities as described. Fluorescence was recorded at 10 frames/second using an EMCCD camera (Photometrics, Quant-EM) and resulting movies were analyzed using Metamorph software (Molecular Devices) as described (S. H. Chalasani, N. Chronis, M. Tsunozaki et al.,Nature450 (7166), 63 (2007)). Briefly, a fluorescence baseline was calculated using a 3-second window from t=1 to t=4 seconds. The ratio of change in fluorescence to baseline fluorescence was plotted in all graphs using custom MATLAB scripts (S. H. Chalasani, N. Chronis, M. Tsunozaki et al.,Nature450 (7166), 63 (2007)). For imaging PVD neurons, the concentration of the paralytic was reduced to 1 mM, which allowed these animals greater movement. Their motion along with the corresponding fluorescent intensity changes was captured and analyzed using Metamorph software. Microbubble Synthesis Microbubbles were made using a probe sonication technique as described (C. E. Schutt, S. D. Ibsen, M. J. Benchimol et al.,Small(2014)). The stabilizing lipid monolayer consisted of distearoyl phosphatidylcholine (DSPC, Avanti Polar Lipids Inc., Alabaster, Ala.), distearoyl phosphatidylethanolamine-methyl polyethylene glycol (mPEG-DSPE 5 k, Layson Bio Inc., Arab, Ala.) and DiO (Biotium Inc., CA) in 85:13:2 molar ratio. The gas core of the microbubble consisted of perfluorohexane (Sigma-Aldrich, St. Louis, Mo.) and air mixture designed to attain stability under atmospheric pressure. Microbubbles were fractionated based on size by their settling time (FIGS.10A-10C). Applicants chose a mixed size of microbubbles to maintain uniformity across all the experiments. The microbubbles were shown to be stable on agar plates sealed with parafilm for up to 24 hours. Molecular Biology and Transgenic Animals AllC. elegansstrains were grown under standard conditions as described (S. Brenner,Genetics77 (1), 71 (1974)). Cell-selective expression of TRP-4 was achieved by driving the full-length cDNA under odr-3 (AWC), sra-6 (ASH) and des-2 (PVD and FLP) promoters. Germline transformations were obtained using the methods previously described (C. C. Mello, J. M. Kramer, D. Stinchcomb et al.,Embo J10 (12), 3959 (1991)). Complete information for all strains is listed in Table 1. Temperature Estimation Ultrasound stimulation in combination with microbubbles has been previously shown to cause temperature changes in the surrounding media (D. Razansky, P. D. Einziger, and D. R. Adam,IEEE Trans Ultrason Ferroelectr Freq Control53 (1), 137 (2006)). The authors experimentally found a temperature increase of 14.11° C./sec using a continuous 1.1 MHz ultrasound pulse with a peak negative pressure of 2.8 MPa (D. Razansky, P. D. Einziger, and D. R. Adam,IEEE Trans Ultrason Ferroelectr Freq Control53 (1), 137 (2006)). In Applicants' assays, Applicants used pulses of 10 ms and a maximum peak ultrasound pressure at 0.8 MPa. Applicants assumed a linear relationship between energy deposition and peak ultrasound pressure and calculated the temperature increase around the worms on the agar surface to be 0.04° C. 
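The temperature estimate at the end of the preceding paragraph can be reproduced directly from the quoted numbers, under the same stated assumption of a linear relationship between energy deposition and peak ultrasound pressure. The sketch below is a worked restatement of that arithmetic; the function and constant names are illustrative.

# Worked example of the temperature estimate described above: the reference
# heating rate (14.11 deg C/sec for a continuous 1.1 MHz pulse at 2.8 MPa) is
# scaled linearly with peak pressure and multiplied by the pulse duration.

REFERENCE_HEATING_RATE_C_PER_S = 14.11   # continuous 1.1 MHz exposure (cited study)
REFERENCE_PEAK_PRESSURE_MPA = 2.8        # peak negative pressure in that study

def estimate_temperature_rise(pulse_duration_s: float,
                              peak_pressure_mpa: float) -> float:
    """Estimate the temperature rise (deg C) for a single short pulse."""
    scaled_rate = REFERENCE_HEATING_RATE_C_PER_S * (
        peak_pressure_mpa / REFERENCE_PEAK_PRESSURE_MPA)
    return scaled_rate * pulse_duration_s

if __name__ == "__main__":
    # 10 ms pulse at a maximum peak pressure of 0.8 MPa, as used in the assays
    delta_t = estimate_temperature_rise(pulse_duration_s=0.010,
                                        peak_pressure_mpa=0.8)
    print(f"Estimated temperature rise: {delta_t:.2f} deg C")   # ~0.04 deg C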
Ultrasound and Microscopy Setup
A schematic of the ultrasound and microscopy setup is shown in FIG. 1A and previously described (S. Ibsen, M. Benchimol, and S. Esener, Ultrasonics 53 (1), 178 (2013)). A 10 ms, 2.25 MHz sine wave ultrasound pulse was generated with a submersible 2.25 MHz transducer (V305-Su, Panametrics, Waltham, Mass.) using a waterproof connector cable (BCU-58-6W, Panametrics, Waltham, Mass.). The resulting sound field was quantified using a needle hydrophone (HNP-0400, Onda Corporation, Sunnyvale, Calif.). An arbitrary waveform generator (PCI5412, National Instruments, Austin, Tex.) controlled by a custom-designed program (LabVIEW 8.2, National Instruments, Austin, Tex.) was used to create the desired ultrasound pulse. The peak negative pressure of the ultrasound pulse was adjusted from 0 to 0.9 MPa using a 300 W amplifier (VTC2057574, Vox Technologies, Richardson, Tex.). Ultrasound attenuation through the plastic and agar was found to be minimal. White light illumination was achieved by reflecting light from an external light source up at the petri dish using a mirror mounted at 45°. Behavior was captured using a high-speed camera (FASTCAM, Photron, San Diego, Calif.). Fluorescent images were collected using a Nikon 1-FL EPI-fluorescence attachment on the same setup as described. GCaMP imaging was performed using a 40× objective and the images were captured using a Quant-EM 512C camera (Photometrics, Tucson, Ariz.). The petri dish was held at the air-water interface with a three-prong clamp mounted to an XYZ micromanipulator stage, allowing the dish to be scanned in the XY plane while maintaining a constant Z distance between the objective and the ultrasound transducer. This alignment positioned the agar surface in the focal zone of the ultrasound wave.

TABLE 1. List of all strains and their genotypes

Strain | Genotype | Description
N2 | wild-type | WT
VC1141 | trp-4(ok1605) | trp-4 mutant
IV133 | ueEx71 [sra-6::trp-4, elt-2::gfp] | ASH expression of trp-4 in wildtype background
IV157 | ueEx85 [odr-3::trp-4, elt-2::gfp] | AWC expression of trp-4 in wildtype background
CX10536 | kyEx2595 [str-2::GCaMP2.2b, unc-122::gfp] | AWC imaging line in wildtype background
IV344 | ueEx219 [odr-3::trp-4, unc-122::rfp], kyEx2595 [str-2::GCaMP2.2b, unc-122::gfp] | AWC imaging line with trp-4 expressed in AWC
IV242 | ueEx150 [des-2::trp-4; elt-2::gfp #3] | PVD expression of trp-4 in wildtype background
IV243 | ueEx151 [des-2::trp-4; elt-2::gfp #4] | PVD expression of trp-4 in wildtype background
IV219 | ueEx134 [des-2::GCaMP3, unc-122::rfp] | PVD and FLP imaging line in wildtype background
IV494 | ueEx307 [ttx-3::trp-4; elt-2::gfp #3] | AIY expression of trp-4 in wildtype background
IV495 | ueEx308 [ttx-3::trp-4; elt-2::gfp #4] | AIY expression of trp-4 in wildtype background
CX8554 | kyEx1489 [ttx-3::GCaMP1.0, unc-122::gfp] | AIY imaging line in wildtype background
IV646 | kyEx1489 [ttx-3::GCaMP1.0, unc-122::gfp]; ueEx440 [ttx-3::trp-4, unc-122::rfp] | AIY imaging line with trp-4 expressed in AIY
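As a companion to the Imaging section above, the sketch below restates the ΔF/F quantification: recordings at 10 frames per second, a baseline taken from the 3-second window between t = 1 s and t = 4 s, and traces expressed as the change in fluorescence over that baseline. The synthetic trace and function names are illustrative assumptions; the original analysis used Metamorph and custom MATLAB scripts as described.

# Minimal sketch of the calcium-imaging quantification described in the
# Imaging section: a fixed early baseline window and a deltaF/F trace.
import numpy as np

FRAME_RATE_HZ = 10.0
BASELINE_START_S, BASELINE_END_S = 1.0, 4.0

def delta_f_over_f(trace: np.ndarray) -> np.ndarray:
    """Convert a raw fluorescence trace into deltaF/F using the t = 1-4 s baseline."""
    start = int(BASELINE_START_S * FRAME_RATE_HZ)
    end = int(BASELINE_END_S * FRAME_RATE_HZ)
    baseline = trace[start:end].mean()
    return (trace - baseline) / baseline

if __name__ == "__main__":
    # Synthetic 30 s trace with a transient a few seconds after a stimulus (illustrative only)
    t = np.arange(0, 30, 1.0 / FRAME_RATE_HZ)
    raw = 100.0 + 20.0 * np.exp(-((t - 12.0) ** 2) / 2.0)
    dff = delta_f_over_f(raw)
    print(f"peak deltaF/F = {dff.max():.2f} at t = {t[dff.argmax()]:.1f} s")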
48,269
11857810
DETAILED DESCRIPTION The invention provides a system and methods for treating tissue using electromagnetic radiation and microablation techniques. Such a system and microablation techniques form microchannels through a surface of tissue to treat subsurface tissue for any of a number of skin conditions and pathologies. The tissue ablation system according to the invention includes a laser unit and a laser emitting device for ablating microchannels in tissue, such as the system disclosed in assignee's co-pending patent application Ser. No. 11/730,017, filed Mar. 29, 2007 and entitled “System and Method of Microablation of Tissue” (Patent Publication No. 2008/0071258), the entirety of which is incorporated herein by reference. The laser emitting device includes a scanning device configured with a number of mirrors or alternatively a single mirror, or other reflective surfaces, disposed in an arrangement and at an orientation relative to one another such that the laser emitting device emits a laser beam in a given pattern of rays or beams. Software controls the scanning device to emit laser light in a desired beam pattern and/or beam profile to achieve specific treatment protocols. These types of scanning devices are disclosed in assignee's U.S. Pat. Nos. 5,743,902, 5,957,915, and 6,328,733, the entireties of which are incorporated herein by reference. Alternatively, the scanning device may be or may include a laser beam splitter, which is constructed and arranged to deliver a given pattern of treatment radiation to produce multiple treatment areas or “spots.” Such treatment “spots” create multiple microchannels in subsurface tissue that may be distributed in a pattern substantially throughout a tissue treatment area. For instance, using a laser beam splitter, ablation radiation may be varied to achieve a certain fractional pattern of spots along a treatment area to create microchannels having certain parameters, such as certain depths and diameters. The beam splitter may include a multi-lens plate having a plurality of lenses. Some lenses may be configured to focus ablation radiation more than other lenses, such that, some lenses sufficiently focus ablation radiation to penetrate the surface of tissue, while other lenses do not. The plurality of lenses may include lenses having varying size and focal length. The plurality of lenses may include a mechanism, e.g., an array of controllable filters or shutters, which may open or close the optical path, to or of, any single lens. The multi-lens plate thereby may create any fractional pattern of treatment macro-spots or lines that are drawn or created using any subset of lenses of the multi-lens plate. The invention is not limited to scanning laser beam splitters and envisions that other sophisticated stationary beam splitters may achieve the scanning function disclosed herein. For purposes of disclosing the invention, the term “scanner” or “scanning device” is used to refer to a scanning device in the laser emitting device as described with reference toFIG.1and to laser beam splitters, whether such beam splitters are stationary or portable. In lieu of the scanning devices described above, a semiconductor device named DLP® and manufactured by Texas Instruments may be used in coordination with the laser ofFIG.1. With a laser unit2, shown inFIG.1, a DLP® semiconductor may be used to direct laser light to one or more of the hinged-mounted microscopic mirrors and then onto the human skin. 
DLP® is described in an article “How DLP Technology Works” and can be found at: www.dlp.com/technology/how-dlp-works/default.aspx. Generally, the laser emitting device and the scanning device apply a laser beam to a tissue treatment area with a given emitted beam pattern such that treatment areas or “spots” and the resulting multiple microchannels are created with required or desired dimensions and are distributed throughout subsurface tissue in a required or desired pattern. The scanning device uses software designed and configured to change and to control treatment spots with respect to spot pattern, spot pattern size, spot size, spot shape, spot densities, and/or spot or ablated microchannel depth and pattern/sequence vs. randomized. In one configuration according to the invention, a laser unit with a laser emitting device includes a scanning device and software to produce and emit a laser beam during scanning that creates multiple spots and microchannels in a randomized sequence. As the scanning device moves across a treatment area, the scanning device applies a laser beam as randomized treatment spots. Movement of the scanning device controls the distribution and density of the randomized treatment spots area across the treatment area. The distribution and density of the randomized treatment spots is also controlled by the number of repetitions of scanning across a given treatment area and the extent of scanning overlap in the treatment area. In another configuration of the invention, a laser unit and a laser emitting device includes a scanning device and software to produce and emit a laser beam during scanning that creates multiple spots in a predetermined fractional pattern to thereby create microchannels along a tissue treatment area. The scanning device and software according to the invention thereby enable controlled and intuitive treatment of tissue with more or less distribution and density of treatment spots along specific areas of a total treatment area. The scanning device and software thereby permit greater flexibility and control of microablative techniques. Referring toFIG.1, in one aspect, the invention provides a system for performing microscopic ablation or partial microablation of tissue to form one or more microchannels6through a surface of tissue to effect treatment within subsurface tissue. For instance, in skin tissue, proteins such as collagen reside in the dermal layer of the skin. Microchannels6described below may be used to target and alter collagen fibers within the skin dermis as an effective treatment of, for instance, wrinkles of the skin or cellulite. In another instance, microchannels6described below may be used to target and thermally treat portions of the skin dermis to coagulation at certain depths to thereby effectively treat undesirable skin pigmentation or lesions. Alternatively, microchannels6may create a passage through which targeted tissues may be treated, and/or through which material(s) may be extracted or material(s), such as medication, may be delivered to targeted tissues. Also, microchannels6may create a passage through targeted tissues through which a second laser beam having the same or different characteristics from beams forming such microchannels6may be supplied. In some embodiments of the invention, microchannels6may produce partial lateral denaturation of proteins, e.g., within walls and/or along bottoms of microchannels. 
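The randomized treatment described above, in which the distribution and density of treatment spots are controlled by scanner movement and by the number of overlapping scanning passes over a treatment area, can be illustrated with a short sketch. The function below simply draws uniformly random spot centers over a rectangular area for a chosen number of passes; the names and the spots-per-pass figure are assumptions made only for illustration, not the disclosed scanning software.

# Illustrative sketch (not the disclosed control software): a randomized
# pattern of treatment spots over a rectangular treatment area, where the
# number of scanning passes controls the overall spot density.
import random

def randomized_spot_pattern(width_mm, height_mm, spots_per_pass, passes, seed=None):
    """Return (x, y) spot centers in mm; repeated passes increase spot density."""
    rng = random.Random(seed)
    spots = []
    for _ in range(passes):
        for _ in range(spots_per_pass):
            spots.append((rng.uniform(0.0, width_mm), rng.uniform(0.0, height_mm)))
    return spots

if __name__ == "__main__":
    pattern = randomized_spot_pattern(width_mm=10.0, height_mm=10.0,
                                      spots_per_pass=50, passes=3, seed=1)
    density = len(pattern) / (10.0 * 10.0)
    print(f"{len(pattern)} spots placed, ~{density:.1f} spots per mm^2")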
The tissue ablation system1includes a laser unit2and a laser emitting device3for ablating one or more microchannels6into tissue5for treatment. A microchannel6may include a hole, column, well, or the like created by ablating tissue5with a laser beam4which the laser emitting device3supplies. The laser emitting device3includes a scanning device30for emitting ablation radiation in a given fractional pattern of treatment “spots.” As used to disclose the invention, treatment “spot” refers to an ablated area created by laser radiation and/or a microchannel6that results from such ablation. The laser unit2may further include a controller12programmed and configured to control the laser emitting device3. The laser unit2may also include an input interface13capable of receiving input parameters from a user of the system1. The controller12may provide the laser emitting device3with a command, via one or more signals14to the laser unit2, for applying a pulse or a series of pulses to tissue5for treatment. The system1illustrated inFIG.1is a typical configuration and arrangement of a CO2laser system in which a CO2laser is included in the laser unit2, and an arm or optic fiber15delivers a laser beam4to the laser emitting device3. Alternatively, the system1may include a YAG or Erbium laser system that includes an Erbium laser that may be housed within the scanner30or a hand piece. Other laser systems with the power to form microchannels may also be utilized. With further reference toFIG.1, applying laser radiation to tissue with the laser unit2creates one or more microchannels6in subsurface tissue and may also cause tissue surrounding the microchannels6to dissipate heat resulting from the heating and evaporating of tissue that creates the microchannels6. As a result, a thermally-affected or residual heating zone7may form in surrounding walls and/or bottoms of the microchannels6. The residual heating zone7is generally indicative of damaged tissue and tissue necrosis, or, in particular, cell death. As used to disclose the invention, “damaged” means inducing cell death in one or more regions of the dermal tissue of interest, or stimulating the release of cytokines, heat shock proteins, and other wound healing factors without stimulating necrotic cell death. In addition, treatment spots or microchannels6may include exclusively one type of microchannel6or a combination of different types of microchannels6. For instance, formation of a combination of different types of microchannels6may include a first pattern of non-invasive, superficial microchannels6that do not have ablative effects, but only coagulate tissue, and a second pattern of invasive microchannels6that have ablative effects. Different types of microchannels6may be created in subsurface tissue using multiple lasers that apply laser radiation at different wavelengths in order to achieve different types of invasive and non-invasive, microchannels6. Multiple lasers may be incorporated into a common optical axis and may share the same delivery mechanism(s). Referring toFIG.2and with further reference toFIG.1, various microchannels6A,6B and6C are shown that are characterized by certain parameters including, but not limited to, microchannel diameter D and depth d. The energy and propagation characteristics of the laser beam applied to tissue5help to control the diameter D and depth d of the resulting microchannels6A,6B and6C. 
Such energy may be delivered by a pulsed laser or a continuous wave laser, and its propagation characteristics may include, but are not limited to, selected wavelength, power, and laser beam profile. Laser beam profile characteristics may include, but are not limited to, pulse width, pulse duration, pulse frequency, spot size and fluence. Further, volumes and profiles of residual heating zones 7 surrounding ablated areas are due to laser beam characteristics including, but not limited to, selected wavelength, individual pulse energy and fluence, energy of defined sequences of pulses, pulse duration, power distribution, and laser spot shape and size. As shown in FIG. 2, microchannels 6A, 6B and 6C and residual heating zones 7 may vary within a single treatment session, such that, more than one type of treatment may be applied to a given tissue treatment area. For instance, a given laser beam profile may produce superficial treatment spots and microchannels 6B, or may produce deep, more invasive treatment spots and microchannels 6A. Another given laser beam profile may produce superficial and comparatively large, e.g., about 1.3 mm, macro treatment spots that create superficial and relatively wide microchannels 6C. Superficial microchannels 6B and 6C typically target comparatively superficial conditions and pathologies including, for instance, skin pigmentations, pigmented lesions and the like, while comparatively deep microchannels 6A typically target tissue collagen and stimulate cell growth. Combining deep and superficial treatment spots that vary with respect to spot size (diameter), spot depth, spot shape, spot density, and/or fractional pattern enables a more dynamic treatment protocol than may be achieved with a single type of microablative treatment. Further, microchannels 6A, 6B and 6C may be created by applying laser radiation according to a random scanning sequence. Random scanning sequences may be achieved with software algorithms that configure sequential laser pulses, such that, one or more adjacent or subsequent laser pulses may be applied at a spot farthest from the spot of a prior laser pulse to define a predetermined fractional pattern of treatment spots. Sequencing of adjacent or sequential laser pulses helps allow treated tissue to cool between laser pulses. As mentioned, the laser system 1 and/or laser unit 2 may employ software to configure laser beam profiles to deliver radiation to treatment areas in predetermined fractional spot patterns to create microchannels having specific parameters, as described above, to treat particular skin conditions and pathologies.
Macro-Spots and Microchannels
Referring to FIGS. 3A and 3B, in another aspect, the invention provides a method of tissue microablation that may employ the system 1 and/or laser unit 2 described above with reference to FIGS. 1 and 2, including a CO2 laser, to scan tissue treatment areas with ablative radiation to create comparatively large treatment spots or "macro-spots." Such macro-spots create shallow and relatively wide microchannels having configurations that are advantageous for scanning large tissue treatment areas. In this configuration of the system 1 and/or the laser unit 2, the CO2 laser generates laser beams having an energy distribution or intensity approximating a particular beam profile to create multiple predetermined macro-spots 42 and 44 within a given tissue treatment area 40.
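The random scanning sequences described above, in which each subsequent laser pulse may be applied at the spot farthest from the spot of the prior pulse so that treated tissue can cool between pulses, can be illustrated with a short sketch. The greedy ordering and the function names below are assumptions made only for illustration; they are not the disclosed control software.

# Minimal sketch of farthest-from-previous-spot pulse sequencing: given a set
# of planned treatment spots, reorder them so each pulse lands as far as
# possible from the immediately preceding pulse.
import math

def farthest_from_previous_order(spots, start_index=0):
    """Greedily reorder (x, y) spots so successive pulses are maximally separated."""
    remaining = list(spots)
    order = [remaining.pop(start_index)]
    while remaining:
        last = order[-1]
        idx = max(range(len(remaining)),
                  key=lambda i: math.dist(last, remaining[i]))
        order.append(remaining.pop(idx))
    return order

if __name__ == "__main__":
    grid = [(x, y) for x in range(3) for y in range(3)]   # simple 3 x 3 spot pattern (mm)
    for i, spot in enumerate(farthest_from_previous_order(grid), 1):
        print(f"pulse {i}: {spot}")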
As shown inFIGS.3A and3B, a single macro-spot42and44results from scanning a CO2laser beam on a focal plane along the treatment area40in a circular or spiral scan pattern to create or draw a macro-spot42and44with a spiral- or coil-shaped pattern, referred to in this disclosure as a “snail-shaped” pattern. In a preferred embodiment of the invention, the CO2laser includes a beam diameter of about 120 um and operates in a continuous wave mode, irradiating a continuous scan line in a circular or spiral pattern to create or draw the snail-shaped pattern of the macro treatment spots42and44. Referring toFIG.4and with further reference toFIGS.3A and3B, macro-spots42and44are large treatment spots relative to the micro treatment spots46shown inFIG.4and the microchannels6A,6B, and6C shown inFIG.2. Such micro-spots and corresponding microchannels result from scanning treatment areas with a laser in a pulsed mode that creates, with single or multiple pulses, single micro-spots and produces arrays of separate microchannels having potentially any of the general configurations illustrated inFIG.2. The 120 um CO2laser may scan macro-spots42and44according to the invention with diameters of from about 200 um to about 2 mm, and preferably from about 700 um to about 1.4 mm. The system1and/or the laser unit2according to the invention may be configured to readily and quickly switch between a pulsed mode and a continuous mode of operation. Therefore, while drawing any continuous scan lines to create macro-spots, the system1and/or the laser unit2according to the invention can create any pattern of separate micro-spots36with any microchannel characteristics along the scan lines, as shown inFIG.3C, or between the scan lines and/or between the macro-spots42and44, as shown inFIG.3D. Referring toFIGS.5A and5Band with further reference toFIGS.3A and3B, where the method according to the invention operates the CO2laser in continuous wave mode, the characteristics of the laser beam profile applied to treatment areas to scan macro-spots42and44may be controlled and varied before and/or during scanning to affect the energy levels and fluence applied along the spiral scan line that creates the snail-shaped macro-spot42and44. Applying a particular beam profile in a continuous wave mode along the scan line can thereby result in relatively continuous or varying energy levels and fluence throughout the snail-shaped pattern. As a result of the controlled distribution of energy levels and fluence throughout the snail-shaped macro-spot42and44, the resulting microchannel configurations may be controlled and may be varied depending on the treatment protocol and/or condition or pathology being treated.FIG.5Ais a top view of the snail-shaped pattern of a macro-spot42and44that illustrates higher fluence52applied at approximately about or along a center of the treatment spot42and44in comparison to fluence applied along marginal segments53and the periphery53of the snail-shaped pattern. 
Higher fluence segments 52 of the scan pattern would create deeper ablated portions within the resulting microchannel relative to those resulting from the lower fluence segments 53. FIG. 5B illustrates an effective, cumulative energy distribution throughout the snail-shaped pattern along a cross-section of the macro-spot 42 and 44 shown in FIG. 5A taken at line A-A′, which represents a beam profile that may have been applied, using a single-beam, single-pulse laser or a continuous laser, to create the macro-spot 42 and 44 and the resulting microchannel. Referring to FIGS. 6A and 6B, in contrast, other macro-spots 45 may be formed with different distributions of energy and fluence along the snail-shaped pattern. FIG. 6A is a top view of the snail-shaped pattern of a macro-spot 45 that illustrates lower fluence 54 applied approximately at or along a center of the treatment spot 45 in comparison to fluence applied along marginal and peripheral segments 55 of the snail-shaped pattern. FIG. 6B illustrates an effective, cumulative energy distribution throughout the snail-shaped pattern along a cross-section of the macro-spot 45 shown in FIG. 6A taken at line B-B′ that represents a beam profile applied to create the macro-spot 45 and resulting microchannel. FIGS. 7A and 7B illustrate another configuration of the snail-shaped macro-spot 47 according to the invention created with intermittent scanning along the spiral scan line that draws the macro-spot 47 with a discontinuous snail-shaped pattern. In one configuration of the macro-spot 47 shown in FIG. 7A, the laser energy is alternately applied and withdrawn along the spiral scan line during continuous scanning to draw the discontinuous pattern. The intermittent applications of laser energy may be applied along the scan line for identical durations throughout scanning, resulting in relatively even distributions of energy along the spiral scan line, or may be applied for varied durations such that segments of the scan line to which laser energy is applied are varied in length. FIG. 7B illustrates a potential cumulative energy distribution throughout the snail-shaped pattern along a cross-section of the macro-spot 47 shown in FIG. 7A taken at line C-C′ that represents a beam profile. The snail-shaped patterns of the macro-spots 42 and 44 shown in FIGS. 5A through 7B illustrate the potential of the microablation method according to the invention to control and vary the energy levels and fluence throughout the snail-shaped macro treatment spot 42 and 44 before or during scanning to thereby create microchannels along treatment areas having required or desired parameters and configurations that may be advantageous toward optimizing a treatment protocol for a particular skin condition or pathology. While the snail-shaped macro-spots 42 and 44 described above are created with circular spiral scanning patterns, the invention is not so limited and envisions that other spiral patterns are possible for creating the shaped macro-spot 42 and 44. Referring to FIGS. 8A-8D, other possible alternative scanning patterns according to the invention are illustrated that do not include a circular spiral, but may include a rectangular-shaped, triangular-shaped and other shaped spiral pattern 49 as shown. Those of ordinary skill in the art will appreciate that other spiral shapes and profiles are possible to create the shaped pattern of the macro-spots.
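A circular spiral scan line of the kind used to draw the snail-shaped macro-spots can be sketched as an Archimedean spiral whose loop spacing sets the spread or density of the pattern. The parameterization below is an assumption made for illustration; as noted above, the invention also contemplates rectangular-shaped, triangular-shaped, and other spiral patterns.

# Illustrative sketch of a circular "snail-shaped" scan line: an Archimedean
# spiral whose loop spacing controls how dense or open the macro-spot is.
import math

def snail_scan_path(outer_diameter_mm: float,
                    loop_spacing_mm: float,
                    points_per_loop: int = 72):
    """Yield (x, y) scan points along a spiral from the center outward.

    loop_spacing_mm is the radial distance between successive loops; smaller
    values give a denser, more homogeneous macro-spot."""
    max_radius = outer_diameter_mm / 2.0
    n_loops = max_radius / loop_spacing_mm
    total_points = int(n_loops * points_per_loop)
    for i in range(total_points + 1):
        theta = 2.0 * math.pi * i / points_per_loop
        radius = loop_spacing_mm * theta / (2.0 * math.pi)
        yield (radius * math.cos(theta), radius * math.sin(theta))

if __name__ == "__main__":
    # A 1.4 mm macro-spot drawn with 0.12 mm between loops (denser pattern)
    path = list(snail_scan_path(outer_diameter_mm=1.4, loop_spacing_mm=0.12))
    print(f"{len(path)} scan points, final radius {math.hypot(*path[-1]):.2f} mm")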
With further reference to FIGS. 3A and 3B, the method according to the invention may control and vary the laser beam profile and scanning movement to create macro-spots 42 and 44 having a snail-shaped pattern with a given spread or density. As shown in FIG. 3A, some configurations of macro-spots 42 may have a snail-shaped pattern that is dense and less open, while other configurations of macro-spots 44 may have a snail-shaped pattern that is less dense and more open, as shown in FIG. 3B. Control and variation of the spiral scanning movement of the laser beam helps to create the snail-shaped pattern with a required or desired spread or density, which is a direct result of the distance between successive snail pattern loops. In those configurations of the macro-spots 42 and 44 shown in FIGS. 3A and 3B, successive spiral loops are formed from a given center of the spiral scan line with a substantially consistent gradual increase in radii from the spiral scan line center to the pattern periphery, such that, distances between successive spiral loops within the pattern are substantially the same. Alternatively, successive spiral loops may be formed with gradually increasing or gradually decreasing radii from the spiral scan line center, such that, distances between successive spiral loops gradually increase or gradually decrease toward the pattern periphery. In addition, spiral loops may be formed with continuously increasing and decreasing radii from the spiral scan line center, such that, distances between spiral loops are inconsistent. The microablative methods according to the invention, as well as the system 1 and/or laser unit 2 according to the invention, thereby enable control and adjustment of the spread or density of the snail-shaped pattern of each macro-spot 42 and 44, as well as control and adjustment of energy distributions and, in particular, energy levels and fluence applied along the spiral scan line that forms the snail-shaped pattern. The methods permit control and adjustment of these parameters prior to and/or during scanning treatments. The methods also provide flexibility in controlling and adjusting parameters of beam profiles so that effective, final cumulative beam profiles are achieved that are specific to and advantageous for treatments of particular skin and tissue conditions or pathologies. Referring to FIGS. 9A and 9B, cross-sections of treated tissue are shown that illustrate the macro-spot impact and the tissue effects resulting from fractional treatment patterns of macro-spots 42 and 44 according to the invention. The spread or density of the snail-shaped pattern of the macro-spots 42 and 44 may be controlled to create dense or spread-out ablation zones 72 and 74. In addition, the density of the snail-shaped pattern of macro-spots 42 and 44 may be further controlled to affect the homogeneity of tissue ablation achieved within a given microchannel 62 and 64. As shown in FIG. 3A, spots 42 having a dense (compared to the pattern of FIG. 3B) snail-shaped pattern create microchannels 62 with a more homogeneous spot impact. In contrast, as shown in FIG. 3B, spots 44 having a less dense or more spread out snail-shaped pattern create microchannels 64 with a non-homogeneous spot impact. More specifically, FIG. 9A shows the macro-spot 42 having a dense and less open snail-shaped pattern that creates a resulting microchannel 62 with a substantially homogeneous impact.
The spiral loops42′ of the macro-spot42ablate areas of tissue72with a corresponding density, such that, the microchannel62includes a spot impact of substantially contiguous ablated zones72. In contrast,FIG.9Bshows the macro-spot44having a more open snail-shaped pattern that creates the resulting microchannel64with a non-homogeneous impact. The spiral loops44′ of the macro-spot44ablate areas of tissue74with a corresponding density, such that, the microchannel64includes areas of undamaged tissue50between zones of ablated tissue74. As mentioned, the spread or density of the spiral loops42′ and44′ of the macro-spots42and44controls the spot impact that results in certain configurations of the microchannels62and64at least in terms of homogeneity of ablation as shown here. In addition, the spiral loops42′ and44′ of the snail-shaped patterns42and44are arranged such that one fractional pattern of impact spots or ablated zones72and74is created within another fractional pattern of multiple microchannels62and64along a treatment area. The macro-spots42and44shown inFIGS.9A and9Bare presumed to have substantially consistent distributions of energy levels and fluence along the scan lines forming the snail-shaped patterns, such that, the ablated zones72and74within a single microchannel62and64have substantially similar depths and diameters. However, as described below with reference toFIGS.12A and12B, macro-spots that have varying energy levels and fluence along the spiral scan line forming the snail-shaped pattern would form ablated zones within a single microchannel having different depths and possibly different diameters. As mentioned, relatively large macro-spots42and44are advantageous for treating large areas of tissue. The resulting microchannels62and64formed from the macro-spots42and44may be superficial, penetrating below the tissue surface to depths of from about 1 um to about 200 um, and may have the deepest points of the microchannels62and64approximately about the centers of the microchannel bottoms, depending on the energy levels and fluence applied along the spiral scan line drawing or creating the snail-shaped pattern. The sizes of the macro-spots42and44may create microchannels62and64having widths (diameters) of from about 200 um to about 2 mm. Referring toFIG.10, a portion of the microchannel64shown inFIG.9Billustrates the tissue effects resulting from microablative treatment with the macro-spot44patterns. The spot impact of the spiral loops of the macro-spot44is shown by the ablated zones74, which are formed from heating or vaporizing tissue as a result of the energy levels and fluence applied along the spiral scan line of the snail-shaped pattern. Coagulation zones C and residual heating zones R may form within tissue surrounding the ablated zones74as a result of lower energy levels and fluence received along certain depths of the subsurface tissue. The microablation treatment pattern thereby preferentially heats tissues at certain required or desired depths below the tissue surface to effect treatment, while not affecting subsurface tissue not targeted for treatment, which remains undamaged tissue U. As described above, the macro-spot44having a less dense and open spiral scan line may result in areas of undamaged tissue50throughout the microchannel64, such as between adjacent ablated zones74.
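As a rough illustration of the relationship just described between loop spread and homogeneity, the following sketch, which is not from the disclosure and uses assumed names and numbers, estimates the ablated fraction of a channel floor and whether adjacent ablated zones merge, given the loop pitch and an effective beam spot width.

```python
# Illustrative estimate only: treats the spot impact as a swept beam of fixed width
# crossing spiral loops separated by a fixed pitch.
def ablated_fraction(loop_pitch_um: float, beam_spot_um: float) -> float:
    """Approximate fraction of the channel floor covered by ablated zones, capped at full coverage."""
    return min(1.0, beam_spot_um / loop_pitch_um)

def is_homogeneous(loop_pitch_um: float, beam_spot_um: float) -> bool:
    """Adjacent loops merge into contiguous ablated zones when the spot is at least as wide as the pitch."""
    return beam_spot_um >= loop_pitch_um

for pitch in (20.0, 60.0):   # dense versus open snail-shaped patterns
    print(pitch, ablated_fraction(pitch, 30.0), is_homogeneous(pitch, 30.0))
```

With a 30 um effective spot, a 20 um pitch yields contiguous ablation (a homogeneous impact), while a 60 um pitch leaves roughly half of the floor as undamaged tissue between ablated zones.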
The spread or density of the spiral scan line can thereby help to control and vary the ratio of damaged tissue to undamaged tissue within a given microchannel, such that, a macro-spot can be configured to have more or less homogeneity within a microchannel. Referring toFIG.11, a cross-section of a microchannel66and spot impact in a portion of treated tissue is illustrated. The microchannel66has a homogeneous spot impact with contiguous ablated zones76and78. The ablated zones78oriented at substantially the center of the microchannel66have greater depths than those ablated zones76oriented toward the margins and periphery of the microchannel66. The patterning of depths is illustrative of a spot impact that may result from a macro-spot42and44having higher energy levels and fluence applied approximately about or along the center of the snail-shaped spot pattern in comparison to energy levels and fluence applied along marginal segments and the periphery of the pattern, as is illustrated inFIG.5A. In effect, the higher energy levels and fluence substantially about or along the center of the macro-spot42and44destroy or vaporize tissue to greater depths along or about the center of the microchannel66. Referring toFIGS.12A and12B, a cross-section of a microchannel68and a spot impact in a portion of treated tissue are illustrated. The microchannel68has a non-homogeneous spot impact with undamaged tissue50between some of the ablated zones80. The ablated zones80and82have substantially similar depths, but are either contiguous or non-contiguous with adjacent ablated zones as a result of the density or spread of the spiral scan line that forms the snail-shaped macro-spot84. As shown inFIG.12A, the macro-spot84is formed with gradually decreasing radii from the center86of the spiral scan line, such that, distances between successive spiral loops gradually decrease toward the macro-spot84periphery. The spot impact that results includes undamaged zones50of tissue between ablative zones80along the center of the microchannel68due to the larger radii and greater distances between successive spiral loops emanating from the spiral scan line center86. The microchannel68also includes contiguous ablative zones82along the margins and periphery of the microchannel68. The microchannels66and68shown inFIG.11andFIG.12B, respectively, illustrate only a few of a wide variety of possible configurations of microchannels that may result from variations in the spread and density of the spiral scan line of the snail-shaped macro-spot and from variations in the distribution of energy levels and fluence applied along the spiral scan line. In other configurations of the microablative methods according to the invention, and the system1and/or laser unit2according to the invention, the CO2laser and the scanning device30may be configured additionally for deep fractional microablative treatments by which deep microchannels6A, such as shown inFIG.2, are created having depths and diameters of, for instance, up to about 1000 um and 120 um, respectively. In this configuration, the CO2laser and emitting device3may apply ablative radiation to treatment areas with two or more laser beam profiles, such that, micro-spot patterns and resulting arrays of deep microchannels6A are combined with macro-spot42and44patterns and resulting large, superficial microchannels62and64to form a microablative pattern. Micro-spot and macro-spot patterns may be so combined in an unlimited manner.
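One way to picture such a combined pattern is sketched below. This is an illustrative Python fragment, not the patented control software; the grid spacing, spot counts, and dimensions are placeholder values chosen only to mirror the 120 um pulsed micro-spots and 700 um continuous-wave macro-spots discussed in the next paragraph.

```python
import itertools
import random

def micro_spot_grid(area_mm=10.0, pitch_mm=1.0, spot_um=120, depth_um=1000):
    """Pulsed-mode micro-spots laid out on a regular grid to create deep microchannels."""
    coords = [i * pitch_mm for i in range(int(area_mm / pitch_mm) + 1)]
    return [dict(mode="pulsed", x=x, y=y, spot_um=spot_um, depth_um=depth_um)
            for x, y in itertools.product(coords, coords)]

def macro_spot_set(n_spots=40, area_mm=10.0, spot_um=700, depth_um=150):
    """Continuous-wave macro-spots scattered over the same area for large, superficial channels."""
    return [dict(mode="cw", x=random.uniform(0, area_mm), y=random.uniform(0, area_mm),
                 spot_um=spot_um, depth_um=depth_um) for _ in range(n_spots)]

plan = micro_spot_grid() + macro_spot_set()
random.shuffle(plan)   # interleave the two spot types so the source can switch modes spot to spot
```

Shuffling the plan is one simple way to model the random, overlapping application of the two pattern types; a real controller could instead order the spots to minimize mode switching or scanner travel.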
In addition, respective densities of the spot patterns may be controlled, and may be applied along treatment areas in random, overlapping or other patterns. Software of the system1and/or laser unit2controls and designs the laser beam profiles by manipulating, for instance, beam power, to create arrays of single, deep microchannels6A,6B and6C and patterns of homogeneous or non-homogeneous large, superficial microchannels62and64to achieve variable ablation depths and diameters and to thereby more precisely control treatment of subsurface tissue. Such flexibility in combining different laser beam profiles to produce two or more types of microchannels provides for customized beam profiles and thereby optimized microablative treatment protocols for a particular condition and pathology, as well as improved results per treatment session. In one configuration, the method according to the invention initially scans a treatment area in a pulsed mode to form patterns of micro-spots with a given spot size, e.g., 120 um, to create an array of deep microchannels6A while controlling the density of the spot patterns. Secondarily, the method scans the same treatment area in a continuous wave mode to form patterns of macro-spots42and44with a given spot size, e.g., 700 um, to create a pattern of large, superficial microchannels62and64while controlling the density of the spot patterns. In another embodiment, this can be done simultaneously by a fast switching between pulsed mode and continuous mode so that in a single run the laser can embed microchannels in various desired locations while drilling a macrochannel. The combinations of micro and macro treatment spots, such as, for example, shown inFIGS.3C and3D, are unlimited and provide flexibility within a single CO2system in terms of control and adjustment of spot size, density, energy distribution, and other parameters discussed above. Microablative treatment patterns thereby may be readily controlled and adjusted in response to treatment demands.

Ablative Methods to Maintain Microchannels Open

Referring again toFIG.1, current methods of microablation of tissue5often experience problems associated with the ability of microchannels6to retain their initial diameter (D) and/or depth (d) that result from application of ablation radiation to the surface of tissue. Microchannels6have a tendency to collapse mechanically and to fill with fluid. One solution to this problem is to freeze at least a portion of the tissue of the treatment area prior to applying ablation radiation. Freezing the tissue helps it become relatively stiff and helps to block the flow of fluids into the microchannels. In one aspect, the invention provides a method of patterning microchannels created in a treatment area and forming microchannels with different diameters and depths to achieve different functions within the microchannels and the surrounding tissue. The patterning of microchannels, and the differences between microchannels with respect to depth and diameter, help to achieve certain thermal effects and help to advantageously shrink and dry certain microchannels and associated surrounding tissues. Referring again toFIG.2, and with further reference toFIG.1, the method of the invention ablates a treatment area5with laser radiation to create deep microchannels6A and relatively more shallow or superficial microchannels6B. The depth (d) and diameter (D) parameters of the microchannels6A and6B are controlled by the energy characteristics of the applied laser radiation.
The deep microchannels6A include a zone of ablation6having a certain depth (d) and diameter (D) and a zone of thermal damage7to the dermal tissue, e.g., “lethal damage” or “sublethal damage,” resulting from the laser radiation. The relatively more shallow or superficial microchannels6B have a certain depth (d) and diameter (D) to create a zone of coagulation7only, within which no ablation occurs. Rather, the zone7experiences tissue coagulation that helps to shrink and to dry the superficial microchannel6B and its surrounding tissue. The invention is not limited to laser radiation and envisions that the method may employ coherent, non-ablative light in one or more different modalities, such as, for instance, a combination of treatment that may use one or more of RF, US, IPL or other coherent light. Referring toFIG.13A, and with further reference toFIG.2, the combination of deep and superficial microchannels160A and160B is created in the treatment area5in a pattern160whereby the deep microchannel160A is surrounded by multiple superficial microchannels160B, which may be referred to as a “flower pattern,” wherein the deep microchannel160A defines the flower center or stem and the multiple superficial microchannels160B surround the deep microchannel160A like “petals.” As shown inFIG.13A, a single deep microchannel160A is surrounded by four superficial microchannels160B. The invention is not limited in this respect and envisions that any number of superficial microchannels160B may surround the deep microchannel160A. In addition, the ratio of deep to superficial microchannels160A and160B may be varied. Further, the invention is not limited to the pattern160illustrated inFIG.13Aand anticipates that other configurations or patterns of deep microchannels160A and superficial microchannels160B are possible to achieve the functions of the patterning, as described in further detail below. As a result of ablating the treatment area150with the pattern160of deep and superficial microchannels160A and160B, the coagulation effects resulting from ablation or formation of the superficial microchannels160B help to shrink and to dehydrate the microchannel160B and the surrounding tissue within the coagulation zone7ofFIG.1. The coagulation and drying of such surrounding tissue further helps to prevent flow of fluids into the microchannels160A and160B. Because of shrinking and drying of tissue within the coagulation zone7, the superficial microchannel160B and coagulation zone7stiffen and thereby serve as mechanical support to the adjacent deep microchannel160A. The mechanical support that the stiffened superficial microchannels160B and surrounding zones7lend to the deep microchannel160A helps to prevent mechanical collapse of the deep microchannel160A. The surrounding microchannels160B and coagulation zones7thereby help the deep microchannel160A remain open and relatively dry for a sufficient period of time after ablation to help to enable treatment and to help to enhance the effectiveness of such treatment. Referring toFIG.13B, a cross-sectional illustration shows a microchannel160C with coagulation areas or zones7A and7B formed along portions of walls of the microchannel160C. Coagulation zones7A and7B may be formed during ablation that forms the microchannel160C in a treatment area.
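A minimal sketch of the flower-pattern layout described above, with one deep microchannel ringed by superficial microchannels, is given below; the coordinate convention, petal count, and ring radius are illustrative assumptions only, not prescribed values.

```python
import math

def flower_pattern(center, n_petals=4, petal_radius_um=300.0):
    """One deep microchannel at the center surrounded by n_petals superficial microchannels."""
    cx, cy = center
    deep = dict(kind="deep", x=cx, y=cy)
    petals = [dict(kind="superficial",
                   x=cx + petal_radius_um * math.cos(2 * math.pi * k / n_petals),
                   y=cy + petal_radius_um * math.sin(2 * math.pi * k / n_petals))
              for k in range(n_petals)]
    return [deep] + petals

pattern = flower_pattern((0.0, 0.0), n_petals=4)   # the FIG. 13A-style arrangement
```

Changing n_petals or the ring radius varies the ratio of superficial to deep microchannels, and the pattern can be tiled across a treatment area to cover larger regions.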
Application of irradiation energy configured in accordance with one or more parameters applies to the skin or tissue of the treatment area and forms the microchannel160C to an initial approximate desired or required depth; thereafter, irradiation energy applied to the treatment area may be altered or modified in accordance with one or more other or different parameters, such that, as a result, irradiation energy forms coagulation zones7A, e.g., at or proximate to the initial approximate depth achieved, along portions of walls of the microchannel160C as shown inFIG.13B. Ablation may continue by irradiating energy configured in accordance with one or more parameters to continue formation of the microchannel160C to a subsequent approximate depth that is relatively deeper than the initial approximate depth achieved. Irradiation energy configured with one or more other or different parameters may be applied that forms coagulation zones7B, e.g., at or proximate to the subsequent approximate depth achieved, along portions of walls of the microchannel160C. As shown inFIG.13B, the coagulation zones7A and7B are defined at different depths of the microchannel160C. The coagulation zones7A and7B along the microchannel160C walls help to keep the microchannel160C open once formed and help to prevent or at least minimize mechanical collapse of the microchannel160C, thereby helping to provide mechanical stability to the microchannel160C. Referring toFIG.14, the pattern of microchannels shown inFIG.13may include a pattern161A whereby superficial microchannels166B closely abut or are proximate to a deep microchannel166A. Referring toFIG.15, a schematic cross-sectional view illustrates an alternative configuration of the microchannels160B ofFIG.13. In the configuration ofFIG.15, the microchannels170A and170B may define relatively shallow coagulation zones or holes that provide non-invasive, fractional treatment without creating the “microchannels”160B ofFIG.13. For instance, the depth of such coagulation zones or holes may vary from about zero to about one-third a depth (d1) of a corresponding deep microchannel172. Creating shallow coagulation zones or holes causes the thermally-affected tissue surrounding the zones or holes to stiffen. The shallow coagulation zones or holes also may serve as buffers or reservoirs to help collection of fluid before fluid flows into a deep microchannel172. Ultrasonic and Pressurized Systems to Maintain Open Microchannels In another aspect, the apparatus includes a first energy application device to direct energy at tissue of a patient to cause at least one channel to be formed, a second energy application device to direct energy at the tissue of the patient to prevent the at least one channel from substantially closing, and a controller to control application of energy from the first energy application device to form the at least one channel, and control application of energy from the second energy application device to the at least one channel to prevent the at least one channel from substantially closing for at least a pre-determined interval of time. Embodiments of the apparatus may include one or more of the following features. The second energy application device may include a controllable energy application device to generate one or more standing waves over the at least one channel to elevate the Young modulus of the tissue. 
The at least one channel may include a plurality of channels, and the controllable energy application device to generate the one or more standing waves may include a controllable energy application device to generate one or more standing waves having wavelengths based on a distance between at least two of the plurality of channels. The second energy application device may include a fluid source, and a pump to pump pressurized fluid from the fluid source towards the at least one channel. The pump may further be configured to create a vacuum external to the at least one channel to remove at least some of the fluid that was directed into the at least one channel. The fluid of the fluid source may include one or more of, for example, gas, enhancing fluid to enhance the effect of laser energy transmitted through the pressurized enhancing fluid, and/or medicinal fluid. The second energy application device may include a controllable ultrasound device to apply ultrasound energy in a direction parallel to a longitudinal axis of the at least one channel to generate standing waves of varying amplitude to cause varying elasticity levels of the tissue. In another aspect, a method is disclosed. The method includes forming at least one channel in a tissue of a patient, and applying energy to the at least one channel to prevent the at least one channel from substantially closing for at least a pre-determined interval of time. Embodiments of the method may include any one of the features described above in relation to the apparatus, as well as one or more of the following features. Applying the energy may include generating one or more standing waves over the at least one channel to elevate the Young modulus of the tissue. The at least one channel may include a plurality of channels, and generating the one or more standing waves may include generating one or more standing waves having wavelengths based on a distance between at least two of the plurality of channels. The one or more standing waves may include troughs located approximately at a halfway point between the at least two of the plurality of channels. Generating the one or more standing waves having wavelengths based on the distance between at least two of the plurality of channels may include generating one or more standing waves having wavelengths equal to an integer multiple, n, of the distance between the at least two of the plurality of channels. Generating the one or more standing waves may include generating one or more ultrasound standing waves. Applying the energy may include applying ultrasound energy in a direction parallel to a longitudinal axis of the at least one channel to generate standing waves of varying amplitude to cause varying elasticity levels of the tissue. Applying the energy may include directing pressurized fluid into the at least one channel. The pressurized fluid may include one or more of, for example, pressurized gas, pressurized enhancing fluid to enhance the effect of laser energy transmitted through the pressurized enhancing fluid, and/or pressurized medicinal fluid. Directing the pressurized fluid may include directing the pressurized fluid at a pre-determined time interval following the application of energy to form the at least one channel. The method may further include removing at least some of the fluid occupying the at least one channel by creating a vacuum externally to the at least one channel. 
Forming the at least one channel may include forming at least one channel having pre-determined dimensions in the tissue, and a respective thermally affected thermal zone having a pre-determined configuration profile, the thermal zone extending away from the at least one channel. Disclosed herein are apparatus, systems, methods and devices, including an apparatus for treating tissue that includes a first energy application device to direct energy at tissue of a patient to cause at least one channel to be formed, a second energy application device to direct energy at the tissue of the patient to prevent the at least one channel from substantially closing, and a controller to control application of energy from the first energy application device to form the at least one channel, and control application of energy from the second energy application device to the at least one channel to prevent the at least one channel from substantially closing for at least a pre-determined interval of time. In some embodiments, the second energy application device may include a controllable ultrasound device to apply ultrasound energy in a direction parallel to a longitudinal axis of the at least one channel to generate standing waves of varying amplitude to cause varying elasticity levels of the tissue. In some embodiments, the second energy application device may include a fluid source, and a pump to provide pressurized fluid from the fluid source towards the at least one channel. Hole (or channel) formation in the tissue of a person may be performed, in some embodiments, through microablation procedures by, for example, applying electromagnetic radiation to the tissue for ablating a channel therein having a (predetermined) width and predetermined depth. In some embodiments, the procedure includes non-ablatively heating tissue on the bottom of the channel with electromagnetic radiation and creating a thermal affected zone of predetermined volume proximate said channel. Suitable radiation generating devices that may be used in forming microchannels through microablation include, for example, a CO2 laser device, an Er:YAG laser device, a Tm:YAG laser device, a Tm fiber laser devices, an Er fiber laser device, a Ho fiber laser device, and/or other types of laser devices. Other types of radiation or energy sources may also be used. A schematic diagram of an apparatus to perform microablation to form microchannels is provided inFIG.16. Briefly, the apparatus depicted inFIG.16may include a laser unit200and a laser emitting device203for ablating a microchannel206into a tissue205, for example, for applying a treatment thereto. The microchannel206may be, e.g., a column, a well, a hole, or the like, created in the tissue205by ablating the tissue205by the laser emitting device203and the laser beam204. Microablation of the tissue205may result in ablation of the microchannel. Microablation of the tissue may also result in dissipation of heat from the heated and evaporated tissue by the tissue surrounding the resultant microchannel206. Thus, ablation of the tissue205, producing the microchannel206, may result in a thermal affected zone207surrounding the walls and/or bottom of the microchannel206. In some embodiments, hole stabilization mechanisms may be based on use of an ultrasound device208with the laser emitting device203. 
The ultrasound generator208generates standing waves along the skin's plane, which is perpendicular to the main axis of the holes, in order to elevate the effective Young modulus of the tissue and make it more rigid. The more rigid the tissue around the holes is, the less it tends to collapse and block the hole. A standing wave creates “stationary” crests and troughs. The distance between them is proportional to the wavelength. Assuming a certain hole distribution (distance between holes), one can choose a certain wavelength or wavelengths that localize these crests and troughs on the holes or in between the holes. One option would be to use a wavelength which is equal to the distance between the holes and to apply the ultrasound in such a relative geometry that the crests will be in the middle between holes. Ultrasound energy may be generated, in some embodiments, using an ultrasound generator, such as the ultrasound generator208depicted inFIG.16. In some implementations, the generator208may be a contact generator, in which the generator is mechanically coupled to the tissue (e.g., via a coupling layer such as a suitable fluid couplant), and causes resultant waves (acoustic waves) through mechanical excitation. Suitable contact-based generators may include, for example, an ultrasonic wheel generator (i.e., a moveable generator displaced over the object), an ultrasonic sled generator, and/or a water-coupled generator. These types of generators may include an ultrasonic transducer implemented, for example, using a piezoelectric element, or some other vibrating transducer, that mechanically oscillates at frequencies controllable by regulating the voltage/current applied to the piezoelectric element. In some implementations, the generator208may be a non-contact generator, i.e., the generator is not in direct mechanical contact with the object to be inspected. A suitable non-contact generator may be an air-coupled transducer that includes a mechanical vibrating transducer (e.g., such as a piezoelectric element) that can controllably oscillate to produce the ultrasonic waves applied to the object. The output port of such a generator is placed proximate to the object (e.g., the tissue), and emitted ultrasonic waves are directed to the object at the application point via an air barrier separating the output port of the generator and the object. Other types and/or implementations of generators to cause waves (ultrasonic waves or other types of waves) may also be used. In some embodiments, another implementation for hole stabilization is to use any wavelength with an integer ratio to the distance between holes. Such an implementation can be done on a symmetric hole pattern (matrix) or statistically on a randomized hole distribution. In some embodiments, hole stabilization can be achieved by a “pushing” mechanism. Specifically, low amplitude high resolution ultrasound is used today with femtosecond lasers to displace bubbles during the treatment of human eye lenses. Using ultrasound for transdermal drug delivery is also known. A similar mechanism may thus be used to push material through the holes once they are open. This requires an ultrasound application (e.g., substantially simultaneously) along the hole's main axis perpendicular to the skin surface. In some embodiments, application of ultrasound energy may be used to help material, like fat, which is ablated at the bottom of the hole, to be evacuated through the hole (or channel).
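The wavelength selection rule described above, a standing-wave wavelength equal to, or an integer multiple of, the hole spacing, can be turned into candidate drive frequencies as in the sketch below. This is not part of the disclosure; the wave speed is a placeholder that would have to be measured or modeled for the actual tissue and wave type.

```python
def standing_wave_frequencies(hole_spacing_m: float, wave_speed_m_s: float, max_n: int = 4):
    """Candidate drive frequencies whose wavelength is an integer multiple of the hole spacing."""
    return [dict(n=n,
                 wavelength_m=n * hole_spacing_m,
                 frequency_hz=wave_speed_m_s / (n * hole_spacing_m))
            for n in range(1, max_n + 1)]

# Example: 1 mm hole spacing; 1540 m/s is an assumed soft-tissue compressional speed, not a measured value.
for candidate in standing_wave_frequencies(1e-3, wave_speed_m_s=1540.0):
    print(candidate)
```

For a fixed spacing, the applied phase (relative geometry) would then be chosen so that crests fall midway between holes, as stated above.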
To perform such material evacuation, vibrations are induced along the hole's walls. One way to do that is by changing the amplitude of the standing waves. Under the assumption that a standing wave will change the tissue elasticity, a “pulsating” elasticity (slightly changing elasticity) will result in small movements of the hole's wall. This will help the material being evacuated to travel in either direction, e.g., in and out. If a certain pressure gradient can also be applied by external vacuum, skin stretching, or traveling waves along the hole's wall, then one can control the direction and enhance the evacuation of material from the bottom of the hole. In some implementations, channel stabilization may be achieved by using a pressurized fluid, e.g., gas or liquid, to keep open the holes created by, for example, a CO2 fractional laser in order to allow a second “shot” with the bottom of the hole still open. Such implementations include a mechanism comprising an adapter300that fits on the end portion of the laser302as illustrated inFIGS.17A and17B. In such implementations, a vacuum tube304with a vacuum source306is attached to the adapter300, and a high pressure pump308and a tube310coupled to the adapter (e.g., at its other end) introduce a fluid into the adapter, for example, just prior to activation of the laser. As illustrated inFIG.17B, tubes304and310, which carry vacuum and pressurized fluid(s), may have a plurality of ports within the adapter to allow rapid introduction and evacuation of fluids. In some embodiments, the fluid could be a material which enhances the ability of the first laser firing to achieve its desired depth and includes medicinal and/or anesthetic substances. In operation, the adapter300is placed in contact with the skin305as shown inFIG.17Aand pressure is applied. A pre-trigger mechanism forces pressure and fluid into the adapter, and the laser302is then fired. The fluid migrates into the hole206waiting for the second firing (or other treatment). Then the adapter can be removed or even the vacuum pump activated to remove the fluid into the adapter's tube. Instead of a separate vacuum and pressure source, a single mechanism to perform both functions, such as a reversible pump, may be used. The foregoing pressurized system may be used instead of the application of ultrasound energy or together with the application of ultrasonic energy. An additional advantage is that use of the pressure should also serve to reduce pain to the patient. Under the “Gate Theory” of pain management, if the skin is put under pressure (e.g., vacuum or positive pressure), the brain is tricked into feeling the pressure and not the pain of the holes being drilled into one's skin (this is predicated on a concept similar to that implemented in the commercially-available ShotBlocker® device, which is a pressure plate placed around an injection site). When used, the pressure on the skin makes the patient “forget” about the injection pain.

Control of Laser Treatment Spots

Referring toFIG.18, the laser unit ofFIG.1, for example, may deliver the laser beam in a first predetermined pattern332of treatment spots or in a second predetermined pattern of treatment spots334. Alternatively, the laser unit may modify the laser beam during the course of a single treatment to deliver both the first and the second predetermined patterns332and334of treatment spots producing an area of overlaid patterns336along the surface of the tissue.
The scanner30ofFIG.1and software according to the invention enable the laser emitting device3ofFIG.1to deliver a laser beam to the surface of tissue in one or more predetermined patterns of treatment spots, as described with reference toFIG.18, while randomizing the sequence of treatment spots applied to the tissue surface. The treatment spots are randomized across a given treatment area because of the movement of the scanner, as shown by arrow40inFIG.1, across the treatment area. While the laser emitting device3emits the laser beam, the movement of the scanner across the treatment area in effect randomizes or “spreads” the predetermined pattern across the treatment area. As a result, the density and distribution of the treatment spots in the given area are random. The scanner30may be moved repeatedly across the given treatment area such that an overlap of treatment spots is produced which thereby results in greater spot density and distribution. In addition, the movement of the scanner30permits treatment of a relatively large treatment area and effectively scans or “brushes” the tissue surface with treatment spots. Repetitive scans or brushes result in varying densities and distributions of treatment spots across the given treatment area that are a function of the number of brushes and the overlap between each brush across the treatment area. Referring toFIG.19, a facial image illustrates multiple treatment spots338randomly distributed across a treatment area with varying spot densities at certain areas within the treatment area. As shown inFIG.19, by way of example, the density of treatment spots may be greater in the middle section of the forehead, an area in which wrinkles typically may be present. However, treatment density may be varied from that shown inFIG.19according to a particular patient's needs. Random distribution and varying density of treatment spots338results, as mentioned, from the scanner30moving across the treatment area to deliver multiple scans or brushes as well as overlaying scans or brushes. The scanner and software according to the invention thereby enable greater control of treatment spots in terms of distribution and density of treatment spots. An operator, such as a physician, may thereby distribute or “spread” treatment spots in a controlled and intuitive manner whereby the operator would scan a particular area of surface tissue with greater density, but scan another area with less density, depending upon the tissue and the treatment desired. For instance, certain areas may be scanned or brushed repeatedly due to different skin characteristics in terms of pigmentation, elasticity, distance to bones, etc. Other areas may receive less treatment and, therefore, have less spot density and/or have a gradual decrease or phasing out of spot density, such as along the boundaries between treatment areas and the eyes, lips, and hair. In one embodiment, instead of the treatment spots being all of either type,6A or6B as inFIG.2, the treatment spots may be mixed and matched so that a user-selectable proportion of6A type and6B type treatment spots are delivered to the patient's skin. For example, the treatment spots may be a mixture to form a plurality of spots160as shown inFIG.13, with their relative spacing to one another controlled by the physician.
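A user-selectable mix of the two spot types described above could be drawn as in the following illustrative fragment; the type labels, proportion, and seed are examples for the sketch only, not a prescribed implementation.

```python
import random

def mixed_spot_plan(n_spots=200, fraction_type_6a=0.3, seed=0):
    """Assign each delivered treatment spot a type so that roughly the requested proportion
    are deep '6A'-style spots and the remainder are superficial '6B'-style spots."""
    rng = random.Random(seed)
    return ["6A" if rng.random() < fraction_type_6a else "6B" for _ in range(n_spots)]

plan = mixed_spot_plan(fraction_type_6a=0.3)
print(plan.count("6A"), plan.count("6B"))
```

The same idea extends to more than two spot types by drawing from a weighted list instead of a single threshold.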
In addition, the scanner may incorporate speed-sensing or distance-sensing technology so that the software can deliver a predetermined density of spots to an area of the patient's skin, irrespective of the speed with which the physician moves the scanner over the patient's skin. Also, under control of the physician, the scanner's software may provide treatment spots like theFIG.2type6A in some areas of the patient's skin only, and may provideFIG.2type6B spots in other areas of the skin, depending on the patient's skin characteristics such as skin elasticity, pigmentation, closeness to hairlines or the eyes, etc. The foregoing skin treatment is in contrast to the known “step and shoot” treatment in which the scanner is placed over a spot of skin, the laser is then activated, and the scanner is then moved to the next adjacent untreated area of the patient's skin. The somewhat random scanning sequences described above may also assist in lowering overall patient pain, as the scanner moves while firing the laser, thereby spreading the treatment spots over a broader area than with the traditional “step and shoot” method. The software may program the scanner to disallow two consecutive firings at predetermined distances from one another. In another embodiment of the invention, the software the scanner30employs to define the laser beam profile controls the scanning speed or speed of delivery of the treatment beam with respect to the speed with which a physician scans or brushes the treatment area. In one embodiment of the invention, the software correlates the scanning speed to the speed of the movement that the physician uses to scan or brush the treatment area. Correlating scan speed and speed of movement of the scanner helps to ensure application of a certain homogeneous distribution of treatment spots irrespective of the speed the physician uses to scan or brush the tissue surface. In another embodiment, the scanner and software according to the invention are configured to apply two or more predetermined patterns of treatment spots, such as shown inFIG.18. As a result, a dynamic distribution of different treatment spots having different tissue effects, as can be seen in the depth D of the microchannels6A and6B ofFIG.2, is created in the dermal layer. The software according to the invention allows selection and control of the different types of treatment spots or microchannels6A and6B. Such selection and control are achieved with at least the selection and control of the pulse width, the energy fluence, the pulse repetition rate, and any combination of these parameters, to create different treatment spots and to enable the scanner to emit laser energy that creates different treatment spots in a given treatment area. In addition, the software according to the invention will enable the selection and control of the ratio of two or more different treatment spots that are applied to the given treatment area. FIGS.2and18illustrate two different types of spots or microchannels6A and6B and two different predetermined patterns of their application. The scanner30and software according to the invention may create these different predetermined patterns in a randomized sequence to produce a varying distribution and density of treatment spots within a treatment area. The invention, however, is not limited in this respect and envisions that the software will permit the selection and control of a number of different types of spots or microchannels and any of a variety of spot patterns.
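The speed-sensing idea described at the start of this passage amounts to adjusting the firing interval to the measured hand speed so that the delivered spot density stays constant. The sketch below illustrates one way to compute that interval; the density target, speed values, and minimum interval are assumptions, not parameters taken from the disclosure.

```python
def firing_interval_s(target_spots_per_cm: float, scanner_speed_cm_s: float,
                      min_interval_s: float = 1e-3) -> float:
    """Time between pulses needed to hold the spot density constant at the measured hand speed."""
    if scanner_speed_cm_s <= 0.0:
        return float("inf")                      # scanner not moving: hold fire to avoid stacking spots
    interval = 1.0 / (target_spots_per_cm * scanner_speed_cm_s)
    return max(interval, min_interval_s)         # respect the source's maximum repetition rate

# 10 spots/cm stays 10 spots/cm whether the physician brushes at 2 cm/s or 8 cm/s.
for speed in (2.0, 8.0):
    print(speed, firing_interval_s(10.0, speed))
```

A distance-sensing variant would instead trigger a pulse each time the accumulated travel reaches the target spot pitch, which avoids relying on a speed estimate at all.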
The software according to the invention enables the scanner30to achieve multiple levels of penetration of the dermal layer. This enables a physician to tailor and to customize the microablation treatment in accordance with a patient's skin pathologies and pigmentation and to deliver optimal and highly customized microablation to a single treatment area. In a further embodiment of the invention, the scanner30and software according to the invention permit the selection and control of predetermined patterns of treatment spots that are not homogeneous. For instance, a pattern may produce a high density of treatment spots at and proximate to a center of the pattern, while producing a relatively low density of treatment spots at the periphery of the pattern. By combining capabilities of selection and control of different non-homogeneous treatment patterns and their densities and distributions in a given treatment area, the invention provides a physician with an ability to treat different skin characteristics simultaneously, a capability to vary depths of ablation, and a technique to accommodate the boundaries between treatment and non-treatment areas, such as eyes, lips, and hair. The software in effect allows repeated scanning or brushing, while applying precisely required or desired treatment spot densities.

Description of Foot Activated Control

Referring toFIG.20, in one aspect, the invention provides a foot-activated control (entitled herein a footswitch)410that is constructed and arranged for use in controlling and, more particularly, in actuating a light-based system or device. The footswitch410includes at least one electrical cable413to couple the footswitch410operatively to the light-based system or device. Such light-based system or device is configured for emission of laser and/or other coherent light applied in accordance with ablation methods to the surface of tissue for various treatments. The footswitch410includes a pedal412having, in one configuration, a substantially planar surface and sufficient area412A to receive at least a portion of an operator's foot. The pedal412is actuated or activated, e.g., depressed, by the operator's foot on the surface412A. In this manner, the footswitch410serves as an accelerator to increase or to decrease the firing of the light-based system or device, such that, the system or device increases or decreases, e.g., the duration of, the emission of ablation treatment radiation. For example, the footswitch410may be useful in connection with controlling the density and depth of treatment spots338inFIG.19. In one configuration of the invention, the footswitch410is constructed and arranged as a “smart” pedal412that provides a dynamic range of control of one or more parameters of the tissue ablation treatment, including, but not limited to, repetition rate, light energy, light penetration, light depth, treatment spot size, spot density, etc. Each parameter may be associated with a sensor414A,414B,414C, and414D that is integrated with the footswitch410and, for instance, is disposed below an outer sheath covering the surface412A of the pedal412(as shown in dashed lines inFIG.20). The operator may thereby control dynamically, during treatment, one or more parameters by actuating with their foot one or more sensors414A,414B,414C, and414D, alone or in any combination.FIG.20shows four sensors and a particular arrangement of the sensors414A,414B,414C, and414D on the pedal412.
The invention, however, is not limited in this respect and envisions that any number of sensors may be incorporated with the pedal412and in any of a variety of configurations and arrangements. Referring toFIG.21and with further reference toFIG.20, the footswitch410may be operatively coupled with a user interface416that enables the operator to select various modes of operation of and parameters for actuation by the footswitch410. The interface416may include a visual display417of the modes417A and the parameters417B that the footswitch410may control. Such modes and parameters417A and417B may be selected and activated for control by the footswitch410by, for instance, touch-screen software. In one configuration, the interface416may be incorporated with the light-based system or device to which the footswitch410is coupled operatively. Alternatively, or additionally, the interface416may be a peripheral device that is configured to operate alone or in conjunction with a controller, which is operatively coupled with the light-based system or device. The invention further includes any software, hardware, and firmware, and associated electronics, that are required to operate and to provide control of the footswitch410, the sensors414A,414B,414C, and414D, and the interface416, and that are required to integrate the footswitch410and the interface416with a light-based system or device and/or a controller. Having thus described at least one illustrative aspect of the invention, various alterations, modifications and improvements will readily occur to those skilled in the art. Such alterations, modifications and improvements are intended to be within the scope and spirit of the invention. Accordingly, the foregoing description is by way of example only and is not intended as limiting. The invention is limited only by the following claims and the equivalents thereto.
66,691
11857811
DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS The following description of certain embodiments presents various descriptions of specific embodiments. However, the innovations described herein can be embodied in a multitude of different ways, for example, as defined and covered by the claims. In this description, reference is made to the drawings where like reference numerals can indicate identical or functionally similar elements. It will be understood that elements illustrated in the figures are not necessarily drawn to scale. Moreover, it will be understood that certain embodiments can include more elements than illustrated in a drawing and/or a subset of the elements illustrated in a drawing. Further, some embodiments can incorporate any suitable combination of features from two or more drawings. Headings provided herein are for convenience only and do not necessarily affect the scope or meaning of the claims. For detection of epileptic seizures, non-invasive technology includes an electroencephalogram (EEG), functional magnetic resonance imaging (MRI)/computed tomography (CT) scans, and magnetoencephalography (MEG). For proper diagnosis or detection of epilepsy, both high temporal resolution and spatial resolution can be desired. EEG has a poor spatial resolution. Functional MRI and CT can be used for detection of epileptic events. They provide good spatial resolution. However, they have poor temporal resolution. Moreover, they are expensive and not portable. Despite limited spatial resolution, EEG continues to be a valuable tool for research and diagnosis. It is one of the few mobile techniques available and offers millisecond-range temporal resolution which is not currently possible with CT, positron emission tomography (PET), or MRI. Some treatments for epilepsy include surgery where either neurostimulator electrodes are implanted in the brain or a section of the brain is removed. Aspects of this disclosure relate to detection, localization, and/or suppression of an epileptic seizure and/or other neural activity in the brain. An acoustic wave from a seizure can be detected using acoustic transducers and/or any other suitable sensor. An acoustic transducer that can generate and/or detect an acoustic signal having a frequency of at least 20 kilohertz (kHz) can be referred to as an ultrasonic transducer. The coupling to a skull can provide a unique texture into the acoustic waves. This can enable localization of the source of the seizure within a few millimeters (mm) even though the wave length of a pressure wave is in the meter range. The disclosed technology can provide good temporal resolution and spatial resolution in detecting a seizure and/or other neural activity in the brain. In response to detecting a seizure at a particular location, ultrasound energy can be applied to the particular location to suppress action potential firings to thereby blunt a seizure. In embodiments disclosed herein, sensors (e.g., an array of acoustic transducers and/or accelerometers) are positioned in a helmet over the head of a person. The array of sensors can be arranged to detect an epileptic event at relatively low frequencies (e.g., in the kHz range). The epileptic event can be localized with millimeter resolution. In response to detecting and localizing the epileptic event, ultrasonic transducers can suppress the epileptic event by applying ultrasound energy at relatively high frequencies (e.g., 100 s of kHz) using a method to beamform a transmitted pressure at the location of the event. 
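For the beamformed suppression step, one common approach, assumed here purely for illustration and not taken from the disclosure, is to compute per-element transmit delays so that pulses from all transducers arrive at the localized source simultaneously. The sketch below ignores skull refraction and assumes a uniform sound speed; the array geometry and target coordinates are hypothetical.

```python
import numpy as np

def transmit_focus_delays(element_positions_m: np.ndarray, target_m: np.ndarray,
                          sound_speed_m_s: float = 1500.0) -> np.ndarray:
    """Per-element delays so that pulses from all elements arrive at the target at the same time."""
    distances = np.linalg.norm(element_positions_m - target_m, axis=1)
    return (distances.max() - distances) / sound_speed_m_s   # farthest element fires first (zero delay)

# Hypothetical ring of 16 transducers around the head, focusing on a localized source.
angles = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
elements = 0.09 * np.c_[np.cos(angles), np.sin(angles), np.zeros_like(angles)]  # ~9 cm radius
delays = transmit_focus_delays(elements, np.array([0.01, 0.02, 0.0]))
print(np.round(delays * 1e6, 2), "us")
```

In practice the skull's strong acoustic contrast would have to be folded into the delay (or full wavefield) computation, for example using the same patient-specific model that supports localization.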
The technology disclosed herein can achieve good spatial resolution and good temporal resolution for detecting neural activity in a brain. The disclosed techniques for detection, localization, and/or suppression of epileptic seizure are non-invasive. Accordingly, the disclosed technology provides a non-invasive treatment that can suppress a seizure in time so that severe consequences of the seizure can be reduced, minimized, and/or eliminated. Epilepsy Epilepsy is a group of neurological disorders characterized by epileptic seizures. Epileptic seizures are episodes that can vary from brief and nearly undetectable periods to relatively long periods of vigorous shaking. These episodes can result in physical injuries, including occasionally broken bones. In epilepsy, seizures tend to recur and have no immediate underlying cause. The cause of most cases of epilepsy is unknown. Some cases occur as the result of brain injury, stroke, brain tumors, infections of the brain, and/or birth defects through a process known as epileptogenesis. Epileptic seizures are thought to be the result of excessive and abnormal neuronal activity in the cortex of the brain. The diagnosis typically involves ruling out other conditions that might cause similar symptoms, such as fainting, and determining if another cause of seizures is present, such as alcohol withdrawal or electrolyte problems. This may be partly done by imaging the brain and performing blood tests. Epilepsy can often be confirmed with an electroencephalogram (EEG). As of 2015, about 39 million people were thought to have epilepsy. Nearly 80% of cases of epilepsy occur in the developing world. In 2015, epilepsy resulted in 125,000 deaths up from 112,000 deaths in 1990. Epilepsy is more common in older people. In the developed world, the onset of new cases occurs most frequently in babies and the elderly. In the developing world, onset is more common in older children and young adults, due to differences in the frequency of the underlying causes. About 5-10% of people are estimated to have an unprovoked seizure by the age of 80, and the chance of experiencing a second seizure is thought to be between 40 and 50%. In many areas of the world, those with epilepsy either have restrictions placed on their ability to drive or are not permitted to drive until they are free of seizures for a specific length of time. The diagnosis of epilepsy is typically made based on observation of the seizure onset and the underlying cause. An EEG to look for abnormal patterns of brain waves and neuroimaging (CT scan and/or MRI) to look at the structure of the brain can also be part of the workup. While figuring out a specific epileptic syndrome is often attempted, it is not always possible. Video and EEG monitoring may be useful in difficult cases. An EEG can assist in showing brain activity suggestive of an increased risk of seizures. It is typically only recommended for those who are likely to have had an epileptic seizure on the basis of symptoms. In the diagnosis of epilepsy, electroencephalography may help distinguish the type of seizure or syndrome present. Diagnostic imaging by CT scan and MRI is typically recommended after a first non-febrile seizure to detect structural problems in and/or around the brain. MRI is generally a better imaging test except when bleeding is suspected, for which CT is more sensitive and more easily available. If someone attends the emergency room with a seizure but returns to normal quickly, imaging tests may be done at a later point. 
Wristbands and/or bracelets denoting their condition are occasionally worn by epileptics should they need medical assistance. Epilepsy can be treated with daily medication once a second seizure has occurred, while medication may be started after the first seizure in those at high risk for subsequent seizures. Diet, alternative medicine, and people's self-management of their condition (such as avoidance therapy consisting of minimizing or eliminating triggers) may be useful. In drug-resistant cases or cases experiencing severe side effects, different and harsher management options may be considered including the implantation of a neurostimulator or neurosurgery. Epilepsy surgery may be an option for people with focal seizures that remain a problem despite other treatments. These other treatments typically include at least a trial of two or three medications. The goal of surgery is total control of seizures and this may be achieved in about 60-70% of cases. Common procedures include cutting out the hippocampus via an anterior temporal lobe resection, removal of tumors, and removing parts of the neocortex. Some procedures such as a corpus callosotomy are attempted in an effort to decrease the number of seizures rather than cure the condition. Following surgery, medications may be slowly withdrawn in many cases. Neurostimulation may be another option in those who are not candidates for surgery. Three types of neurostimulation have been shown to be effective in those who do not respond to medications: vagus nerve stimulation, anterior thalamic stimulation, and closed-loop responsive stimulation. Epilepsy cannot usually be cured, unless surgery is performed. However, surgery can lead to unexpectedly harsh outcomes, such as loss of certain abilities, including speech and control over movements. In the developing world, 75% of people are thought to be either untreated or not appropriately treated for epilepsy. In Africa, it is estimated that 90% of people with epilepsy do not get treatment. This is partly related to appropriate medications not being available and/or being too expensive. People with epilepsy are at an increased risk of death. This increase is between 1.6 and 4.1-fold greater than that of the general population and is often related to: the underlying cause of the seizures, status epilepticus, suicide, trauma, and sudden unexpected death in epilepsy (SUDEP). Death from status epilepticus is primarily due to an underlying problem rather than missing doses of medications. The risk of suicide is between two and six times higher in those with epilepsy. The cause of this is unclear. The greatest increase in mortality from epilepsy is among the elderly. Those with epilepsy due to an unknown cause have little increased risk. In the developing world, many deaths are due to untreated epilepsy leading to falls or status epilepticus. Electroencephalography (EEG) is an electrophysiological monitoring method to record electrical activity of the brain. It is typically noninvasive, with the electrodes placed along the scalp, although invasive electrodes are sometimes used such as in electrocorticography. EEG measures voltage fluctuations resulting from ionic current within the neurons of the brain. EEG is most often used to diagnose epilepsy, which causes abnormalities in EEG readings. EEG has a poor spatial resolution. Often for proper diagnosis and/or detection of epilepsy both high temporal resolution and spatial resolution are desired.
Functional magnetic resonance imaging (MRI) and computed tomography (CT) can be used for detection of epileptic events. They provide good spatial resolution. However, they have poor temporal resolution. Moreover, they are expensive and not portable. Despite limited spatial resolution, EEG continues to be a valuable tool for research and diagnosis. It is one of the few mobile techniques available and offers millisecond-range temporal resolution which is not possible with CT, PET or MRI. Non-Invasive Epilepsy Treatment The disclosed technology relates to a dual mode in the sense it detects and localizes the epileptic event in the order of milliseconds before the neural activity should lead to a seizure, and then it uses this information to focus ultrasound waves and suppress neuronal firings. The technology disclosed herein relates to detection of neural activity in a brain using acoustic waves, localization of the neural activity in the brain, and suppression of the neural activity in the brain in response to the detection and localization of the neural activity. FIG.1is a flow diagram of a method10of detecting, localizing and suppressing neural activity in the brain according to an embodiment. The method10includes detecting neural activity in a brain using acoustic waves at block12. The neural activity can correspond to a seizure. More detail regarding detecting neural activity will be provided with reference toFIG.2. The method10includes localizing a source of the detected neural activity in the brain at block14. The localization can be performed with resolution on the order of millimeters. The localization can involve applying machine learning techniques. More detail regarding localizing neural activity will be provided with reference toFIG.3and other figures. The method10also includes applying ultrasound energy to a source of the detected neural activity in the brain at block16. This can suppress the neural activity. For example, this can suppress a seizure. More detail regarding applying ultrasound to the source of the neural activity in the brain will be provided with reference toFIG.4. Detection of Neural Activity with Acoustic Waves Swelling of a single nerve fiber associated with an action potential can have a displacement of about 5 nanometers (nm) to 10 nm and a swelling pressure about half a Pascal (Pa). The frequency of the generated displacement centers around a few kilohertz (KHz). A seizure is expected to result from multiple action potential firings. Thus, a seizure should have a larger displacement from a larger source and generate more pressure than for a single action potential firing. An acoustic wave from a seizure can be detected using acoustic transducers, accelerometers, optical sound sensors, and/or any other suitable sensors. These sensors can provide a non-electrical method of detecting neural activity in the brain. Examples of acoustic transducers include piezoelectric transducers, capacitive micromachined ultrasonic transducers (CMUTs), electromagnetic acoustic transducers (EMATs), and the like. FIG.2is a flow diagram of a method20of detecting neural activity in the brain according to an embodiment. The method20can be implemented by block12of the method10ofFIG.1, for example. The method20includes sensing pressure waves associated with neural activity of a brain at block22. Sensors positioned outside of a skull encasing the brain can detect the pressure waves. Accordingly, neural activity in the brain can be detected in a non-electrical and non-magnetic manner. 
The sensors can be positioned around the skull and arranged to take measurements at a plurality of locations.FIGS.5and6illustrate examples of such sensors. The sensors can be integrated with a helmet or cap, for example. In some instances, the sensors can be implanted between a skull and a scalp. The method20includes detecting a seizure based on outputs of the sensors at block24. The outputs of the sensors can be processed. A seizure can be detected in response to sensed pressure waves indicating a seizure. For instance, the sensed pressure waves can match a pattern associated with a seizure. Alternatively or additionally, the sensed pressure waves can have a magnitude that satisfies a threshold pressure indicative of a seizure. The outputs of the sensors can be processed and the seizure can be detected using a processor that is in communication with the sensors. Localization of Neural Activity The coupling into the skull can impart a unique texture to the acoustic waves. This can enable localization of the source of a seizure to within a few millimeters, even though the wavelength of the pressure wave is in the meter range. Moreover, each skull can have a different size, shape, etc. An acoustic wave can include compressional waves and/or shear waves. Compressional and/or shear waves can be utilized for localization. For instance, both compressional and shear waves can be used for localization. The feasibility of using both compressional and shear waves has been investigated via numerical simulations. At around 1 kHz, the compressional wave wavelength in the brain is on the order of a meter. Hence, the detection is done in the near-field. However, the skull provides richness and unique texture to the detected data, making it possible to utilize machine learning techniques in localizing the event. For localization using compressional waves, a machine learning technique in conjunction with existing EEG localization techniques can be used. For localization using shear waves, since the wavelength in the brain at 1 kHz is much smaller, on the order of a few millimeters, detection is done in the far-field. Accordingly, beam-forming techniques can be used. Localization of an epileptic event can be an inverse source problem where the locations of the events are inferred from recorded waveforms. Localization can include obtaining high resolution CT or MR images of a person. These images can then be processed to prepare the input to a computational model. The processing can include (a) segmentation and (b) registration and transformation. Segmentation can identify and separate each tissue type in the brain (such as skull, gray matter, white matter, skin, cerebral fluid, etc.). This can associate proper acoustic properties and boundary conditions in the model. Registration and transformation can properly mesh and map the patient-specific head geometry to the locations of the computational sources and receivers. This in-silico phantom can be fed into a forward/backward acoustic wave solver. The computational model can include finite elements, finite volumes, finite differences, boundary elements, spectral methods, the like, or any suitable combination thereof. The localization algorithm can be built upon this computational model. The localization algorithms can include compressional wave algorithms and/or shear wave algorithms.
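The detection criteria described for block24 (a pressure-magnitude threshold and/or a match to a stored seizure pattern) could be implemented along the following lines. This is a minimal sketch, not the specific detection algorithm of the system; the function name, the threshold values, and the use of normalized cross-correlation against a stored template are illustrative assumptions.

```python
import numpy as np

def detect_event(frames, pressure_threshold, template=None, corr_threshold=0.8):
    """Flag a candidate epileptic event from multi-sensor pressure data.

    frames: array of shape (n_sensors, n_samples), sensed pressure waveforms.
    pressure_threshold: magnitude (Pa) that any sensor must exceed.
    template: optional reference waveform associated with a seizure.
    """
    frames = np.asarray(frames, dtype=float)

    # Criterion 1: peak pressure on any sensor exceeds the threshold.
    magnitude_hit = np.max(np.abs(frames)) >= pressure_threshold

    # Criterion 2 (optional): best normalized correlation against a stored
    # seizure template exceeds corr_threshold on at least one sensor.
    pattern_hit = False
    if template is not None:
        t = (template - template.mean()) / (template.std() + 1e-12)
        for row in frames:
            r = (row - row.mean()) / (row.std() + 1e-12)
            corr = np.correlate(r, t, mode="valid") / len(t)
            if corr.size and corr.max() >= corr_threshold:
                pattern_hit = True
                break

    return magnitude_hit or pattern_hit
```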
With machine learning methods, a set of prior measurements at different sources with known locations can be used to estimate the location of an unknown source in any posterior measurement. The first step is called training and the second step is called estimation/localization. The training data are obtained using a person-specific computational model. The simulated waveforms at each receiver form a training set. These waveforms can be stacked together as columns of a matrix $\mathcal{M}$. Let $d(t)$ be an actual measurement. We can expand it as a linear combination of the bases (i.e., the training measurements). That is to write

$$d(t) = \sum_{i=1}^{N} \theta_i d_i(t) + \epsilon = \mathcal{M}\Theta + \epsilon, \qquad \Theta \in \mathbb{R}^N, \qquad \Theta = [\theta_1, \ldots, \theta_N]^\dagger,$$

where the $d_i(t)$'s are the training (computational) measurements and $\epsilon$ is the error of the projection. We attempt to minimize the error in the localization step through, for example, least squares minimization, which can be represented by:

$$\min_{\Theta \in \mathbb{R}^N} \; \frac{1}{2} \sum_{r} \mu_r \left\| \mathcal{M}_r \Theta - d_r \right\|^2_{L^2([0,T])}.$$

In this expression, the $\mu_r$'s are weighting parameters. $\mathcal{M}_r$ and $d_r$ are the data matrix and the measured signal at the $r$-th receiver. The $\theta_i$'s are the projection coefficients, which give an estimate of the likelihood of finding the actual source at location $i$. Possible variations include penalizing this optimization problem using a non-negativity constraint and/or a sparsity-promoting constraint (such as $\ell_1$ or $\ell_0$ penalty terms) to achieve a sparse and positive estimation of the projection coefficients. An example compressional wave algorithm will now be discussed. The example compressional wave algorithm includes (a) machine learning for localization and (b) localization by comparing computational signals with measured signals. Since the computational model and training steps can both be performed once and/or prior to occurrence of an epileptic event, the localization step has a low computational burden/complexity and can be implemented on the order of a few milliseconds. Since near field techniques can be used for the pressure waves, a learning method can be used for localization. An example process involves running a person-specific forward computational model for as many sources as adequate, where the sources are separated by the desired resolution (e.g., a millimeter) and cover a volume of interest. Localization by comparing the computational signals with the measured signals can be performed, for example, using one or more of the following methods: custom projection-based learning, deep and convolutional neural network models, or other machine learning techniques including classifiers (such as Support Vector Machine (SVM), softmax regression, generalized linear models, etc.) and regression techniques. In custom projection-based learning methods, the computational data are stacked together to form several data spaces for each receiver. The algorithm can attempt to find a projection of the measured signal onto the data spaces. The bases of the space with the maximum shadow (projection) of the measured signal can be identified as the source. In deep and convolutional neural network model methods, a deep or convolutional neural network is trained based on the computational sources and simulated waveforms. The measured signal in the presence of an epileptic event can then be fed into the already trained neural network model to predict the location of the source. Example shear wave algorithms will now be discussed. Since the wavelength for the shear waves is on the order of a millimeter, one or more of a variety of source imaging/localization techniques can be used.
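As a concrete illustration of the weighted least-squares step above, the per-receiver training matrices $\mathcal{M}_r$ can be stacked into a single linear system and solved for $\Theta$; the index of the largest coefficient is then taken as the most likely source location. This is only a sketch of one way to realize the formula (plain least squares via NumPy; a non-negativity or sparsity penalty, as mentioned above, could be added, for example with scipy.optimize.nnls); the function and variable names are illustrative.

```python
import numpy as np

def localize(training_sets, measurements, weights=None):
    """Estimate projection coefficients theta over N candidate source locations.

    training_sets: list over receivers r of arrays M_r with shape (T, N);
                   column i holds the simulated waveform at receiver r for
                   training source i.
    measurements:  list over receivers r of measured waveforms d_r, shape (T,).
    weights:       optional per-receiver weights mu_r.
    Returns theta (length N); argmax(theta) is the most likely source index.
    """
    n_rx = len(training_sets)
    if weights is None:
        weights = np.ones(n_rx)

    # Stack the weighted per-receiver systems into one least-squares problem:
    #   min_theta 0.5 * sum_r mu_r * || M_r theta - d_r ||^2
    A = np.vstack([np.sqrt(w) * np.asarray(M) for w, M in zip(weights, training_sets)])
    b = np.concatenate([np.sqrt(w) * np.asarray(d) for w, d in zip(weights, measurements)])

    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return theta

# Usage: theta = localize(Ms, ds); source_index = int(np.argmax(theta))
```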
Examples of such shear-wave source imaging/localization techniques include (1) tomographic reconstruction, (2) adjoint-state localization, (3) beam-forming (delay-sum and/or its variants), (4) sparse array imaging, (5) time-reversal techniques (including DORT and MUSIC), and (6) correlation-based methods. FIG.3is a flow diagram of a method30of localizing a source of neural activity in the brain according to an embodiment. Any suitable part of the method30can be implemented at block14of the method10ofFIG.1. Any suitable part of the method30can be performed using any suitable processor. Localization can be an inverse problem. Parameter functions can be non-linearly mapped to data. Accordingly, localizing can involve estimating the non-linear mapping. Machine learning techniques can be applied. The brain and skull can be treated as a black box and machine learning can be implemented experimentally. The method30includes modeling a brain and skull at block32. The shape and size of a skull can be determined, for example, using a CT scan. For various event locations, pressure waves at points on the skull can be calculated. The model can be developed, for example, by detecting pressure waves associated with neural events using sensors and comparing the source location indicated by the model to where the source of the event is determined to be using another method, such as a CT scan. Then the model can be refined. Pressure waves associated with neural activity in the brain, such as a seizure, can be detected at block34. This operation can correspond to block12of the method10ofFIG.1in certain instances. The data associated with the detected pressure waves can be compared with the model at block36. Then the source of the neural activity can be determined at block38. The source can be determined with resolution on the order of millimeters. Suppression of Neural Activity with Ultrasound Once the seizure is localized with millimeter resolution, ultrasonic transducers at relatively high frequencies (e.g., in a range from 0.5 MHz to 5 MHz) can be used to suppress the action potential firings. This can blunt the seizure. Ultrasound energy has been shown to have reversible inhibitory effects, through macroscopic temperature elevation in the brain. The relatively high frequency ultrasonic transducers can be integrated into the same helmet as the acoustic transducers or other sensors for detecting the seizure. Any suitable technology can be used for ultrasonic transducers for suppressing a seizure. For instance, piezoelectric ultrasonic transducers and/or capacitive micromachined ultrasonic transducers (CMUTs) can be used for suppressing neural activity in the brain. Ultrasound energy can be delivered to the brain using techniques for transcranial ultrasound delivery in certain instances. An array of ultrasonic wedge transducers arranged to efficiently deliver focused ultrasound energy into the brain with relatively minimal heating of the skull can be used. The ultrasonic wedge transducers can have a treatment envelope of an entire brain or substantially the entire brain. The ultrasonic wedge transducers can generate Lamb waves that then mode convert into longitudinal waves in the brain. Alternatively or additionally, a 2-dimensional array of ultrasonic transducers can generate Lamb waves by applying signals with the appropriate phases and at a proper frequency to favor the generation of a certain mode of Lamb waves for transcranial ultrasound delivery.
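For the focusing itself, the basic geometric calculation is the per-element time-of-flight delay to the localized source. The sketch below assumes a homogeneous sound speed and therefore ignores the skull aberration that the wedge-transducer/Lamb-wave approaches described above are intended to mitigate; the function name and the default speed of sound are illustrative values, not taken from the specification.

```python
import numpy as np

def focusing_delays(element_xyz, focus_xyz, c=1540.0):
    """Per-element transmit delays (seconds) to focus at a target point.

    element_xyz: (n_elements, 3) ultrasonic element positions in meters.
    focus_xyz:   (3,) localized source/target position in meters.
    c:           assumed speed of sound in soft tissue, m/s (homogeneous model;
                 a real system would correct for skull aberration).
    Elements farther from the focus fire earlier so all wavefronts arrive together.
    """
    element_xyz = np.asarray(element_xyz, float)
    tof = np.linalg.norm(element_xyz - np.asarray(focus_xyz, float), axis=1) / c
    return tof.max() - tof
```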
Any other suitable technique of delivering ultrasound can be implemented for applying ultrasound energy to a source of neural activity in the brain. Such techniques can include, for example, normal incidence techniques. Suppression of neural activity using ultrasound energy can be performed in response to detecting neural activity and localizing a location of the neural activity. Ultrasound energy can be applied to the location of the neural activity within a millisecond time frame of swelling of nerve fibers in the brain. Moreover, the ultrasound energy can be applied to a location that is determined with resolution on the order of a millimeter. As such, ultrasound suppression of neural activity disclosed herein can be a dynamic treatment applied at a specific location in response to an event. FIG.4is a flow diagram of a method40of suppressing neural activity in the brain according to an embodiment. The method40can be implemented by block16of the method10ofFIG.1, for example. The method40includes receiving information identifying a location of neural activity in the brain at block42. This information can be received in response to detection and/or localization of the neural activity. Ultrasound energy can be applied to the location in the brain at block44. The ultrasound energy can have a frequency in a range from 0.5 MHz to 5 MHz, for example. In cases where a skull is relatively thin, the ultrasound energy can be in a higher part of the range (e.g., 1 MHz to 5 MHz). This can achieve advantages in focusing and/or suppression. As an example, a child can have a relatively thinner skull than an adult. In some instances, the ultrasound energy can have a frequency in a range from 0.5 MHz to MHz. The ultrasound energy can be applied using transcranial ultrasound delivery techniques that result in a relatively small amount of heating of the skull. Example Systems for Detection, Localization, and Suppression of Seizures The forward problem has been modeled, showing through numerical analysis that pressure waves with varying characteristics are present at different locations over a skull. This difference in the pressure waves at various locations confirms that it is possible to localize an event even with such a large wavelength. Various types of sensors can be used to detect seizures. A swim cap-like helmet can be used in which a number of acoustic transducers and/or other suitable sensors, such as accelerometers, are included for detection and suppression of epileptic seizures. Example systems that can be used to perform any suitable features of detection, localization, and/or suppression of seizures disclosed herein will now be discussed. The detection, localization, and suppression of neural activity can be performed in milliseconds so that adverse effects of neural activity in the brain are mitigated and/or not realized. FIG.5illustrates an example system50for detecting and suppressing epileptic seizures according to an embodiment. As illustrated, the system50is positioned relative to a human head. The system50can be used to perform any suitable operations of any of the methods disclosed herein. The system50includes a helmet52and integrated acoustic transducer arrays54. The helmet52can be any suitable helmet. The helmet52can be soft or hard. In certain instances, the helmet52can be similar to an EEG cap. A headset with a plurality of sensors and/or acoustic transducers can alternatively be implemented.
The acoustic transducers54can be used to detect an epileptic event, to localize a location56of the epileptic event, and to apply focused ultrasound to the location56of the epileptic event. A schematic of an example 2-dimensional array of acoustic transducers54is also shown. A plurality of these arrays of acoustic transducers54can be integrated with the helmet52to provide a plurality of locations from which to generate data associated with pressure waves and/or from which to apply ultrasound to the location56. The acoustic transducers54are in communication with a processor. The processor can process signals from the acoustic transducers54. Accordingly, the processor can be used to detect and/or localize a seizure. The processor can provide inputs to cause the acoustic transducers54to apply focused ultrasound. The processor can be integrated with the helmet52and/or external to the helmet52. The processor can be in communication with the acoustic transducers54by wired connections and/or wirelessly. FIG.6illustrates an example system60for detecting and suppressing epileptic seizures according to an embodiment. The system60can be used to perform any suitable operations of any of the methods disclosed herein. The system60includes a helmet52, integrated sensors62, integrated ultrasonic transducers64, and a processor65. In the system60, the sensors62used to detect and/or localize an epileptic event are separate from the ultrasonic transducers64arranged to apply focused ultrasound to suppress the epileptic event. The sensors62and ultrasonic transducers64can operate at different frequencies. Accordingly, a separate implementation can allow both the sensors62and the ultrasonic transducers64to be configured for operating at respective desired frequencies. The sensors62can provide data associated with pressure waves at a plurality of locations on a skull. The sensors62can include acoustic transducers, accelerometers, other suitable pressure sensors, or any suitable combination thereof. As one example, the sensors62can include acoustic transducers with a resonant frequency in the kHz range. The ultrasonic transducers64can apply focused ultrasound from a plurality of locations around the skull. The ultrasonic transducers64can be configured for transcranial ultrasound delivery. For instance, the ultrasonic transducers64can include an array of ultrasonic wedge transducers. The ultrasonic transducers64can apply focused ultrasound energy having a frequency on the order of hundreds of kHz. The sensors62and the ultrasonic transducers64are in communication with the processor65. For example, the sensors62and the ultrasonic transducers64can be electrically connected to the processor65via wired connections as illustrated. The processor65can perform any suitable processing on the output of the sensors62to detect and/or localize the epileptic event. For instance, the processor65can be used to implement any suitable operations of detecting and/or localizing neural activity disclosed herein. The processor65can include any suitable circuitry arranged to perform such processing. The processor65can control the ultrasonic transducers64to apply focused ultrasound to a location of an epileptic event in response to detecting and localizing the epileptic event. The processor65and the ultrasonic transducers64can be used to implement any suitable operations related to applying focused ultrasound disclosed herein.
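The role of processor65 (or the processor ofFIG.5) can be summarized as a detect-localize-suppress loop. The sketch below illustrates that control flow only; the three callables stand in for the sensor read-out, the localization model, and the transducer driver, none of which are specified here, and the threshold and burst duration are assumed values.

```python
import numpy as np

def monitor_loop(read_frame, localize, fire_burst,
                 pressure_threshold=0.5, burst_seconds=0.1, max_iterations=None):
    """Detect -> localize -> suppress control loop for the helmet processor.

    read_frame(): returns an (n_sensors, n_samples) array of sensed pressure.
    localize(frames): returns an (x, y, z) location estimate in meters.
    fire_burst(location, duration_s): drives the treatment transducers.
    The three callables stand in for hardware/model interfaces not defined here.
    """
    i = 0
    while max_iterations is None or i < max_iterations:
        i += 1
        frames = np.asarray(read_frame(), float)
        if np.max(np.abs(frames)) < pressure_threshold:
            continue                       # no candidate epileptic event
        location = localize(frames)        # millimeter-scale estimate
        fire_burst(location, duration_s=burst_seconds)
```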
Although the processor65is integrated with the helmet52inFIG.6, the processor65can be partly or fully separate from the helmet52in some instances. Although the systems ofFIGS.5and6include helmets, any suitable principles and advantages disclosed herein can be implemented without a helmet. For instance, sensors62and/or acoustic transducers54and/or ultrasonic transducers64can be implanted between the scalp and the skull and perform similar or the same functionalities. Modeling and Simulation The skull and brain can be modeled for a specific person. For instance, skulls for young people can be relatively thinner than skulls for older people. Thinner skulls can be amenable to relatively higher frequency ultrasound than thicker skulls. Accordingly, the frequency of ultrasound used for suppressing neural activity can be adjusted for a particular skull. Moreover, one or more other parameters can alternatively or additionally be adjusted as suitable based on characteristics of a particular skull. The model can treat the skull as a circularly symmetric model, a 3-dimensional spherical model, or a full 3-dimensional model of a particular human head with anatomical features. A circularly symmetric configuration was first investigated due to its lower computational burden compared to a 3-dimensional model. How much texture/feature the presence of the skull can add to measurements at different points on the skull was investigated. The order of displacements that can be detected, and the sensitivity and bandwidth needed for detection of a seizure, were studied. The feasibility of detecting shear waves as well as compressional waves has also been investigated. A 3-dimensional model with a hemispherical skull was constructed. The models were utilized for developing localization algorithms. FIG.7Ais a representation of a portion of a circular skull with a radius of 10 cm.FIG.7Billustrates a representation of an acoustic wave with an input normal surface velocity of 1 m/s with a width of 2 mm and a height of 10 mm.FIG.7Cshows an input velocity profile used to simulate pressure generated along a central axis of the skull. The results on the y-axis inFIG.7Care normalized. The simulation is linear. Thus, the results can be scaled to a desired/actual velocity to estimate an output. The resulting pressure wave at a plurality of points on the skull can then be measured. In the simulations, the skull bone was modeled as an elastic solid that supports shear waves. FIGS.8A and8Billustrate 10 points on the skull where the pressure wave can be measured for different locations of a source of neural activity. While 10 points are used in example simulations, any suitable number of measurement points can be used. The source of neural activity simulates nerve fibers swelling. The measurement points can be evenly distributed as illustrated. The source of neural activity is located at different points along the central axis inFIGS.8A and8B. InFIG.8A, the source is located at a first location that is 2 mm from the center point along the central axis. InFIG.8B, the source is located at a second location that is 82 mm from the center point along the central axis. The skull around the brain tissue impacts the pressure wave measurements at the measurement points on the skull. FIGS.9A and9Bare graphs of radial displacement normal to the surface of the skull based on compressional waves. Graphs for the 10 points on the skull fromFIGS.8A and8Bare shown.
These simulations assume that the brain tissue does not support shear waves.FIG.9Aillustrates pressure sensor signals for the source located at 2 mm from the center point along the center axis as shown inFIG.8A.FIG.9Billustrates pressure sensor signals for the source located at 82 mm from the center point along the center axis as shown inFIG.8B. The location of the source can be identified by training a model to map the sensor outputs at the measurement points on the skull to the corresponding source location. FIG.10Ais a graph of radial displacement normal to the surface of the skull based on shear wave simulation. Using shear waves is a different localization technique than using compressional waves.FIGS.10B and10Care graphs that zoom in on different portions of the graph ofFIG.10A. These graphs correspond to a source of neural activity located at 2 mm from the center point along the center axis as shown inFIG.8Aand each curve is for a different one of the 10 measurement points shown inFIG.8A. At a time of about 20 milliseconds, a new wave packet is generated. A model can be constructed based on back propagation beamforming. The location of the source can be identified by training a model to map the sensor outputs at the measurement points on the skull to the corresponding source location. FIG.11Ais a graph of radial displacement normal to the surface of the skull based on shear wave simulation where the source is at a different location than forFIG.10A.FIGS.11B and11Care graphs that zoom in on different portions of the graph ofFIG.11A. These graphs correspond to a source of neural activity located at 82 mm from the center point along the center axis as shown inFIG.8Band each curve is for a different one of the 10 measurement points shown inFIG.8B. A model can be constructed based on data corresponding to sources at a variety of locations within the brain. FIG.12Ais an illustration corresponding to a 3-dimensional model of a hemispherical skull. There are 16 distributed measurement points on the illustrated hemispherical skull.FIG.12Bshows two source points associated with the hemispherical skull, in which a first point is at the origin and a second point is at 45°. An input normal surface velocity of 1 m/s with a radius of 1 mm was simulated at each source point. FIG.13Aillustrates normal displacements over time for various measurement points associated with the first source point ofFIG.12B.FIG.13Billustrates normal displacements over time for various measurement points associated with the second source point ofFIG.12B. The location of the source can be identified by training a model to map the sensor outputs at the measurement points on the skull to the corresponding source location. Accordingly, compressional waves and/or shear waves can be used for detection and localization of an epileptic event. Displacement data on the outer surface of the skull is normalized in the simulations herein. The model is linear and thus can be scaled based on the order of displacement of an epileptic firing. The simulations herein illustrate that the skull and its complicated nature provide enough features to distinguish signals at different locations, and thus provide the capability of localizing an epileptic event in the near-field. The resolution of localization can be bounded by the resolution of the training points used to develop a model.
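One way to realize this training step, offered here only as an illustration, is to run the person-specific forward model for each candidate source on a grid whose spacing equals the desired resolution, and then fit a classifier that maps the simulated skull-surface waveforms to the source index. The forward_model callable below stands in for the acoustic solver described above (it is not implemented here), and the use of an SVM, one of the classifier options named earlier, assumes scikit-learn is available.

```python
import numpy as np
from sklearn.svm import SVC  # SVM is one of the classifier options named above

def build_training_set(forward_model, source_grid):
    """Simulate skull-surface waveforms for every candidate source location.

    forward_model(src_xyz): hypothetical callable wrapping the forward acoustic
        solver; returns an (n_receivers, n_samples) array for a source at src_xyz.
    source_grid: (N, 3) candidate source locations spaced at the desired
        localization resolution (e.g. about 1-2 mm apart).
    """
    waveforms = np.stack([forward_model(s) for s in source_grid])  # (N, n_rx, T)
    labels = np.arange(len(source_grid))                           # source index per example
    return waveforms, labels

def train_location_classifier(waveforms, labels):
    """Fit a classifier mapping flattened sensor waveforms to a source index."""
    X = waveforms.reshape(len(waveforms), -1)
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf

# Localization of a new measurement (n_receivers, n_samples):
#   idx = clf.predict(measured.reshape(1, -1))[0]; location = source_grid[idx]
```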
The separation between sources of neural activity used in a training step of a localization algorithm can thus determine the resolution of reconstruction for the localization algorithm. FIGS.14A and14Billustrate results from localization simulations. In the simulations corresponding toFIGS.14A and14B, the model is assumed to be noise free. InFIGS.14A and14B, a horizontal dashed line shows the 3 decibel (dB) point, which was used to estimate resolution. The curve with circle data points shows the location of an unknown source. The dashed curve with x data points shows what a localization algorithm estimates as the likelihood of finding the unknown source at various locations, i.e., at the points used to train a model. The best case scenario for localization can be when the algorithm tries to localize a source at a location that has been used in training the model.FIG.14Acorresponds to localizing a source at a location used in training. In the simulations corresponding toFIGS.14A and14B, a model was trained with sources evenly distributed 2 mm apart.FIG.14Aindicates a resolution of about 1.16 mm. This can bound resolution for localizing with the training set for this simulation. For a better resolution, the model can be trained with a smaller source separation. FIG.14Bcorresponds to simulation results for localizing a source located at a point in between two locations of the training set. The localization algorithm can interpolate the location of the source in between the two points in the training set.FIG.14Bcorresponds to a lower resolution thanFIG.14A.FIG.14Bindicates a resolution of about 3.16 mm. Applications and Conclusion The disclosed innovations are not specific to a particular application or technology for implementation. Other applications include the use of the transducers to detect the character of normal brain mechanical impulses in daily life conditions. The premise is that different emotions or states of mind would result in different characteristics of the mechanical brain waves. One would then be able to localize the origin of certain feelings, like anger, to certain locations in the brain and then suppress such feelings before they result in action. This can then be extended to enhancing good feelings in the same fashion. Because every person's skull can be different, a CT scan can be done on each individual in order to tailor-make an algorithm for detection and suppression of neural activity. Other imaging modalities are envisaged to provide the skull shape and outline for algorithm development. Some of the embodiments described above have provided examples in connection with epileptic seizures in the brain. However, the principles and advantages of the embodiments can be used for any other suitable devices, systems, apparatuses, and/or methods that could benefit from such principles and advantages. The various features and processes described herein may be implemented independently of one another, or may be combined in various ways. All possible combinations and sub-combinations are intended to fall within the scope of this disclosure. In addition, certain methods or process blocks may be omitted in some implementations. The methods and processes disclosed herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in any other sequences that are appropriate.
For example, described blocks or states may be performed in an order other than that specifically disclosed, or multiple blocks or states may be combined in a single block or state. The example blocks or states may be performed in serial, in parallel, or in some other manner as appropriate. Blocks or states may be added to or removed from the disclosed example embodiments as suitable. The example systems and components described herein may be configured differently than described. For example, elements may be added to, removed from, or rearranged compared to the disclosed example embodiments. While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the disclosure. Indeed, the novel devices, systems, apparatus, and methods described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the disclosure. For example, while blocks are presented in a given arrangement, alternative embodiments may perform similar functionalities with different components and/or circuit topologies, and some blocks may be deleted, moved, added, subdivided, combined, and/or modified. Each of these blocks may be implemented in a variety of different ways. Any suitable elements and acts of the various embodiments described above can be combined to provide further embodiments.
DETAILED DESCRIPTION Throughout the following description, specific details are set forth in order to provide a more thorough understanding of the invention. However, the invention may be practiced without these particulars. In other instances, well known elements have not been shown or described in detail to avoid unnecessarily obscuring the invention. Accordingly, the specification and drawings are to be regarded in an illustrative, rather than a restrictive sense. Some aspects of this invention provide systems and methods which use ultrasound to produce one or more ultrasound images of a part of a patient's head. These one or more images may be registered with previously acquired images by identifying one or more structures that are present in both the ultrasound images and the previously acquired images (“common structures”). The registered images may then be used to locate one or more target regions relative to transducers used to acquire the ultrasound image(s). The target regions may have known locations relative to the common structures. Ultrasound may then be delivered to the target region(s) to open the blood-brain barrier to allow drugs to enter brain tissue. Delivery of the ultrasound to the target region(s) may be coordinated with injection of one or more drugs into the patient's circulatory system. The ultrasound energy may be focused onto the target regions such that the blood-brain barrier is opened selectively in the target regions. Methods and apparatus as described herein may optionally and beneficially apply two or more ultrasound transducers. One or more of the transducers is operable for acquiring images of structures in the patient's head including parts of the brain (“imaging transducers”). One or more of the transducers is operable to deliver ultrasonic energy to promote the selective opening of the blood-brain barrier in target regions (“treatment transducers”). Imaging the brain with ultrasound or transmitting ultrasonic energy into the brain is a challenge because the skull attenuates ultrasound energy. Certain embodiments of the present invention exploit the fact that the skull has some areas where the attenuation of ultrasound is lower than in other areas of the skull. These areas include, but are not limited to, the temples of the head near the ears and behind the eyes and in the back of the head. The skull in these areas tends to be thinner compared to the rest of the skull. Thus, ultrasonic energy can pass more easily through these areas compared to other areas of the skull. The areas where the attenuation of ultrasound is lower compared to the rest of the skull are referred to herein as “low attenuation acoustic windows”. Ultrasound energy may be transmitted into the brain via low attenuation acoustic windows and/or echo signals may be received from structures within the brain via low attenuation acoustic windows more easily than via other areas of the skull. Ultrasound imaging relies on the transmission of ultrasound energy into the patient's body and subsequently detecting the energy that is reflected by internal tissue. Regions in the head that can be imaged by ultrasound are limited, as there are few low attenuation acoustic windows in the skull. Even through these low attenuation acoustic windows, the structures that can be effectively imaged are limited. Diagnostic imaging is often conducted when diagnosing a patient for brain tumors.
As such, brain images acquired using other modalities, such as MRI or computed tomography (CT), are available for many patients. MRI and CT do not suffer from the same penetration issues through the skull as ultrasound imaging. MRI scans of the brain are therefore able to image the entire brain, including the same structure(s) that may be imaged with an ultrasound transducer through a low attenuation acoustic window. Various implementations of the present invention apply the realization that the same structure(s) may be imaged with two different imaging modalities. Certain structures may be imaged by ultrasound imaging performed through a low attenuation acoustic window such as one or both temples. These structures may include, for example, structures of the brain such as the circle of Willis, ventricles, and the corpus callosum and/or other structures in known positions relative to the patient's brain (e.g. dental implants, surgical screws, orthopaedic hardware affixed to the patient's skull or the like). All or parts of these same structures may be visible in a pre-operation MRI or CT scan. In some embodiments of the invention, ultrasound images obtained by one or more imaging ultrasound transducers via low attenuation acoustic windows are “registered” to MRI or CT images obtained of the same patient. Registration may be performed by processing the images of common structure(s) imaged in both imaging modalities. Regions where ultrasound energy is to be delivered to facilitate treatment (“target regions”) generally have known locations in pre-op images (e.g. MRI and/or CT scans). The location of a target region may be found in a coordinate system of the presently obtained ultrasound image through the process of registration. It is not necessary for the target region(s) to be in the field of view of the ultrasound image. Ultrasound energy may then be delivered to the target region(s) using one or more treatment ultrasound transducers or transducer elements that have known position(s) and orientation(s) relative to the ultrasound image. The known position(s) and orientation(s) of the treatment transducer(s) and the imaging transducer(s) may be maintained by any one or more of: mounting the treatment transducers and imaging transducers used to obtain the ultrasound image to a common fixed support structure; mounting the treatment transducers and the imaging transducers to a support structure (e.g. an articulated arm or other manipulator) having one or more joints that are movable and tracking positions of the joint(s); manually positioning the treatment transducers and imaging transducers by an operator; and using a position tracking system (e.g. an electromagnetic position tracker) to monitor relative positions and orientations of the treatment transducer(s) and the imaging transducer(s). Example implementations which exploit these approaches are described herein. FIG.2shows an example system200that may be used for imaging at least a portion of a patient P's brain and subsequently delivering ultrasound to selectively open patient P's blood-brain barrier. System200does not require use of an MRI scanner during treatment (the “intra-op period”). System200includes an ultrasound system210. Ultrasound system210is coupled to ultrasound transducer assembly220via transducer cable225. Ultrasound transducer assembly220is shown to be in the shape of a helmet having a concave opening dimensioned to receive at least the top part of a patient's head. Other configurations are not excluded. 
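The registration described above yields a transformation from pre-op image coordinates to the ultrasound/transducer frame. Under the assumption that matched landmark points on the common structures (e.g. points on the circle of Willis) can be identified in both modalities, one possible way to compute a rigid version of that transformation is a point-based Kabsch/Procrustes fit, sketched below; this is an illustration, not necessarily the registration algorithm used by the described system.

```python
import numpy as np

def rigid_registration(points_preop, points_us):
    """Estimate a rigid transform mapping pre-op image coordinates to
    ultrasound-frame coordinates from matched landmarks on common structures.

    points_preop, points_us: (N, 3) corresponding points (N >= 3).
    Returns (R, t) with x_us ~= R @ x_preop + t.
    """
    P = np.asarray(points_preop, float)
    Q = np.asarray(points_us, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)

    # Kabsch/Procrustes: SVD of the cross-covariance of the centred point sets.
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# A target region at x_preop in the pre-op scan maps to R @ x_preop + t
# in the ultrasound/transducer frame.
```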
Ultrasound transducer assembly220may comprise one or more imaging transducers and/or treatment transducer elements. The illustrated system200includes an electronically controlled intravenous (IV) drug delivery system230. In this non-limiting example, IV system230is shown to include one IV bag235and an electronic valve or valve system240. In other embodiments, IV system230may comprise multiple bags235. Electronic valve240may be used to control the rate of flow of the contents of the one or more bags in IV system230. Electronic valve240may be controlled by ultrasound system210which may send control signals via control line245. Signal interconnect module250may send and receive signals from external and peripheral devices, such as electronic valve240, and communicate with a peripherals and I/O module within ultrasound system210as will be discussed later. Signal interconnect module250may also include the ability to connect and disconnect cables that connect to external devices and peripherals. Overview FIG.3is a flowchart showing a non-limiting example method300that may be performed to determine the position(s) and orientation(s) for one or more treatment transducers that may be configured to deliver ultrasound in conjunction with treating a patient. At step305, pre-op CT or MRI images are imported into a system200. System200may be applied for imaging at least a portion of a patient brain and subsequently delivering ultrasound to selectively open the patient's blood-brain barrier. In the illustrated embodiment ultrasound system210includes a controller that provides overall control over system200and the pre-op images are provided in or imported into a data store accessible by ultrasound system210. Step310performs ultrasound imaging to obtain images of certain target structures (e.g. circle of Willis). The imaging may be performed in real-time and monitored by a human operator. This may beneficially be done by placing imaging transducer(s) to obtain the ultrasound images through low attenuation acoustic windows in the patient's skull. Step310also identifies common structures that may be used for registration. Step320registers the ultrasound image and the pre-op image by comparing the position and orientation of the common structure(s) in the ultrasound image to the position and orientation of the same common structure visible in the pre-op images. The registration yields a transformation by which coordinates of points in the pre-op images may be transformed to yield coordinates of the same points in a frame of reference of the ultrasound image or vice versa. At step325, target regions where ultrasound is to be delivered are identified. Target regions may be selected in the pre-op image. Step325may comprise, for example, identifying a tumor or other diseased area requiring treatment, or an area of the blood-brain barrier to be opened. This selection may occur at any time after the pre-op image(s) are obtained. In some cases one or more target regions are identified outside of system200, for example using treatment planning software. In such cases data identifying the target region(s) may be imported into system200in step305. The current desired coordinates to which ultrasound energy should be delivered by one or more treatment transducers will be known once registration has occurred and the target region is selected. The position(s) and orientation(s) of the one or more treatment transducers may then be calculated at step330. 
Step330may comprise, for example, determining desired coordinates at which one or more treatment transducers should be placed and/or selecting, from among a plurality of transducers or transducer elements, the transducers and/or transducer elements to be used to deliver ultrasound energy to a specific target region. Method300may be performed, for example, with any of the ultrasound system configurations discussed below. Ultrasound System Apparatus FIG.4shows an example ultrasound transducer assembly220which may be placed over patient P's head. Ultrasound transducer assembly220includes a support structure405that holds and positions plural transducer elements in desired locations on the patient's head. The portion of patient P's head410that is behind ultrasound transducer assembly220is shown in bold dashed lines. Ultrasound transducer assembly220comprises transducer elements415. Optionally, elements415include imaging elements415A that are adapted for imaging, and treatment elements415B that are adapted for delivering ultrasound to facilitate treatment. Groups of elements415, or subsets, may be configured to perform a common function such as imaging the brain or delivering ultrasound to facilitate treatment (e.g. by opening the blood-brain barrier). InFIG.4, some imaging elements415A are included in each of subsets420A and420B. Some treatment elements415B are included in each of subsets425A and425B. Elements415A and415B may differ from one another in various ways including one or more of:location (e.g. imaging elements415A may be clustered or concentrated near one or more low attenuation acoustic windows while elements415B may be more widely distributed—in cases where the approximate location of one or more target regions is known in advance transducer assembly220may optionally be customized by concentrating elements415B in areas suitable for delivering ultrasound energy to the target region(s));connection to receiving circuits (e.g. imaging elements415A are connected to receiving circuits which may detect ultrasound echoes while treatment elements415B are optionally not connected to receive circuits);connection to different transmitting circuits (e.g. treatment elements and imaging elements may be driven by differently designed driving circuits. The treatment elements may, for example, be driven by higher-power driving circuits optimized to operate at lower frequencies than the imaging elements);power (e.g. treatment elements415B may be constructed to generate higher power ultrasound than imaging elements415A);optimum operating frequency (e.g. treatment elements415B may operate most efficiently at lower frequencies than imaging elements415A);size (e.g. treatment elements415B may be larger and/or more widely spaced apart than imaging elements415A);configuration (e.g. treatment elements415B may include acoustic lenses that focus at different depth(s) than imaging elements415A). Imaging elements415A may be located at positions in transducer assembly220that are adjacent to low attenuation acoustic windows when ultrasound transducer assembly220is worn by patient P. This is illustrated by subsets420A and420B inFIG.4being located at the temples. Subsets of transducer elements that are configured to produce images may generally be referred to as “imaging subsets”. Subsets of transducer elements that are used to deliver ultrasound energy to facilitate treatment do not have to be situated near a low attenuation acoustic window.
As such, “treatment subsets” may be selected to include those treatment elements at locations from which it is optimal to deliver ultrasound to a target region. Treatment subsets will often be at different locations from imaging subsets. This is illustrated by subsets425A and425B. In some embodiments transducer220includes a large number of treatment elements located at a wide range of positions from which ultrasound energy may be delivered to a wide range of target regions. From this large number of treatment elements a subset may be chosen to deliver ultrasound energy to specific target region(s). Subsets of transducer elements include enough elements to accomplish their task, be it imaging or delivering ultrasound energy that facilitates treatment. The organization of the elements within a subset may also be a factor for effective operation. InFIG.4, subsets420A,420B,425A, and425B are pictured to be circular, with the individual elements organized in a 2D array. However, element groupings within a subset need not be circular in nature. They may be of any appropriate shape such as a curved-linear format. In an example embodiment, the preferred size of a subset is in the range of 2 to 3 cm in diameter. Individual transducer elements415may have a range of shapes. Each element may have a circular cross section, although other shapes such as rectangular shapes are not excluded. Different subsets may differ from one another in various ways including, the number of transducer elements included in the subset, the shape and size of the area over which the included transducer elements are distributed and the way in which the transducer elements are operated to perform a desired function (e.g. imaging or delivering ultrasound to facilitate treatment). Ultrasound Control Subsystems FIG.5illustrates the operation of a control subsystem of an ultrasound system210operable to image and deliver treatment to a patient's brain. Control subsystems as described herein may be applied to ultrasound systems having other configurations and/or supplied as stand-alone components. Block505comprises a data store that may contain images from other modalities (e.g. MRI or CT scans), as well as other data. Block505is in communication with control and computation block510, which may include one or more modules. The modules within control and computation block510may be employed either individually, or in any combination or sub-combination with each other. Module510A generates control signals that affect operation of transducer elements that will be used for a transmit operation (i.e. where an element sends ultrasonic energy into the brain). For example, block510A may generate control signals that determine one or more of:what transducer elements415or subset comprising transducer elements415will be used for a transmit operation;what waveform(s) will be transmitted by transducer elements415;what transmit delays will be applied to individual transducer elements415;at what amplitude(s) will individual transducer elements415be driven;what transmit apodization function will be applied to transducer elements415;at what time(s) will transducer elements415be operated to transmit ultrasound;at what frequency(ies) will transducer elements415be driven;etc. Module510A may include or have access to a data structure that indicates the locations of transducer elements415. 
This data structure may be used in determining what transducer elements415to use for a particular transmit operation and/or to calculate transmit delays, for example. Module510B generates control signals that affect operation of transducer elements that will be used for a receive operation (i.e. where an element receives echo signals from structures within the brain). Control signals from module510B may determine, for example, one or more of:what transducer elements415are used to receive;beamforming parameters;receive apodization function;receive gain;receive depth;image processing to be applied;etc. Storage module510C includes a data store that may be used to store various information including, but not limited to, digitized radio-frequency (RF) data received at the elements that are configured to receive, images acquired from other modalities and transferred to ultrasound system210, and intermediate or final results of computations performed within control and computation block510. Module510D may perform computations related to image formation and image processing. Module510D may apply any suitable technology for ultrasound image formation. In a non-limiting example of a computation related to image formation, RF data received from some or all imaging elements415A may be summed in module510D based on receive delays computed by module510B. Images of the anatomy may be formed based on these sums. In a non-limiting example of a computation related to image processing, after the images are formed, they may be processed in various ways including, but not limited to, filtering, log compression, mapping to post-processing maps, etc. Module510E may perform various computations such as, but not limited to, computations with respect to registration of intra-op ultrasound images to images from other imaging modalities. The results of these computations may be applied to select and/or position transducers operable to transmit the appropriate energy. Module510F may generate and provide control signals for various processes such as, but not limited to, real-time imaging, and the coordination of timing of an intravenous injection of drug or other compound into a patient with the timing of transmission of ultrasound energy to a patient. Peripheral controls and I/O module510G may generate control signals that may be sent to external devices and peripherals that may be used in conjunction with ultrasound system210. These devices and peripherals may include, without limitation, an intravenous drug delivery system, transducer positioning systems, transducer position detecting systems, etc. I/O module510G may also accept inputs from external devices and peripherals and provide them to other modules within control and computation block510. Control and computation block510may also interface to user interface module515. User interface module515may allow the use of one or more user interface devices such as, but not limited to, a keyboard, mouse, touch screen, trackball, touch pad, gesture-based interface, voice command interface, discrete switches or controls, and a display520. Through the user interface, authorized users may operate ultrasound system210. These operations may include the ability to choose the target region and to choose the subsets of elements415B to be used to generate ultrasound for treatment (if a manual control option is selected). Other operations may also be possible.
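Whether the subsets of elements415B are chosen manually through the user interface or automatically, the underlying geometric task can be framed as picking the treatment elements best placed to reach the target region. The sketch below is one simple heuristic (an angular gate on each element's beam axis followed by a distance sort); the function name, parameters, and criterion are illustrative assumptions rather than the system's actual selection logic.

```python
import numpy as np

def select_treatment_subset(element_xyz, element_normals, target_xyz,
                            max_angle_deg=30.0, n_elements=32):
    """Pick treatment elements best oriented toward a target region.

    element_xyz:     (N, 3) positions of candidate treatment elements.
    element_normals: (N, 3) unit vectors along each element's beam axis.
    target_xyz:      (3,) target region centre in the same frame.
    Keeps elements whose beam axis is within max_angle_deg of the line to the
    target, then returns the indices of the n_elements closest ones.
    """
    element_xyz = np.asarray(element_xyz, float)
    to_target = np.asarray(target_xyz, float) - element_xyz
    dist = np.linalg.norm(to_target, axis=1)
    to_target_unit = to_target / dist[:, None]

    cos_angle = np.sum(np.asarray(element_normals, float) * to_target_unit, axis=1)
    ok = cos_angle >= np.cos(np.radians(max_angle_deg))

    candidates = np.flatnonzero(ok)
    order = np.argsort(dist[candidates])
    return candidates[order[:n_elements]]
```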
In some embodiments selection of the target region is done by allowing a user to navigate a 3D rendered image using one or more of the user interface devices. Display520may be used to display various information including, but not limited to, the obtained ultrasound images, images from other modalities, merged images, patient information, and instructions or options for an authorized user. For example, after having determined the subset(s) of transducer elements415that are to transmit ultrasonic energy, parameters may be set for a transmit operation. These parameters may include, but are not limited to, length of delays and transmission frequency. These parameters may be selected manually by the operator, or automatically based on a set of parameters, as discussed elsewhere herein. These parameters may be used and applied to various modules of ultrasound system210. During a transmit operation, system controls module510F may send the appropriate control signals based on the parameters to transmit (TX) Amplifier430. TX Amplifier430may then apply the appropriate signal to elements of ultrasound transducer assembly220either directly through multiplexer (MUX)435or through transmit/receive (TX/RX) switch440and then through MUX435. InFIG.5, TX amplifier430is shown to be coupled to MUX435by dotted line525and coupled to TX/RX switch by dashed line530. This configuration enables the ultrasound system to drive transmit only elements in addition to the elements that are connected to both transmit and receive. The operation of different element types in this manner provides for certain advantages that are explained elsewhere herein. In contrast, in conventional ultrasound imaging systems, a TX amplifier is typically only coupled to a TX/RX switch (with subsequent connections to the elements possibly through a MUX) and only supports elements that can both transmit and receive. Thus, as dotted line525illustrates, TX amplifier430couples to MUX435which then couples via connection225C to an example of a transmit only element415B. Simultaneously, TX amplifier430may couple to TX/RX switch440, which then couples through MUX435to connect through connection225A to an example transmit and receive element415A. Connection525from TX amplifier430to MUX435as shown inFIG.5is not present in conventional ultrasound imaging systems. TX/RX switch440may serve to protect electronics in the receive path from relatively high voltages that may be present in the transmit path. Protection for elements that transmit and receive may be required as the electronics that transmit and the electronics that receive are electrically connected to the same physical transducer element. In the example embodiment shown inFIG.5, the TX/RX switch440is not needed for transmit only element415B as element415B is not used for the receive operation. In some embodiments, MUX435may be provided where the number of elements in ultrasound transducer assembly220is larger than the number of electronic channels within ultrasound system210. With MUX435, various subsets of elements in ultrasound transducer assembly220may be operated with the appropriate parameters even where there are fewer electronic channels than there are elements. It is anticipated that in practice, when utilizing these systems and methods, ultrasound transducer assembly220would have more elements of each kind (e.g. imaging elements415A and treatment elements415B) than channels capable of operating each kind.
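When the element count exceeds the channel count, one simple way to organize the routing through MUX435is to split the selected subset into channel-sized groups that are operated on successive transmit events. The specification does not describe the multiplexing scheme at this level of detail, so the sketch below is only an illustrative assumption.

```python
def assign_channels(selected_elements, n_channels):
    """Split a selected element subset into groups that fit the channel count.

    selected_elements: iterable of element identifiers chosen for an operation.
    n_channels:        number of electronic channels available in the system.
    Returns a list of groups; the MUX routes one group at a time (e.g. on
    successive transmit events) so subsets larger than the channel count can
    still be operated.
    """
    elements = list(selected_elements)
    return [elements[i:i + n_channels] for i in range(0, len(elements), n_channels)]

# Example: assign_channels(range(10), 4) -> [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```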
For a receive operation, MUX435connects the elements that transmit and receive to the receive side electronics. The receive signal path is shown by the arrows going right starting from example transmit and receive element415A. In this example embodiment, the signal passes through MUX435, TX/RX switch440, low noise amplifier (LNA)445, time-gain compensator (TGC)450, and analog to digital converter (ADC)455. Signals digitized by ADC455may then be stored in storage module510C for further processing by control and computation block510. FIG.4illustrates the configuration of the general control subsystem ofFIG.5for different transducer elements. Imaging element415A is coupled to electronics429A that enable transmit and receive operations as discussed above. TX amplifier430A is coupled to TX/RX switch440A which is then coupled to MUX435A before coupling to element415A through cable225A, as shown by the dashed arrows. For receive operations, the ultrasound signal passes through MUX435A, TX/RX switch440A, LNA445A, TGC450A, and ADC455A, as shown by solid arrows. The operation of both transmit and receive functions allows ultrasound imaging to be performed by ultrasound system210. Electronics429B is configured for another imaging element415A opposite to the set described above. Electronics429B may be the same as or similar to electronics429A. The components included in electronics429B are identified by references that include the suffix ‘B’. Element415B is an example of an element that is configured to deliver ultrasound energy to facilitate treatment. In this example, element415B is configured to only deliver ultrasound to target regions. Element415B does not require electronics to enable it to receive echo signals. Therefore, element415B is shown to be coupled to electronic components429C that only enable transmit operations. Here, TX amplifier430C is coupled directly to MUX435C which is then coupled via cable225C (illustrated by dotted arrows) to element415B. A set of electronics429D similar to electronics429C is configured for another treatment element415B. The components included in electronics429D are identified by references that include the suffix ‘D’. It may be advantageous to form subsets of elements configured so that all elements in a subset perform the same operation, either transmit only or transmit and receive. InFIG.4, all elements within subsets420A and420B may be configured to transmit and receive, while all other elements, such as those in subsets425A and425B, may be configured to transmit only. An advantage that is offered by this approach is that certain subsets may be configured to optimally transmit and receive to form images, while other subsets may be configured to optimally only transmit to promote opening the blood-brain barrier. In some embodiments, the elements in the various subsets may be operated with different parameters such as, but not limited to, transmit frequency and transmit bandwidth. Elements in these various subsets may be designed differently and behave differently. Relatively higher ultrasound frequencies have been shown to experience lower amounts of attenuation and be effective for imaging certain portions of the brain. As such, in a non-limiting example, subsets of elements that form images may have a higher frequency response (e.g. centered at 2 MHz). In contrast, relatively lower ultrasound frequencies applied to the brain can selectively increase the permeability of the blood-brain barrier.
As such, in a non-limiting example, subsets of elements used to deliver ultrasound energy to facilitate treatment may have a lower frequency response (e.g. centered at 0.5 MHz). In some implementations treatment elements are driven at frequencies in the range of 0.25 MHz to 5 MHz. In some implementations imaging transducer elements are driven at frequencies in a frequency range of about 1.75 MHz to 10 MHz. In the example shown inFIG.4and in certain other example embodiments of this invention, the operation of one or more transducer elements that can transmit and receive, along with the operation of one or more transducer elements that can only transmit, is advantageous. Such configurations allow more transducer elements to be supported by fewer electronic circuits. As an example, transmit only elements require less electronics. Although it is desirable to have transmit only elements along with elements that transmit and receive, the systems and methods described herein do not preclude other configurations. In some embodiments, the same element can be operated in a “transmit only” mode, with one set of parameters when delivering ultrasound energy to facilitate treatment as well as in a “transmit and receive” mode, with another set of parameters when imaging. FIGS.6A and6Billustrate an example embodiment in which one or more elements are coupled to one or more sensors such as, but not limited to, angle sensors, pressure sensors, thermal sensors, proximity sensors, electroencephalogram (EEG) sensors, and slippage sensors. Such sensors may be provided optionally and beneficially in ultrasound transducer assembly220. InFIG.6A, element group600comprises two ultrasonic elements415, which are mounted on a common mechanical sub-structure605. The orientation of mechanical substructure605may be controlled by any of various mechanisms. In this example embodiment, the orientation of mechanical sub-structure605is controlled by linear actuators comprising motors610(e.g. stepper motors, servo motors) or other linear actuators, only one of which is labelled for clarity. Each motor610may be coupled to a lead screw615, only one of which is labelled, whose position may be controlled by a corresponding motor610. Thus, by controlling the position of each lead screw615independently, the orientation of elements415may be controlled. Other implementations may use other types of linear actuators. Sensors620and625are also shown. In the illustrated embodiment sensors620and625are embedded within cover630, which may allow ultrasonic energy to pass through it. Cover630may also serve to separate the elements and skin, protecting each one from the other. In some embodiments, one or more of sensors620and625may be used to measure and report the orientation of the group of elements back to the ultrasound system. In these and other embodiments other sensed parameters may optionally be reported back to the ultrasound system. In this example embodiment, two sensors are shown, but more or fewer sensors may be provided. FIG.6Ashows that the four motors610(e.g. stepper motors, servo motors or other rotary actuators) are coupled to mechanical structure635. Mechanical structure635may provide the structure of ultrasound transducer assembly220.FIG.6Bshows a case where three instances of element group600are coupled to mechanical structure635, which forms or is a part of ultrasound transducer assembly220. Electrical connections to the elements and the sensors are not illustrated in the figures for the sake of clarity. 
The spatial position of element group600within the structure of ultrasound transducer assembly220may be known to the ultrasound system from outputs of sensors attached to each element group600and/or from known locations of the transducer elements included in element group600. The capability to measure and control the orientation of elements within ultrasound transducer assembly220is advantageous as it facilitates orienting elements in desired configurations, such as normal or nearly normal to the surface of the skull. This orientation is known to reduce or remove the possibility of mode conversion between longitudinal and shear waves at the surface of the skull. In some embodiments, the orientation of elements may be adjusted automatically. A pre-op image may be used to assess the angularity of the skull (e.g. by determining a tangent plane) at any location, and through the process of registration, as discussed herein the angularity of any section of the skull may be known. An element may thus be automatically adjusted to be oriented at a desired angle with respect to the skull using this knowledge. This capability is also advantageous because it permits ultrasound transducer assembly220to accommodate differently shaped heads. In some embodiments, motor610may advance or retract one or more lead screws615to position one or more ultrasound elements415or element groups600such that ultrasound transducer assembly220conforms to the shape of a patient's head. It will be appreciated that motor610can be any type of linear actuator operable to advance or retract element(s)415, such as a stepper or servo motor. Robotically Positioned Transducers FIG.7Ashows an ultrasound system700according to an example embodiment which features a robotic manipulator (in this example provided by electromechanical arms). In system700, ultrasonic elements are in element housings705A,705B and705C (any individual element housing herein referred to as element housing705, or collectively as element housings705). Element housing705and the elements contained within it may collectively be called a transducer710(e.g.,710A,710B,710C). Each transducer710can include one or more transducer elements. The elements can be arranged in any of various configurations such as, but not limited to, linear, in a 2D array format, randomly distributed, in a plane, in a 1D convex or concave shape or in a 2D convex or concave shape. The elements may be built on a structure that makes it possible to attain flexible shapes of the surface of the elements. One benefit of such a capability is that it may be possible to match or closely match the surface of the skull over the region of a footprint of the housing that is in contact with the skull. This capability may be achieved, for example, with mechanisms as shown inFIGS.6A and6B. Transducer710may comprise one or more element groups600, and may implement methods for positioning the elements as described above. Elements supported by element housing705may all be capable of transmitting and receiving, or may only be connected for transmitting. It is also possible for both types of elements to be present within element housing705. Each transducer710may be coupled to an electromechanical arm715or other movable support that is capable of positioning the corresponding housing705in one or more degrees-of-freedom (DOFs). In some embodiments, arms715are capable of positioning the corresponding transducers710in 6 DOFs. 
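Because arms715may position transducers710in 6 DOFs, and because 6-DOF poses of the form P(X,Y,Z,α,β,ϕ) are used later for registration, it may help to show one conventional way such a pose can be represented in software. The sketch below builds a homogeneous transform from a pose; the Z-Y-X rotation order and the numeric values are assumptions chosen only for illustration.

    import numpy as np

    def pose_to_matrix(x, y, z, roll, pitch, yaw):
        """Build a 4x4 homogeneous transform from a 6-DOF pose.
        Angles are in radians; a Z-Y-X (yaw-pitch-roll) rotation order
        is assumed here, which is one common convention."""
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        T = np.eye(4)
        T[:3, :3] = Rz @ Ry @ Rx
        T[:3, 3] = [x, y, z]
        return T

    # Express a point known in a transducer's local frame in the common frame.
    T_common_from_transducer = pose_to_matrix(0.10, 0.05, 0.02, 0.0, 0.1, 0.0)
    p_local = np.array([0.0, 0.0, 0.06, 1.0])   # e.g. 6 cm along the beam axis
    p_common = T_common_from_transducer @ p_local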
System700includes arms715A,715B, and715C (any individual arm herein referred to as arm715, or collectively as arms715). It is to be noted that although three arms are illustrated, configurations with more or fewer arms715are possible. In addition to being able to position transducer710, arm715may support electrical cables or other pipes or lumens. The pipes or lumens may carry fluids such as, but not limited to, ultrasound coupling gel. In some embodiments the pipes or lumens are arranged to dispense ultrasound coupling gel at the interface between a transducer710and a patient. The electrical cables, pipes, or lumens may, for example, be carried in a conduit that extends along an arm715. In some embodiments, the conduit is located within arm715. The position and orientation of each arm715may be manually or robotically adjusted. In a non-limiting example, inverse kinematics may be used to determine the angle of each joint of a mechanical arm to achieve a desired position for transducer710. Arms715A,715B, and715C are shown to be coupled to arm control units720A,720B, and720C respectively (any individual arm control unit herein referred to as arm control unit720, or collectively as arm control units720). Arm control units720may contain electrical or electromechanical systems operable to control the orientations and positions of arms715. The details of such electromechanical systems are generally well known and are therefore not provided here. Control signals that control the position and orientation of arms715may originate from peripheral controls and I/O module510G and be sent from ultrasound system700to each arm control unit720. Cables that carry these control signals are illustrated by the dashed lines labeled725A,725B and725C. It will be appreciated that several cabling and electronic configurations are possible, andFIG.7Ashows a non-limiting example. In one example embodiment, MUX435shown inFIG.5may be physically placed in ultrasound system700. In another example embodiment, MUX435may be placed within an arm control unit720. Each arm control unit720may be coupled to a mechanical ground such as, but not limited to, a free-standing support structure, railing of beds, and support structures coupled to chemotherapy chairs. The use of a mechanical ground may help provide support such that the position and orientation of arms715may be controlled. Just as inFIG.2, ultrasound system700may also be coupled to an electronically controlled IV drug delivery system230. Sensors of many types may be associated with ultrasound transducers. For example, such sensors may include one or more of:
one or more pressure sensors which measure forces between a transducer and a patient;
one or more position sensors;
one or more electroencephalography (EEG) sensors to measure electrical activity of the brain;
etc.
The construction of such sensors and how they may be attached to transducers is explained in further detail with reference toFIG.8. Information collected from sensors may be sent via cables725A,725B, and725C to control and computation block510. Parameters such as, but not limited to, ultrasound parameters (gain, frequency, etc.), or control signals to control the position of arms715may be generated in response to the received sensor information. In an example embodiment, a contact angle sensor in contact with the patient's skull is coupled to a transducer710.
The sensor may report the angle of the head at the skull at the point of contact to computation block510, which allows peripheral controls and I/O module510G to generate control signals. These control signals may be sent to arm control unit720C and may comprise the commands necessary for arm control unit720C to execute the commands and move arm715C in such a way that element housing705C is oriented at the desired position and angle relative to the skull. In some embodiments, once positioned at the desired configuration, arms715may automatically reposition themselves if the patient moves. This automatic repositioning may include repositioning element housings705such that the same region of the brain may be insonated or imaged regardless of the motion of the patient. Ultrasound system700may be programmed such that if the target area or volume being insonated or imaged is different by a certain threshold then certain actions are triggered. In a non-limiting example, this threshold is triggered when a threshold proportion or amount (e.g. 1%) of the target area or volume is different from a reference target area or volume. In some example embodiments, ultrasound images are obtained as described herein periodically or continuously and all or some features of a current ultrasound image are compared to corresponding features of a previous ultrasound image. An action may be triggered if a value of a metric indicative of differences between the current and previously acquired ultrasound image crosses a threshold. The action(s) that are triggered may include, but are not limited to, stopping the imaging or treatment session, automatically trying to reposition transducer710so that the same area is addressed (within a threshold), or asking for an authorized human operator to intervene to manually reposition transducer710(e.g. by pausing the session and providing instructions to the operator through a user interface). Ultrasound system700may include the capability of adjusting each arm715independently of the other arms715. Alternatively, arms715may be automatically positioned in concert with one another, given information on the shape of the patient's head and its motion. In some embodiments, information about the patient's movements may not be limited to those provided by the sensors within element housings705. Sensors such as, but not limited to, cameras may also be placed in other locations such as the bed, ceiling, the patient, and other freestanding structures. Such sensors may be used to supply information on patient motion. Camera based position tracking systems are commercially available and may be applied to track position(s) and orientations of transducer(s)710and/or the patient's head. FIG.7Ashows inertial measurement unit (IMU) sensor730placed on the patient's head. The reading from this sensor may be sent via cable735to ultrasound system700to be processed by peripheral controls and I/O module510G. If the reading of the patient motion exceeds a threshold value, module510G may calculate new positions for arms715. Calculations regarding change in patient position may be performed continually or on a periodic basis, depending on how ultrasound system700is configured. In an example calculation, an initial position of the patient's head, is obtained and stored along with the position and orientation of a transducer710. Assuming that transducer710is initially at an appropriate location for the function it is configured to perform, any movement of the patient may be recorded. 
Thus, any change in position of transducer710relative to the patient may trigger a calculation to determine whether a current location of transducer710is still within a threshold of the appropriate target volume. If a threshold is exceeded, then control signals may be sent to arm control units720to reposition element housings705to target the desired volume. Actions other than updating the position of element housings705may also be programmed to take place. In an example embodiment, if the threshold is exceeded by a certain amount, actions such as stopping scanning, or providing warning messages may be performed by ultrasound system700. Ultrasound system700may provide certain advantages over ultrasound system210in some scenarios. For example, ultrasound system700may require fewer transducer elements for operation, as transducer elements may be dynamically positioned during treatment. Furthermore, variations in patients' anatomy, namely head size and shape, can make fabrication of an ultrasound transducer assembly220suitable for use with a range of patients difficult. Manually Positioned Transducers FIG.7Billustrates an ultrasound system750having yet another configuration. Ultrasound system750is similar to ultrasound system700and also comprises one or more transducers which may image and/or deliver ultrasound energy to facilitate treatment. In system750one or more, ultrasound transducers may be placed at appropriate positions on a patient's head manually by a person. In some embodiments, 6 DOF sensors may be coupled to transducers760A and760B. Such sensors may allow for the positions and orientations of transducers760A and760B to be tracked and communicated to ultrasound system750. Although not illustrated, patient movements may be monitored in this configuration just as described inFIG.7A. In other embodiments, a single transducer may be provided which an operator could appropriately place in one or more positions to both perform imaging and facilitate treatment. Sensors such as position or pressure sensors may be used beneficially with ultrasound system750. As an illustrative example, providing position sensors on transducer760B would allow ultrasound system750to compare the actual location of transducer760B to a desired location. This would allow for further feedback and instructions to be provided to a user to adjust its position. FIG.8shows an example construction for coupling one or more sensors to a transducer. In this example, rigid sleeve805fits tightly over transducer810. Sleeve805supports one or more sensors. Sensors815A,815B, and815C (any individual sensor herein referred to as sensor815, or collectively as sensors815) may be placed on sleeve805as pictured, or wherever else appropriate. The rigidity of sleeve805allows transducer810and sensors815to remain stationary relative to each other once sleeve805is fitted over transducer810. In this example embodiment, sleeve face820is not level with the surface of transducer810. However, the two faces may be in the same plane in other embodiments. A sensor, such as sensor815C, may be a pressure sensor, measuring the pressure with which the transducer presses against the skin of a patient. As shown, transducer810may be electrically connected to transducer cable825. Similarly, sensors815may be connected to sensor cable830. In the present example, sensor815C may output pressure data through sensor cable830to ultrasound system700or750, which may then be received by control and computation block510. 
Modules within control and computation block510may compare the received pressure data to a range of desired pressures. The data on the range of desired pressures may be stored in storage module510C. If the received pressure data falls outside the desired range, certain actions may be triggered. These actions include, but are not limited to, showing a warning through user interface515, and if transducer810is coupled to an electromechanical arm715such as one shown inFIG.7A, controls may be sent to arm control unit720to alter the position of transducer810to obtain a pressure within the desire range. Establishing a Common Frame of Reference It may be advantageous to establish a common frame of reference to describe measurements of location and orientation of the various sensors in a common coordinate system. A common frame of reference is a convenient but arbitrarily chosen coordinate system having an origin and orientation to which all images and locations can be referred. For example, in the configuration illustrated inFIG.7A, a coordinate system740may be located relative to arm control unit720A. This coordinate system may then be used as a frame of reference for all other position and orientation related measurements (the “common frame of reference”). In the configuration illustrated inFIG.7B, coordinate system790may be used to establish the common frame of reference. In both of these examples, an origin of the frame of reference is located at a mechanical grounds (745and795, respectively). The frame of reference in the configuration shown inFIG.4is represented by coordinate system460. This frame of reference is different from the ones shown inFIGS.7A and7Bin that it is not mechanically grounded. Coordinate system460can move if the patient moves his or her head. However under the assumption that ultrasound transducer assembly220and head410are not moving relative to each other, this type of frame of reference is equally valid and appropriate and results in no additional computational complexity. Some implementations provide systems and methods for establishing a common frame of reference using a position sensing system. Various types of position sensing systems may be utilized such as, but not limited to, electromagnetic (EM) based system or optical based systems. FIGS.7B and8show an example embodiment which uses an EM transmitter785to determine positions of sensors associated with ultrasound transducers. For example, EM sensors may be placed on sleeve805and reference EM transmitter785may be able to establish the transducer's position and orientation in a coordinate system defined relative to reference EM signal generator785. Thus, if multiple transducers were present (as shown inFIG.7B), and each transducer is coupled to one or more EM sensors, the position and orientation of each of the transducers may be found in relation to the frame of reference, and subsequently, in relation to each other. Knowledge of the position(s) and orientation(s) of transducers may be used optionally and beneficially with the methods of placing transducers in an appropriate location as discussed above. Obtaining Ultrasound Images Returning to example method300inFIG.3, after pre-op MRI or CT images of the head are obtained and imported into an ultrasound system in step305, real time imaging of at least a portion of the patient's head is performed in step310. It is desirable to obtain an image of parts of the patient's brain which include certain structures within the brain. 
As described previously, certain structures may be imaged through low attenuation acoustic windows using ultrasound. As such, in some embodiments, ultrasound transducers used to form images may be positioned at these low attenuation acoustic windows. Ultrasound imaging may then be performed, and structures visible in these images may be selected to serve as the ultrasound image reference region. The process of selection may be accomplished by segmentation as explained below. To illustrate how this may be performed with the configuration inFIG.4, ultrasound transducer assembly220may be positioned on patient P's head such that subsets420A and/or420B are adjacent to patient P's temples. Using the knowledge about the general anatomy of the skull, ultrasound transducer assembly220may be constructed such that when a patient wears the assembly, subsets configured to image are positioned adjacent to one or more low attenuation acoustic windows. In the configuration shown inFIG.7A, instructions may be provided by ultrasound system700to control arms715such that imaging transducer710A and/or710B are positioned adjacent to low attenuation acoustic windows. In the configuration shown inFIG.7B, a human operator may manually position transducer760A such that it is placed appropriately at one of these windows. Sensors on transducer760A may detect whether the desired location has been reached and may give feedback to the operator if adjustments are required. Reference Region Selection In step315, the image of the structure seen in the ultrasound image reference region obtained in step310may be identified in the pre-op MR or CT scan. As an example, this may involve a human operator selecting this structure in a slice of the pre-op image dataset or in a 3D model constructed from the pre-op image dataset. Again, this selection may be accomplished by segmentation. The region in both the pre-op image(s) and the ultrasound image containing the common structure will be collectively referred to as “reference regions”. In some embodiments, reference regions may comprise one or more of the following structures: the circle of Willis, ventricles, and/or the corpus callosum. Utilizing the common structure identified in step315, the images of the ultrasound scan can be registered to the pre-op MRI or CT scan in step320. Registration The registration process may use one or more features of the reference regions. As an example, registration may be performed by matching the shape of the reference region in both the ultrasound and the pre-op modality. Other characteristics may be used, such as the orientation of the reference region relative to an expected orientation, or if two or more reference regions exist, the relative orientation of the two or more reference regions etc. As an example, the circle of Willis typically has a distinctive shape that generally appears as an irregular hexagon or a rough circle in an ultrasound image. However, regardless of the shape, because the same anatomy is imaged by the two modalities, a strong correlation may exist between the images of the reference regions in the two modalities. FIG.9illustrates an example method for accomplishing the registration in step320. At step320A, a common frame of reference may be chosen by establishing a coordinate system as described above. In step320B, the structures of the ultrasound images may be located within the selected coordinate system. 
The distance of the imaged structure from the transducer may be calculated from the travel time of ultrasound echoes, and the location and orientation of the transducer are known, which allows step320B to be accomplished. Step320C involves placing the pre-op MR or CT scan in the coordinate frame by using the same reference regions present in the ultrasound and the pre-op images. FIG.10illustrates an example registration process. Coordinate system1000may be selected in step320A. Coordinate system1000may be arbitrarily chosen to be coupled to a mechanical ground, such as arm control unit720A. Ultrasound image1005is represented by dashed lines at a location and orientation relative to imaging transducer1010. This serves to illustrate the relationship between ultrasound image1005and imaging transducer1010that created the image. As shown, P1(X1,Y1,Z1,α1,β1,ϕ1) may represent the location and orientation of the origin of ultrasound image1005and may also represent the location and orientation of the center of imaging transducer1010's face of transducer elements. Variables x, y and z may indicate the coordinates within coordinate system1000while variables α, β, and ϕ may indicate the roll, pitch and yaw within coordinate system1000. Step320B may then be completed by placing ultrasound image1005within coordinate system1000, with its origin at P1(X1,Y1,Z1,α1,β1,ϕ1). The variables X1,Y1,Z1,α1,β1,ϕ1 may be known from outputs of sensors such as EM sensors of an EM position sensing system that are coupled to the transducer and the accompanying EM transmitter. AlthoughFIG.10illustrates the use of a transducer (e.g. as an element housing and the elements contained within), other configurations are not excluded. For example, these methods may be performed with ultrasound transducer assembly220as pictured inFIG.4, where imaging transducer1010may comprise several elements, or subsets of elements, together configured to form images (e.g. subset420A). A reference region, pictured by1015inFIG.10, such as the circle of Willis, may be imaged by imaging transducer1010through a low attenuation acoustic window. Step320C of method320may be performed in this example embodiment by placing the pre-op images, represented by the volume1020, within coordinate system1000. Software for registration step320C may be implemented utilizing registration module510E. FIG.11is a flow chart illustrating an example method which includes further actions that may be taken to perform step320C to place pre-op images into the reference coordinate system. At step320C1, assuming that live imaging of the patient is being performed with an ultrasound imaging system, the live ultrasound imaging may be stopped and an appropriate frame containing the image of the reference region is selected. Following this, an appropriate slice within the pre-op images that best corresponds to the image of the reference region in the selected ultrasound frame is selected. FIGS.12A and12Billustrate an example registration process using a reference region. Slices1205through the skull depict one set of slices through the image data set acquired by the pre-op modality. For reference, slices1205may represent volume1020inFIG.10. Dashed lines1210represent the boundary of an ultrasound image and may correspond to ultrasound image1005inFIG.10. It should be noted that other slices of the pre-op images and other orientations of the ultrasound plane may be obtained, and the example shown is only one possible configuration.
Reference region1215is represented in this example by an oval in the patient's brain, and may correspond to1015inFIG.10. At step320C2of method320C, an initial test frame is selected in the pre-op image(s) that closely matches the ultrasound image. An example of such a test frame is illustrated by plane1220inFIG.12B(shown in bold dotted lines). The selection of plane1220may be performed automatically or may be guided by a human. An image may be reconstructed along plane1220from the pre-op image data contained in slices1205. InFIG.12B, the image that is constructed would be in a plane substantially normal to slices1205. However, it is noted that test frames that provide for a constructed image in any number of orientations relative to slices1205may be selected. Method320C continues to step320C3where the correlation between the image produced in step320C2and the ultrasound image frame along plane1210produced in step320C1is found. In step320C4, the correlation between the selected ultrasound image and the reconstructed image along the selected slices1205is found for a range of orientation angles of the test frame and scale factors. This process may be carried out automatically by a computer, but may also be guided by a human in order to converge on a solution in a more expedient manner. After a desired number of permutations of the various transformations are computed, method320C continues to decision block320C5. If all of the correlation values are below a desired threshold, method320C continues to step320C6where the location of plane1220is modified, and steps320C2-320C5are repeated. Conversely, if the correlation values for any of the calculations are above a desired threshold, decision block320C5continues to step320C7. Step320C7attempts to find a slice with an even higher correlation value. Using the example method320C allows for a “best fit” slice to be found. A best-fit slice may be described as a slice of the pre-op images that shows the same structures as seen by the ultrasound image in the same plane and lies closest to the ultrasound image plane. For example, inFIG.12B, the best-fit slice lies along ultrasound plane1210. Having completed the registration process in method300and obtained the best-fit slice, images from the pre-op imaging modality may be located within the reference coordinate system alongside presently obtained ultrasound images. Because the coordinates of the ultrasound image's reference region are known within the coordinate system, the best-fit slice in the pre-op images may be assigned these same coordinates. The coordinates of a region identified in one modality can now be found in the other modality. This allows for a target region identified in the pre-op modality to be located in the ultrasound image and common coordinate system. In the process of alignment in step320C, operations such as scaling, rotation and transformation may be performed on the pre-op images. The need for these operations may arise due to the nature of the imaging modalities and how the images are acquired. It is also possible that these operations are done on a section by section basis (i.e. each image may be broken down into different sections and a different set of operations may be used on each section). In a non-limiting example to illustrate how scaling may be performed, the selected ultrasound frame in step320C1and the test frame from the pre-op modality in step320C2may contain a different number of pixels.
For example, 1 cm2in the ultrasound image may contain 50 pixels, while 1 cm2in an MR image may contain 60 pixels. In this example, the MR image may be downsampled such that both images have the same pixel density. In other embodiments, the ultrasound image may be upsampled or downsampled to conform to a pre-op image's pixel density. In some embodiments, ultrasound images and pre-op images may be processed. Processing may include, but is not limited to, image smoothing, speckle reduction, and edge detection. This processing may be performed optionally but beneficially prior to or during registration step320. Performing these steps to improve image quality may improve the ability to find a best-fit slice. Individual characteristics from each imaging modality may be reduced or removed such that the images from the various modalities may be better compared or utilized in algorithms, for example, to find correlations. In some embodiments, the registration step is carried out plural times using ultrasound images obtained from different low attenuation acoustic windows. Example embodiments of this invention have thus far shown ultrasound transducers or transducer elements situated at the temples to perform imaging. However, the skull is also thinner in areas such as behind the eyes and in the back of the head, resulting in lower ultrasound attenuation allowing certain brain structures to be imaged by ultrasound. This may be accomplished, for example, by using transducer elements on ultrasound transducer assembly220that are located in the back of the head in the configuration shown inFIG.4, or by positioning a transducer in the back of the head in the configuration shown inFIG.7A. By repeating the registration procedure using different ultrasound images, the accuracy of registration may be improved by selecting the ultrasound image/pre-op image pair that produces the highest correlation values.
Selection of Target Regions
At any point in method300after pre-op images are obtained in step305and before the configuration of subsets and transducers to be used is determined in step330, one or more regions may be selected by a physician in the pre-op image(s) for ultrasound energy to be delivered in step325(defined as “target regions” above). InFIG.10, one such target region is represented by a black dot1025. Through the process of registration, the location of target region1025, once it has been selected, is known within coordinate system1000. Once registration in step320has occurred, the coordinates of the target region may be known, and this may be provided to one of the various ultrasound system configurations discussed herein. It may be advantageous to select target regions prior to beginning treatment of the patient in some scenarios. For example, a physician could perform this step prior to treatment. Without the time constraints that exist while treating a patient, more careful consideration of the target region(s) could result in better outcomes.
Calculation of Delivery Subset
In step330of method300, depending on the configuration of the ultrasound system, calculations are made to either find the subset of elements or to find the position and orientation of a transducer (either may be referred to as a delivery subset) that may be utilized to deliver ultrasound energy to the target region(s). The goal in doing so is to allow ultrasound to be delivered to one or more locations at which it is desired to promote the opening of the blood-brain barrier to allow drugs to enter brain tissue.
These calculations may be used, for example, to determine where a second transducer, such as transducer1030inFIG.10, should be positioned in order to insonate the target region. In this example, if the calculations result in a position and orientation P2(X2,Y2,Z2,α2,β2,ϕ2) within reference coordinate system1000, transducer1030may be placed at P2 in order to insonate target region1025. In some embodiments, a number of factors may be taken into account in performing the calculations mentioned above. Relevant factors include, but are not limited to, distance between the target region and the transducer elements, attenuation of intervening tissue, orientation of the skull, and frequency characteristics of the transducer elements. Additionally, certain goals may be assigned. Example goals may include selecting a delivery subset that can open the blood-brain barrier to allow drugs to be delivered with the least amount of acoustic power, or in another example, in the shortest amount of time given a specific acoustic power setting. These factors may influence the calculations in different ways, and may individually interact with one another. For example, choosing a subset that is closest to the target region may not always be the optimal choice. The shape of the skull adjacent to these subsets may be such that it is at an angle to the plane containing the target region that results in significant mode conversion. As a result, sufficient energy may not be deposited at the target regions. In this example, it may be more desirable to select a subset that is farther away from the target region, but where less mode conversion will occur. In some embodiments, the distance between a target region and the delivery subset may be calculated using the pre-op images. Because the pre-op images are registered within the common frame of reference which includes the position of each element (regardless of its use for imaging or for facilitating treatment) and the location of the target region, the intervening distance may be easily obtained. In some embodiments, the attenuation of intervening tissue at a certain point on the surface of the patient's head may be calculated using the pre-op images. Analysis of the pre-op images may reveal different layers of tissue between transducer elements and the target region. By segmenting these layers either automatically or manually, each layer may be associated with attenuation parameters based on a priori data. Thus, it is possible to know the attenuation that may be experienced for different delivery subset positions. This information may be applied to influence the choice of the delivery subset and/or to set amplitude or other transmit parameters. 3D Model Generation A 3D computerized model of the patient's head may be generated by the ultrasound system within head model generation module510H. This model may be generated based on the registered pre-op images within the common frame of reference. Patient head movement during treatment helps to illustrate a useful aspect of this concept. When patient head movement occurs, the movement can be tracked by various sensors as described elsewhere herein. The location of the model within the frame of reference may be recalculated to reflect the new position within the reference coordinate system. This provides an advantage of not having to perform the registration steps320every time the position of the patient's head is changed. The head model may have various degrees of sophistication. 
For example, the 3D model may only include the outline of the skull corresponding to the outermost layer of the skin. A more sophisticated example 3D model may include the outline of the skull and the thickness of the skull. An even more sophisticated example 3D model may include the various layers of the brain, including estimations of sound velocity in the various layers of brain tissue. The use of a computerized model is beneficial because it facilitates calculations and transformations, examples of which are discussed below. FIG.13illustrates an example application of a head model in determining delivery subsets. A simple model1300may include information about the outermost layer of head410. Within this model, the location of target region1310may be known. In this non-limiting example of a calculation for the selection of a treatment subset, the distance from a region on the outermost layer of the skull to target region1310is the only factor taken into account. Distances from regions A, B and C on the surface of the head to target region1310are indicated by lines1320,1330, and1340, respectively. In this example, region B has the shortest distance and thus a subset of elements around region B may be chosen as the treatment subset. In some embodiments, the computation of which region on the surface of the head has the shortest distance to target region1310may be performed by computation of transmit subsets and transmit parameters module510A. Each said region in this example embodiment may comprise one or more transducer elements. An example embodiment of the calculation of the size of the treatment subset of transducer elements is now provided. In a simple example, the size of the delivery subset may depend on the minimum acoustic power needed to open the blood-brain barrier. This minimum acoustic power may be known a priori through experimentation or other means. Another relevant factor that may influence the size of the subset is the effect of beam propagation. Each transducer element has an angular directivity which may be dictated by a number of factors such as its size and frequency of operation. Thus, elements that are at a steep angle relative to the target region may not be chosen for inclusion in the delivery subset. An example of software steps that may be implemented by module510A to determine the treatment subset in step330of method300is described herein. First, the software may request that the human operator provide a set of goals and relevant factors, such as delivering energy to a region of the brain with a certain amount of acoustic power. These goals and factors may be presented in the form of a drop down menu, checkboxes, or radio buttons to allow the operator to choose from one or more options. The software may then generate a model of the patient's head using the pre-op images and head model generation module510H. Given the head model, and the prescribed goals, the location and orientation of ultrasound transducer elements that may be used to deliver ultrasound energy to the target region is calculated. This step may involve a process of optimization where the one or more goals and factors are parameterized, and the optimization process involves selecting a configuration yielding the highest “score”. The parameterization process may optionally and beneficially take into account user assigned weights. The final selection of the delivery subset may be made manually by the human operator or automatically by the software.
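A minimal sketch of the kind of weighted scoring just described is given below. The candidate regions, weights, and penalty terms are illustrative assumptions; in practice the distances, attenuation estimates, and incidence angles would be derived from the registered pre-op images and the head model as discussed.

    import math

    # Candidate regions on the head surface (hypothetical values): distance to
    # the target region (mm), estimated attenuation of intervening tissue (dB)
    # and the angle between the local skull normal and the beam direction (deg).
    candidates = {
        "A": {"distance_mm": 72.0, "attenuation_db": 14.0, "incidence_deg": 22.0},
        "B": {"distance_mm": 55.0, "attenuation_db": 11.0, "incidence_deg": 35.0},
        "C": {"distance_mm": 80.0, "attenuation_db": 16.0, "incidence_deg": 5.0},
    }

    # User-assigned weights for the parameterized goals (assumed values).
    weights = {"distance": 1.0, "attenuation": 1.5, "incidence": 2.0}

    def score(c):
        """Higher is better: short path, low attenuation, near-normal incidence.
        Incidence angles far from normal are penalized as a rough proxy for
        mode-conversion losses at the skull."""
        return -(weights["distance"] * c["distance_mm"] / 100.0
                 + weights["attenuation"] * c["attenuation_db"] / 20.0
                 + weights["incidence"] * math.sin(math.radians(c["incidence_deg"])))

    best_region = max(candidates, key=lambda name: score(candidates[name]))
    print("selected delivery subset around region", best_region)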
In ultrasound system210(seeFIG.4), following the selection of the delivery subset, a subset of elements in ultrasound transducer assembly220may be selected to operate in treatment mode. This may involve providing instructions to all elements within subset425A, for example, to begin transmitting ultrasound at a given transmit delay and frequency. In the configuration shown inFIG.7A, control signals may arise from peripheral controls and I/O module510G to specify the angle at which each joint of an arm715should be positioned. The final result should be that arm715's end effector (i.e. where the transducer is located) is at the calculated position. In the configuration shown inFIG.7B, a human operator may manually position transducer760B such that it is placed at the desired location with guidance from software in ultrasound system750. Sensors on transducer760B may detect whether the desired location has been reached and may give feedback to the operator if adjustments are required. Where the calculations and selection are performed automatically, in addition to all of the factors discussed above, control and computation block510may be guided by goals such as, but not limited to, selecting a subset or a transducer that can open the blood-brain barrier to allow drugs to be delivered with the least amount of acoustic power, or in the shortest amount of time given a specific acoustic power setting.
Multiple Subsets
In some embodiments two or more subsets are generated. Each subset comprises one or more transducer elements that may be excited in a coordinated manner so that the ultimate effect is to open the blood-brain barrier at region(s) where their beam patterns intersect. Further, the subsets need not be contiguous. One advantage of multiple non-contiguous subsets is that power delivered to the intervening tissue can be minimized, while delivering the required power at one or more target region(s). The calculation of which subsets are to be chosen to deliver ultrasound energy to a target region may depend on a number of factors such as, but not limited to, the number of target regions and the size of each region. If the target region is small, it is possible that a subset with contiguous elements may be selected. On the other hand, even for a small region, if it is determined that the intervening tissue may be at potential risk (perhaps due to high acoustic power being needed for a target region located far from the transducer elements), then non-contiguous subsets may be more appropriate. Where there are multiple target regions, or target regions that are large (that can subsequently be broken up into multiple smaller regions), each region may be associated with its own calculations and its own delivery subset(s). In the configurations illustrated inFIGS.7A and7B, it is noted that a subset comprising fewer than the total number of elements present in a treatment transducer may be chosen. The size of this smaller subset may be chosen in a manner similar to the methods described for ultrasound transducer assembly220inFIG.4.
Alternative Determinations of Delivery Subset
In some embodiments, the delivery subset may be pre-determined, or it can be found via reference to a look up table (LUT). As an example, gliomas are a common type of brain tumor that often develops in the brainstem.
Therefore, it may be advantageous to construct ultrasound transducer assemblies (such as one shown inFIG.4) where treatment transducer elements are localized around the back of the head to deliver ultrasound to the region of the blood-brain barrier that is closest to the brainstem. No calculations would have to be performed in this scenario to determine the subset of treatment elements to be used. This may reduce the cost of construction and maintenance of the device, as well as reduce the computational complexity of the systems used during treatment. In other embodiments, an ultrasound transducer assembly may include several elements and a determination of the subset of elements to be used may be established through reference to a LUT. For example, based on empirical analysis from a priori experimentation and analysis on a patient's pre-op image, a LUT can provide data indicating where on a patient's head is ultrasound energy most likely to be able to reach a target region. A subset/subsets may then be chosen based on this result to deliver the treatment. In configurations where a transducer is being used (i.e.FIGS.7A and7B), reference to a LUT may be used to obtain the desired positions and orientations based on relevant factors such as the ones above. In other embodiments, all available transducer elements may be configured to deliver ultrasound energy for treatment. Amplitude or other transmit parameters may be determined for each treatment based on a number of factors. These factors may include, but are not limited to, the distance from an element to the target region, the angle between the produced ultrasound beam and the target region, and the properties of intervening tissue. Where it is undesirable to insonate the target region using a certain elements, such elements may be set to transmit at an amplitude of near 0 dB. Transmit Modes In the various configurations of the systems described above, and in equivalent configurations, some transducer elements may be operable to transmit and receive whereas some other transducer elements may only have the ability to transmit.FIG.14Aillustrates an example waveform produced and received at a transducer element that is capable of operating in an imaging mode. In this mode, the element can both transmit and receive ultrasonic energy. Two graphs are illustrated in this figure—one for transmit operations and another for receive operations. In region A, the element is excited by a two-cycle pulse at a frequency of 2 MHz at amplitude P. This is followed by region B, where the element receives echo data from the skull as a result of the transmission. After a period of time, the element is excited again. This cycle is repeated as necessary to form images. FIG.14Billustrates an example waveform produced by an element that is capable of operating in a treatment mode. This element is shown to be excited by a much lower frequency, such as 0.5 MHz, and is also excited for a much longer time (8 cycles as shown in the figure). In this mode, the element does not need to receive any data and therefore does not form any images. Elements that are only capable of transmitting may be operated only in a treatment mode, while an element that is capable of both transmitting and receiving may be operated in both imaging and treatment modes. There are several advantages to configuring transducer elements such that some are capable of transmitting and receiving, while others may only transmit. 
One advantage is that the cost of implementing systems described herein may be reduced by making some elements capable of only transmitting. Here, the electronics and the processing needed for receiving and processing ultrasound echo data need not be included for these elements. It may be advantageous to place elements that operate only in treatment mode (i.e. transmitting only) adjacent to areas where the attenuation of the skull is high and where imaging will typically not be performed. On the other hand, it may be advantageous to have elements that operate only in imaging mode (i.e. transmitting and receiving) in some situations. As will be explained elsewhere herein, these elements may be used to monitor the delivery of the drug. Further, having elements that can both operate in imaging mode and treatment mode can also be advantageous in some situations. It has been stated that most humans have low attenuation acoustic windows adjacent to the temples. If the target regions were in the vicinity of these areas, the same elements that form images may also be best suited to insonate the target regions.
Contrast Agent Imaging
Contrast imaging is a technique used in ultrasound imaging to enhance signal from the tissue. In contrast imaging, micro-bubbles are injected into the circulatory system. When insonated by ultrasound energy, provided that the bubbles do not break, the bubbles vibrate and reflect back energy at harmonic frequencies. Thus in a typical case, if the energy of transmission is at a frequency of f0, the bubbles reflect back energy at f0, and at other frequencies such as 2f0. The reflections from the bubbles are quite strong compared to typical reflection from tissue interfaces. Images can thus be formed of the areas where the tissue is vascularized. For the purposes of imaging the brain and providing treatment, microbubble techniques may be modified and adapted as described below. In some embodiments, different types of microbubbles are used. In a non-limiting example where two types of microbubbles are used, one type of microbubble may be referred to as “imaging microbubbles” and the other may be referred to as “treatment microbubbles”. For greater clarity, the microbubbles used in connection with opening the blood-brain barrier are referred to as treatment microbubbles. In some embodiments, these microbubbles may be supplied to the patient through IV system230pictured in the ultrasound systems depicted inFIGS.2,7A and7B. FIG.15Ashows an example method1500illustrating the use of imaging microbubbles. Initially, in step1505, the patient is injected with imaging microbubbles. The microbubbles travel to the brain where they can be used to facilitate imaging the brain. For the sake of simplicity, it is assumed that all transducers/transducer elements in this example are capable of operating in both imaging and treatment modes. In step1510, images of the brain are obtained using ultrasound transducers with the aid of the imaging microbubbles. In this example, the transmit parameters may be chosen such that the imaging microbubbles are caused to vibrate non-linearly. As the signals from the microbubbles are typically strong, it may be possible to use higher receive frequencies than are typically used for imaging the brain. In a non-limiting example, the imaging transducer may operate at frequencies of 2 MHz transmit and 4 MHz receive. Other combinations are also possible.
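The harmonic receive scheme in the example above (2 MHz transmit, 4 MHz receive) might be approximated in software as a band-pass around the second harmonic. The sketch below uses a synthetic received line and an assumed sampling rate; it is only an illustration of the idea, not a description of the receive electronics of any particular embodiment.

    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 40e6          # sampling rate of the receive chain (assumed)
    f0 = 2e6           # transmit frequency from the example above
    t = np.arange(0, 20e-6, 1 / fs)

    # Synthetic received line: a linear echo at f0 plus a microbubble
    # component at the second harmonic 2*f0 (purely illustrative data).
    rx = np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 2 * f0 * t)

    # Band-pass around 2*f0 to retain predominantly the microbubble signal.
    b, a = butter(4, [1.5 * f0, 2.5 * f0], btype="bandpass", fs=fs)
    harmonic_line = filtfilt(b, a, rx)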
When performing imaging with the imaging microbubbles, other transmit and receive parameters such as, but not limited to, transmit power, transmit and receive apodization, receiver gain and receiver filter, may also be adjusted accordingly. These parameters, in particular the transmit parameters, may be chosen such that the imaging bubbles are stable and do not break for a sufficient period of time before imaging can be performed. Some parameters that are known to have an effect on microbubble stability are transmit power and transmit frequency. Thus for imaging the brain, a low power transmission may be used that delivers sufficient energy to the targeted regions but is not high enough to break the microbubbles during the imaging period. Once images of the brain are obtained in this manner, in step1515, the target region or regions may be selected (using the methods described for step325of method300, for example). Prior to this, however, the images obtained using the imaging microbubbles may be used in registering the pre-op image(s) to the ultrasound image (i.e. step320of method300). The use of microbubbles in ultrasound imaging compensates for the attenuation of the skull, increasing resolution and penetration depth. Therefore in some embodiments, imaging subsets or transducers are not required to be placed adjacent to low attenuation acoustic windows to form images of the brain when appropriate microbubbles are used. In some cases signals from the imaging microbubbles are strong enough that target regions may be selected based entirely on the ultrasound image. For example, it may be possible that a target region or tumor in the brain is highly vascularized. In these scenarios, it may be possible to discern these areas in the ultrasound image when imaging microbubbles are used.
Treatment Microbubbles
After the target regions are selected in step1515, method1500continues to step1520where one or more transducers, or subsets of transducer elements are selected and/or positioned for delivering ultrasound energy to a target region. The methods for accomplishing this may be substantially similar to the ones described above for step330of method300for the different ultrasound system configurations. Once the subset(s) of elements are selected, or treatment transducer(s) are appropriately positioned, in step1525, the patient is injected with treatment microbubbles. In some embodiments, the treatment microbubbles may contain the drug. In other embodiments, the drug may be injected independent of the treatment microbubbles, where the microbubbles serve to assist in the opening of the blood-brain barrier, but not to deliver drugs themselves. Now in step1530, the subset or subsets of elements that were chosen to be in the treatment mode are activated and the treatment microbubbles are caused to vibrate violently and/or break, causing the blood-brain barrier to open up to allow the passage of drugs. The treatment microbubbles and the imaging microbubbles may be different in a number of ways, and a partial list of these differences is now provided. Microbubbles can be manufactured so that they respond to different frequencies. For example, microbubbles that break at lower frequencies may be used in conjunction with treatment transducers. Certain microbubble characteristics such as, but not limited to, size and content may impact a microbubble's response. In some embodiments, imaging microbubbles may be air or liquid filled whereas treatment microbubbles may be filled with a drug.
Other differences may exist and these differences may be exploited to allow for the selection of ultrasound system parameters such that the microbubbles facilitate imaging or treatment delivery operations. Method1550inFIG.15Billustrates another variation of the concept described above where the imaging and treatment microbubbles are the same. At step1555, the patient is also injected with microbubbles. Method1550continues to step1560where images of the brain are obtained by choosing the ultrasound transducer transmit parameters such that the microbubbles facilitate imaging and do not disrupt the blood-brain barrier. These transmit parameters may include, but are not limited to, a transmit power at or below a certain threshold, transmit frequency, burst length, and pulse repetition frequency. The target regions are selected in step1565followed by the selection of subset or subsets of elements that are to be placed in treatment mode in step1570. Now in step1575, the drug and, optionally, more microbubbles are injected into the circulatory system. In step1580, transmit parameters of the delivery subset are adjusted so that the microbubbles vibrate more violently and the blood-brain barrier is opened so that the drug may be delivered.
Associate Drugs to a Transmit Sequence
In some embodiments, while a delivery subset is operating in treatment mode, the transmit parameters may specifically be selected based on the drug being delivered. As an example, through prior knowledge and/or experimentation, it may be known that drug A is delivered optimally with a 10 cycle pulse at 0.5 MHz with a pulse repetition frequency of 1 kHz for a period of 10 minutes, while drug B is delivered optimally with a 15 cycle pulse at 0.25 MHz with a pulse repetition frequency of 1.2 kHz for a period of 15 minutes. These optimal values may depend on a number of factors such as, but not limited to, results obtained in animal trials, results obtained in human trials, knowledge of drug composition, knowledge of microbubble composition, and body habitus of the patient. In a non-limiting implementation of this concept, an imaging and treatment system may include a bar code reader, a scanner, or another type of input device that reads information from a label on the container of a drug. Once the system reads this information, the system can access memory storage such as a LUT where an optimal set of transmit parameters is stored for one or more drugs. This set of parameters may include one or more parameters such as, but not limited to, frequency, transmit voltage, pulse length and pulse repetition frequency. When instructed to do so, the imaging and treatment system may access this stored information and operate ultrasound transducers in accordance with these parameters. In other embodiments, authorized medical personnel may manually enter relevant information such as, but not limited to, information about the drug and body habitus. In some embodiments, the system can be programmed such that if drug information is not entered by one of the methods described above or by any other method, then ultrasound transducers are prevented from operating in treatment mode. According to a more specific embodiment, requiring information about the drug also allows this action to be used to enable a billing function. The system may send a report or send an email about how the responsibility for reimbursement for the drug and/or treatment is to be shared amongst various parties.
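One possible shape for the look-up described above is sketched below. Refusing to enter treatment mode when the drug is unknown follows the text; the exact data structure, field names, drug identifiers and barcode-derived keys are assumptions made only for illustration, reusing the example values for drug A and drug B.

    # Example transmit-parameter look-up keyed by drug identifier (hypothetical
    # identifiers; the parameter values reuse the examples given above).
    TRANSMIT_LUT = {
        "drug_A": {"cycles": 10, "freq_hz": 0.5e6, "prf_hz": 1000.0, "duration_s": 600},
        "drug_B": {"cycles": 15, "freq_hz": 0.25e6, "prf_hz": 1200.0, "duration_s": 900},
    }

    def treatment_parameters(scanned_drug_id):
        """Return stored transmit parameters for a scanned drug label, or
        None so that the system can refuse to enter treatment mode."""
        return TRANSMIT_LUT.get(scanned_drug_id)

    params = treatment_parameters("drug_A")
    if params is None:
        raise RuntimeError("Unknown drug: treatment mode not enabled")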
Reducing or Eliminating Standing Waves
Standing waves within the cranial cavity can present a significant risk to patients. Standing waves may be created during both imaging and treatment modes. Using non-uniform pulse repetition intervals may reduce or eliminate the possibility of creation of standing waves. While this method may be advantageous for imaging operations, it may not be ideal for the treatment mode. As explained above, drugs may be most effectively delivered with a certain set of transmit parameters. This may involve exposing the target region to ultrasound for a certain amount of time. In some embodiments, to provide the necessary exposure time and to reduce or eliminate the possibility of generation of standing waves, different subsets (such as subset425A inFIG.4) of elements may be used to deliver energy. FIG.16illustrates an example embodiment of how different subsets may be chosen to insonate the same target region. This method may facilitate reducing or eliminating standing waves while delivering the required dosage of ultrasound to the target region. Target region1605is initially insonated with transmit treatment pulses by subset1610for a period of time. The transmit delays of the elements of this subset are chosen such that region1605is insonated with subset1610. The focusing of the ultrasound energy emitted by the elements in subset1610is depicted by dashed lines1620that extend from subset1610to target region1605. At a subsequent time, subset1615may be used to transmit treatment pulses for another period of time. The focusing of the ultrasound energy emitted by the elements in this subset is depicted by the solid lines1625. AlthoughFIG.16shows a region of overlap between subsets1610and1615, configurations where there is no overlap may also be utilized. Although only two subsets are illustrated in the figure, more than two subsets may be utilized in the performance of this technique. In some embodiments, software that implements the above method may request the user to input the number of treatment subsets to be used. Goals for each subset may be further specified. Factors that may be programmed include, but are not limited to, maximum allowable off-axis angle, maximum time each subset may be active, and maximum amount of subset overlap. For example, an entered set of goals may be to find subsets that are within 3° of the shortest distance to the target region (which may be specified as 0°) and with the least amount of overlap. Given these example instructions, the software may find subsets whose centers are 3° or less from the shortest distance and then determine the appropriate size of these subsets, the order in which they are active, and the transmit parameters for each subset. In some embodiments, such instructions may be entered as part of the software programming process for an ultrasound system. In other embodiments, a user may provide instructions for an ultrasound system through a user interface. Once the instructions have been provided to the ultrasound system, systems control module510F may generate and provide control signals to the selected subsets to transmit ultrasound energy to the patient. Although illustrations of this method refer to subsets of transducer elements, the same concepts may be applied to configurations utilizing transducers, such as those shown inFIGS.7A and7B. In these cases, multiple individual transducer elements capable of operating in treatment mode may be grouped together to accomplish these methods.
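A minimal sketch of an interleaved firing schedule of the kind described above, in which treatment subsets alternate and the pulse repetition interval is jittered so that no fixed period is maintained. The nominal interval, jitter fraction, and subset labels are assumptions chosen only for illustration.

    import random

    def treatment_schedule(subsets, total_time_s, nominal_pri_s=1e-3, jitter=0.2):
        """Generate an interleaved firing schedule: alternate between the
        available treatment subsets and randomize each pulse repetition
        interval so that no fixed period is maintained (assumed values)."""
        schedule, t, i = [], 0.0, 0
        while t < total_time_s:
            pri = nominal_pri_s * (1.0 + random.uniform(-jitter, jitter))
            schedule.append((t, subsets[i % len(subsets)]))
            t += pri
            i += 1
        return schedule

    # e.g. alternate between the subsets labelled 1610 and 1615 for 10 ms.
    events = treatment_schedule(["subset_1610", "subset_1615"], total_time_s=0.01)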
In other embodiments, the groups of treatment elements may be located in different transducers.

Building a Volumetric Image

In conventional ultrasound imaging systems, 3D or 4D images are typically created by a 2D transducer with the elements being in the same plane, a 1D array being "wobbled" in an elevation dimension, or a 1D rotating transducer such as those used in transesophageal imaging. Building a volumetric image may be desirable in the ultrasound systems described herein because a more representative model of the brain's structures allows for higher accuracy in performing comparisons with pre-op images during the registration process. In another example, where the resolution of structures obtained by ultrasound imaging is high enough (e.g., through the use of microbubbles), a volumetric ultrasound image may allow for a target region to be selected. However, in the configuration shown in FIG. 4, for example, the elements within ultrasound transducer assembly 220 are arranged differently compared to typical transducer arrangements. Therefore, different scanning techniques may be required to form 3D or 4D images with the systems described herein.

In some embodiments, different imaging planes may be used with the same subset of elements. In any subset of elements containing multiple elements, the elements can be arranged electronically in various ways such as, but not limited to, in an array configuration or in a concave curvilinear configuration. The orientation of the curvilinear configuration may also be chosen as desired. An example non-limiting embodiment of this concept is shown in FIG. 17. Here, an array of elements 1700 is illustrated. This array of elements may be part of the subset 420A of ultrasound transducer assembly 220 in FIG. 4, for example. As each of the elements within this array may be independently cabled and therefore controllable via an ultrasound system, various scanning planes may be achieved electronically. Two such example groups are illustrated by 1710 and 1720. Any element that does not lie completely within the dashed lines may be excluded from the group. The electronic delays calculated by the modules 510A and 510B may be such that the scanning planes of each of groups 1710 and 1720 are perpendicular to the plane of FIG. 17 (into the page), but parallel to the long side of the boundaries of these groups. Through these methods, multiple scanning planes may be generated. The images obtained from these scanning planes would interrogate different anatomical planes. Thus, by generating images along multiple different scanning planes, a volume may be scanned and a volumetric image generated.

In other embodiments, different imaging planes may be used to fill in missing volumetric information. These different imaging planes need not have the same orientation or angle or location with respect to each other. However, because an image plane's 6DOF position is known in a common reference frame, it can be placed alongside other images in the same coordinate system such that a volumetric image can be constructed. This method to construct a volume may be advantageous because data about the entire volume of the brain is often incomplete. Partial volume reconstruction as described allows for the available image data to be put towards meaningful uses. In a more specific embodiment, interpolation of RF echo data can be used in a volume construction data set to fill in missing data elements.

Synthetic Aperture Imaging

Any suitable ultrasound imaging technology may be applied.
Example ultrasound imaging technologies include beamforming technologies and synthetic aperture imaging technologies. Synthetic aperture imaging allows for the formation of an image with fewer transducer elements compared to what is needed in a fully populated aperture. The concepts of synthetic aperture imaging may be modified for the purposes of producing ultrasound images of the brain. In some embodiments, different transducer elements may be used for each transmit operation. After each transmission, the echo may be received at multiple elements, and the echo data may be digitized and stored. An ultrasound system may then process the various pulse-echo response pairs to synthesize and construct a higher resolution image than would normally be possible with the number of elements involved. Another advantage that can be gained by using synthetic aperture imaging is that, as different elements are being used for transmission, the possibility of generating standing waves is reduced or eliminated. Methods of synthetic aperture imaging described herein may be applied to all of the various ultrasound system configurations discussed.

Insonating Regions Close to the Surface of the Skull

Additional challenges are encountered when attempting to insonate target regions close to the surface of the skull. For example, it is difficult to position elements such that they are parallel to the surface of the skull at the point of contact and still direct energy to the target region. Focusing the ultrasonic energy may also be challenging due to the low frequency and short distance between the target region and the elements. In some embodiments, subsets of elements that are relatively distant from the target region may be chosen such that these subsets are parallel or nearly parallel to the skull at the point of contact. In a non-limiting example, a target region on the left side of the head near the ears may be insonated by a subset of elements from the right side of the head. Using appropriate transmit parameters, the blood-brain barrier on the left side can be made to open up to allow drugs to pass. In addition to location, transmit parameters such as, but not limited to, frequency and power may be adjusted to insonate such target regions from a distance.

In another embodiment, transducers (such as those shown in FIGS. 7A and 7B) may be positioned at a distance away from the skull. A stand-off material may be placed between said transducers and the skull such that the two components may remain coupled. Stand-off materials are typically soft and gelatinous. The stand-off material may be selected such that essentially no attenuation of ultrasound occurs through the material. In a non-limiting example, the speed of sound through this material may be 1540 m/s. Additionally, the thickness of the material may be anywhere from 1 cm to 5 cm. Through this technique, the transducers can be placed in a parallel or nearly parallel manner to the skull. Because the distance between the target region and the elements is larger, issues of focusing at short distances are minimized or removed. It should be noted that these methods of operating transducer elements close to a target region are equally applicable to transducers or subsets operating in imaging mode. Stand-off materials may also be used optionally and beneficially in configurations where there are one or more transducer elements dispersed over an assembly (such as that shown in FIG. 4).
In some embodiments, a layer of stand-off material may be placed around and in contact with a patient's head. Ultrasound transducer assembly 220, for example, may be placed over the material, with transducer elements 415 in contact with it. The properties and thicknesses of the stand-off material may be selected such that it is able to be in contact with elements 415 and does not exert a reaction force high enough to inhibit the ability to control the orientation of elements 415. This allows for standard-sized ultrasound transducer assemblies to be used across a range of patient head sizes.

Subset Specific System Parameters

In some embodiments, each subset or transducer may be operated with its own set of parameters such as, but not limited to, transmit parameters, receive parameters, number of elements, element configuration, and aperture dimensions. In a non-limiting example, a subset operating in treatment mode in the vicinity of the temporal bone may operate at a higher frequency, for example 2 MHz, compared to a subset near the top of the head that may operate at 0.5 MHz. Similarly, the active aperture near the temporal bone may include elements that are generally within a circle of radius 20 mm, whereas the aperture near the top of the head may generally be rectangular in shape with dimensions of 70 mm by 10 mm. Subset or transducer parameters may be calculated automatically or manually. In some embodiments, automatic calculations may be carried out by control and computation block 510. These calculations may take into account the location of the subset/transducer on the skull, and parameters may be calculated based on this information. Thus, parameters may be optimized for each subset or transducer depending on location and what imaging or treatment goals have been defined. Regardless of whether the subset or transducer parameters are calculated manually or automatically, transmit and receive parameter computation modules 510A and 510B may ensure that no safety limits such as acoustic or thermal limits are violated. Application of these concepts may allow for the accommodation of the local conditions of the skull, the tissue in the vicinity of the elements, and the tissue that is insonated.

Fiducials

Some patients may have objects in their head, such as screws or dental implants, that may be observable in ultrasound, MRI, and CT scans. In some embodiments, such objects may be used as fiducial markers to improve the accuracy of registration. In a non-limiting example, a dental implant may be imaged by one or more transducer elements. If the location of the transducer element(s) is known, an ultrasound transducer assembly (see ultrasound transducer assembly 220 in FIG. 4) or transducer can be located in relation to the dental implant. If the location of the same implant is known in a pre-op image, then registration of an ultrasound image to the pre-op image may be performed against the dental implant. This method offers an alternative to registering pre-op and ultrasound images that does not require transducer elements to be located specifically at low acoustic attenuation windows.

Confirmation of Dose Delivery

Contrast agents such as microbubbles may be used to facilitate the delivery of drugs and to obtain confirmation that a drug has been delivered. In some embodiments, ultrasound transducers or subsets may be interspersed with elements that are specially tuned to receive energy released from microbubbles as the blood-brain barrier opens up.
This energy may be in the sub-harmonic range or the harmonic range. In a more specific embodiment, one or more receive-only elements are tuned to detect these specific frequencies to confirm the delivery of an ultrasound dose and/or drug.

Display and User Interface

FIG. 18 illustrates a non-limiting example of a display and user interface 1800 that may be provided to an authorized person such as a doctor to interact with an example ultrasound system described above. In this configuration, the display may contain four windows, 1810, 1830, 1850, and 1870. In window 1810, the ultrasound image from a low attenuation acoustic window may be displayed. Image 1815 is obtained from a low attenuation acoustic window with the circle of Willis 1820 pictured inside it. In window 1830, pre-op image 1835 may be simultaneously displayed. Pre-op image 1835 may be a 3D rendering of images obtained from MRI scans, for example. The circle of Willis is labelled in the pre-op image as 1840, but physically it is the same structure as 1820 in ultrasound image 1815. As initial steps not shown in this illustration of user interface 1800, the user may be requested by the ultrasound system to import pre-op images of the patient's brain. The user may then be instructed to place imaging transducers at low attenuation acoustic windows to allow the ultrasound system to collect images. These steps relate to steps 305 and 310 of method 300, respectively. In other embodiments of a user interface, instruction and confirmation messages may be provided by the ultrasound system.

A suitable user interface may be provided to allow the user to register the two images. FIG. 18 illustrates workflow window 1850 with a number of user-selectable boxes 1855, 1860, and 1865. The user may select box 1855 to begin the process of registration of an ultrasound image with the pre-op image(s). After this selection, user interface 1800 may ask the user to select an ultrasound image that may be used for registration. If live imaging through a low attenuation acoustic window is being performed, the system may halt live imaging and provide the user the option of scrolling through images in window 1810 that have just recently been acquired and stored, and choosing an image from this set of images. In a conventional ultrasound imaging system, the storing function is called the "cine" function. Once an appropriate image is selected, the user may be asked to outline a structure within the image (such as the circle of Willis 1820 in this example). This process is called segmentation. Segmentation may be carried out automatically, manually, or in a combination of the two. In the present example, a doctor may provide an initial outline of the circle of Willis 1820. Ultrasound systems described herein may optionally and beneficially provide a color Doppler imaging mode to aid the segmentation process. Operating in this mode allows for blood flow in the circle of Willis to be detected and displayed, making the boundaries of the vessels that form the circle of Willis easily distinguishable. The user can therefore easily draw an outline of the circle of Willis using the colored vessels against the black and white image of the rest of the brain tissue. The ultrasound system may proceed with the next step, or alternatively use the manually drawn boundaries as a basis for segmentation algorithms programmed internally to generate a more accurate segmentation. This selection process relates to step 310 of method 300.
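The color Doppler aid to segmentation described above can be pictured with the short NumPy sketch below, in which a Doppler power map is thresholded into a vessel mask whose boundary pixels serve as an initial outline. The threshold, array sizes, and synthetic test image are illustrative assumptions, not parameters of the described system.

```python
# A minimal sketch, assuming a per-pixel Doppler power map in dB: threshold it into a
# vessel mask and extract the mask's boundary pixels as a starting outline for segmentation.
import numpy as np

def vessel_mask_from_doppler(power_db: np.ndarray, threshold_db: float = 20.0) -> np.ndarray:
    """Binary mask of pixels whose Doppler power suggests flowing blood."""
    return power_db >= threshold_db

def boundary_pixels(mask: np.ndarray) -> np.ndarray:
    """Mask pixels that touch a non-vessel neighbor; these form the initial outline."""
    interior = mask.copy()
    for axis in (0, 1):
        for shift in (1, -1):
            interior &= np.roll(mask, shift, axis=axis)   # simple 4-neighborhood erosion
    return mask & ~interior

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic 128x128 Doppler power image (dB) with a bright ring standing in for a vessel.
    yy, xx = np.mgrid[0:128, 0:128]
    ring = np.abs(np.hypot(yy - 64, xx - 64) - 30) < 3
    power = rng.normal(5, 2, (128, 128)) + 30 * ring
    outline = boundary_pixels(vessel_mask_from_doppler(power))
    print("outline pixels:", int(outline.sum()))
```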
Once segmentation is complete, user interface 1800 may request the selection of a plane that most closely matches the segmented image. Again, this may be carried out manually, automatically, or in a combination of the two. In an automatic process, the ultrasound system may search within pre-op image 1835 to find an image plane that most closely matches the segmented ultrasound image. In a combined operation, user interface 1800 may display a fused ultrasound and MRI image in window 1830, after automatically determining a plane within the MR image, and ask the user to determine whether that match is acceptable. Window 1870 may be used to display messages and accept inputs from the user. Selectable boxes such as accept box 1875 and reject box 1880 may be displayed to accept inputs from the user. The user may therefore indicate acceptance of the match between the ultrasound and pre-op image, or may direct the ultrasound system to continue to find a better match. Other user interface elements may also be provided to the user to guide the matching process. The selection of a desirable plane in the pre-op image may be used as the starting point in the registration process (this may reflect step 320C2 in method 320C, for example). As the ultrasound system performs the process of registration, it may display a status indicator in message window 1885. Message window 1885 may notify the user when registration is completed and/or provide further instructions.

Subsequently, in window 1850, user interface 1800 may ask the user to select a target region in box 1860, corresponding to step 315 in method 300. As described previously, the target region may be chosen in the MR image. In this illustration, the user may point to a region such as region 1845 and select it as the target region. It should be noted that this step may be performed at any point after a pre-op image has been obtained. Its inclusion in these steps only serves to illustrate how it may be performed alongside other steps in a common user interface. The ultrasound system may automatically select a region around the area selected by the user. This region selection may be guided by preprogrammed data entered into the ultrasound system a priori. Alternatively, user interface 1800 may allow the user to modify the boundary of the automatically selected target region, or to select it entirely manually. After the user indicates they are satisfied with the size and shape of the target region (through the use of message window 1885 and boxes 1875 and 1880 in window 1870), user interface 1800 may then ask the user to begin the treatment. This process reflects an example embodiment of step 325 of method 300 and may provide the coordinates of the target regions as output. Start treatment button 1865, which may have previously been greyed out, may become selectable and, when selected, may trigger a number of actions. After starting treatment, the ultrasound system may calculate the delivery subset as explained above in step 330 of method 300. Following this, the ultrasound system may move a treatment transducer to the appropriate location on the head in the configuration shown in FIG. 7A, or select the treatment subset(s) in the configuration shown in FIG. 4. Where a treatment transducer is controlled manually, such as in the configuration shown in FIG. 7B, the current treatment transducer coordinates may be displayed in the message window 1885 along with the desired calculated coordinates.
When the user has correctly placed the transducer at the desired coordinates, message window 1885 may display a message indicating this. Other methods of guiding the user to the desired location are possible. Once the treatment elements are selected or the treatment transducer is in the appropriate location, the ultrasound system may send signals to the IV pump and treatment transducer elements to coordinate the timing of the delivery of ultrasound with the operation of the IV pump. Depending on signals obtained from the transducer and/or other sensors, user interface 1800 may display when the blood-brain barrier has opened up, or other relevant status updates. This information may be calculated in real time or may be based on empirical analysis from a priori experimentation. A message may be displayed in message window 1885 once a drug has been delivered, indicating that treatment is completed or that the user should proceed to treat another region.

Interpretation of Terms

Unless the context clearly requires otherwise, throughout the description and the claims:
"comprise", "comprising", and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; that is to say, in the sense of "including, but not limited to";
"connected", "coupled", or any variant thereof, means any connection or coupling, either direct or indirect, between two or more elements; the coupling or connection between the elements can be physical, logical, or a combination thereof;
"herein", "above", "below", and words of similar import, when used to describe this specification, shall refer to this specification as a whole, and not to any particular portions of this specification;
"or", in reference to a list of two or more items, covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list;
the singular forms "a", "an", and "the" also include the meaning of any appropriate plural forms.

Words that indicate directions such as "vertical", "transverse", "horizontal", "upward", "downward", "forward", "backward", "inward", "outward", "left", "right", "front", "back", "top", "bottom", "below", "above", "under", and the like, used in this description and any accompanying claims (where present), depend on the specific orientation of the apparatus described and illustrated. The subject matter described herein may assume various alternative orientations. Accordingly, these directional terms are not strictly defined and should not be interpreted narrowly.

Implementations of the invention may comprise any of specifically designed hardware, configurable hardware, programmable data processors configured by the provision of software (which may optionally comprise "firmware") capable of executing on the data processors, special purpose computers or data processors that are specifically programmed, configured, or constructed to perform one or more steps in a method as explained in detail herein, and/or combinations of two or more of these. Examples of specifically designed hardware are: logic circuits, application-specific integrated circuits ("ASICs"), large scale integrated circuits ("LSIs"), very large scale integrated circuits ("VLSIs"), and the like. Examples of configurable hardware are: one or more programmable logic devices such as programmable array logic ("PALs"), programmable logic arrays ("PLAs"), and field programmable gate arrays ("FPGAs").
Examples of programmable data processors are: microprocessors, digital signal processors ("DSPs"), embedded processors, graphics processors, math co-processors, general purpose computers, server computers, cloud computers, mainframe computers, computer workstations, and the like. For example, one or more data processors in a control circuit for a system as described herein or for an ultrasound machine as described herein or for a module as described herein may implement methods as described herein by executing software instructions in a program memory accessible to the processors and/or by processing data according to logic configured in a logic circuit or configurable device such as an FPGA and/or by processing data in an ASIC or other logic circuit configured to perform the method steps described herein.

A group of modules as described herein may be implemented using separate hardware (e.g., separate processors and/or configurable logic circuits and/or hard-wired logic circuits), but two or more modules may also share some or all of a hardware platform. For example, two or more modules may be implemented by common data processor(s) and/or configurable logic circuits and/or hard-wired logic circuits configured by software instructions or otherwise to perform the functions of each of the two or more modules. Processing may be centralized or distributed. Where processing is distributed, information including software and/or data may be kept centrally or distributed. Such information may be exchanged between different functional units by way of a communications network, such as a Local Area Network (LAN), Wide Area Network (WAN), or the Internet, wired or wireless data links, electromagnetic signals, or other data communication channel.

While processes or blocks are presented in a given order, alternative examples may perform routines having steps, or employ systems having blocks, in a different order, and some processes or blocks may be deleted, moved, added, subdivided, combined, and/or modified to provide alternative or subcombinations. Each of these processes or blocks may be implemented in a variety of different ways. Also, while processes or blocks are at times shown as being performed in series, these processes or blocks may instead be performed in parallel, or may be performed at different times.

Some aspects of the invention may be provided in the form of a program product. The program product may comprise any non-transitory medium which carries a set of computer-readable instructions which, when executed by a data processor, cause the data processor to execute a method of the invention. Program products according to the invention may be in any of a wide variety of forms. The program product may comprise, for example, non-transitory media such as magnetic data storage media including floppy diskettes, hard disk drives, optical data storage media including CD ROMs, DVDs, electronic data storage media including ROMs, flash RAM, EPROMs, hardwired or preprogrammed chips (e.g., EEPROM semiconductor chips), nanotechnology memory, or the like. The computer-readable signals on the program product may optionally be compressed or encrypted. In some implementations, the invention may be implemented in software. For greater clarity, "software" includes any instructions executed on a processor, and may include (but is not limited to) firmware, resident software, microcode, and the like.
Both processing hardware and software may be centralized or distributed (or a combination thereof), in whole or in part, as known to those skilled in the art. For example, software and other modules may be accessible via local memory, via a network, via a browser or other application in a distributed computing context, or via other means suitable for the purposes described above. Where a component (e.g. a software module, processor, assembly, device, circuit, etc.) is referred to above, unless otherwise indicated, reference to that component (including a reference to a “means”) should be interpreted as including as equivalents of that component any component which performs the function of the described component (i.e., that is functionally equivalent), including components which are not structurally equivalent to the disclosed structure which performs the function in the illustrated exemplary implementations of the invention. Specific examples of systems, methods and apparatus have been described herein for purposes of illustration. These are only examples. The technology provided herein can be applied to systems other than the example systems described above. Many alterations, modifications, additions, omissions, and permutations are possible within the practice of this invention. This invention includes variations on described implementations that would be apparent to the skilled addressee, including variations obtained by: replacing features, elements and/or acts with equivalent features, elements and/or acts; mixing and matching of features, elements and/or acts from different implementations; combining features, elements and/or acts from implementations as described herein with features, elements and/or acts of other technology; and/or omitting combining features, elements and/or acts from described implementations. It is therefore intended that claims hereafter introduced are interpreted to include all such modifications, permutations, additions, omissions, and sub-combinations as may reasonably be inferred. The scope of the claims should not be limited by the preferred implementations set forth in the examples, but should be given the broadest interpretation consistent with the description as a whole.
117,322
11857813
DETAILED DESCRIPTION

The present technology is directed toward systems and methods for selectively disrupting tissue with HIFU. In several embodiments, for example, an ultrasound source can pulse HIFU waves toward a volume of tissue that includes fibrous structures of an extracellular matrix ("ECM"). The pulsed HIFU waves can lyse cells in the tissue volume while allowing the ECM to remain at least substantially intact. In certain embodiments, the HIFU treatment can be used to decellularize a tissue mass to form a scaffold that can later be used for regenerative medicine and/or other applications.

The term "ECM" is used herein to describe the non-cellular fibrous and lattice structure of tissue composed of proteins, polysaccharides, and other molecules. For example, ECM can include the walls of blood and lymphatic vessels, dermis, fascia, neural sheaths, portal and biliary structures in livers, the Bowman's capsule, glomerular membranes, ghosts of tubules, collecting ducts in kidneys, and other non-cellular tissue structures. Additionally, the term "target site" is used broadly throughout the disclosure to refer to any volume or region of tissue that may benefit from HIFU treatment.

Certain specific details are set forth in the following description and in FIGS. 1-5 to provide a thorough understanding of various embodiments of the technology. For example, several embodiments of HIFU treatments that destroy tissue are described in detail below. The present technology, however, may be used to destroy multi-cellular structures other than tissue. Other details describing well-known structures and systems often associated with ultrasound systems and associated devices have not been set forth in the following disclosure to avoid unnecessarily obscuring the description of the various embodiments of the technology. A person of ordinary skill in the art will accordingly understand that the technology may have other embodiments with additional elements, or the technology may have other embodiments without several of the features shown and described below with reference to FIGS. 1-5.

FIG. 1 is a partially schematic view of a HIFU system 100 configured in accordance with an embodiment of the present technology. The HIFU system 100 can include an ultrasound source 102 operably coupled to a function generator 104 and, optionally, an amplifier 106. The ultrasound source 102 can be an ultrasound transducer that emits high levels of ultrasound energy toward a focus 120. The focus 120 can be a point, region, or volume at which the intensity from the ultrasound source 102 is the highest. For example, the ultrasound source 102 generally has a focal depth equal to the diameter of the ultrasound transducer. The function generator 104 (e.g., an Agilent 33250A function generator from Agilent of Palo Alto, CA) and the amplifier 106 (e.g., an ENI A-300 300 W RF amplifier from ENI of Rochester, NY) can drive the ultrasound source 102 to radiate HIFU waves that induce boiling bubbles or cavitation proximate to the focus 120 to mechanically damage the tissue. Accordingly, the HIFU system 100 can implement a pulsing protocol in which ultrasound frequency, pulse repetition frequency, pulse length, duty cycle, pressure amplitude, shock wave amplitude, and/or other parameters associated with the HIFU emissions can be adjusted to generate HIFU waves to mechanically disrupt tissue.
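As an illustration of how the adjustable parameters just listed might be grouped and cross-checked, the sketch below defines a simple pulsing-protocol record with a derived duty cycle. The class and property names are assumptions made for illustration; the example numbers echo the boiling-histotripsy values described later in this description (10 ms pulses, a 1 Hz pulse repetition frequency, 30 pulses per site, and a roughly 80 MPa shock amplitude).

```python
# A minimal sketch, assuming these hypothetical field names, of a pulsing-protocol record
# whose duty cycle and per-site treatment time follow from the pulse length and PRF.
from dataclasses import dataclass

@dataclass
class PulsingProtocol:
    frequency_mhz: float        # ultrasound frequency
    pulse_length_ms: float      # duration of each HIFU burst
    prf_hz: float               # pulse repetition frequency
    pulses_per_site: int        # number of pulses delivered at each treatment site
    shock_amplitude_mpa: float  # shock amplitude at the focus

    @property
    def duty_cycle(self) -> float:
        """Fraction of time the source is transmitting."""
        return (self.pulse_length_ms / 1000.0) * self.prf_hz

    @property
    def time_per_site_s(self) -> float:
        """Seconds spent at one treatment site before the focus is moved."""
        return self.pulses_per_site / self.prf_hz

if __name__ == "__main__":
    protocol = PulsingProtocol(frequency_mhz=1.1, pulse_length_ms=10.0, prf_hz=1.0,
                               pulses_per_site=30, shock_amplitude_mpa=80.0)
    print(f"duty cycle: {protocol.duty_cycle:.1%}, time per site: {protocol.time_per_site_s:.0f} s")
```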
As described in further detail below, the HIFU system 100 can also selectively disrupt the tissue in the treatment volume to emulsify or lyse cells, while preserving the ECM for subsequent cell regrowth. In various embodiments, the ultrasound source 102 can include a single-element device, a multi-element device, an extracorporeal device, an intracavitary device, and/or other devices or systems configured to emit HIFU energy toward a focus. For example, the ultrasound source 102 can be part of a Sonalleve MR-HIFU system made by Philips Healthcare of The Netherlands and/or a PZ 26 spherically focused piezoceramic crystal transducer made by Ferroperm Piezoceramics of Kvistgaard, Denmark. In certain embodiments, the ultrasound source 102 can have a frequency of approximately 0.5-20 MHz. For example, the ultrasound source 102 can have a frequency of about 1-3 MHz (e.g., 1.1 MHz, 1.2 MHz, 2 MHz, 2.1 MHz, etc.). In other embodiments, however, the frequency of the ultrasound source 102 can be higher than 20 MHz or lower than 0.5 MHz. In further embodiments, the source 102 can have different frequencies, aperture dimensions, and/or focal lengths to accommodate other therapeutic and diagnostic applications.

As shown in FIG. 1, the ultrasound source 102, the function generator 104, and/or other components of the HIFU system 100 can be coupled to a processor or controller 108 (shown schematically) that can be used to control the function and movement of various features of the HIFU system 100. In certain embodiments, the function generator 104 and the controller 108 can be integrated into a single device. The controller 108 can be a processing device, such as a central processing unit (CPU) or computer. The controller 108 can include or be part of a device that includes a hardware controller that interprets the signals received from input devices (e.g., the ultrasound source 102, the function generator 104, user input devices, etc.) and communicates the information to the features of the HIFU system 100 using a communication protocol. The controller 108 may be a single processing unit or multiple processing units in a device or distributed across multiple devices. The controller 108 may communicate with the hardware controller for devices, such as for a display that displays graphics and/or text (e.g., LCD display screens—not shown). The controller 108 can also be in communication with a memory that includes one or more hardware devices for volatile and non-volatile storage, and may include both read-only and writable memory. For example, a memory may comprise random access memory (RAM), read-only memory (ROM), writable non-volatile memory, such as flash memory, hard drives, floppy disks, CDs, DVDs, magnetic storage devices, tape drives, device buffers, and so forth. A memory is not a propagating electrical signal divorced from underlying hardware, and is thus non-transitory. In certain embodiments, the controller 108 can also be coupled to a communication device capable of communicating wirelessly or wire-based with a network node. The communication device may communicate with another device or a server through a network using, for example, TCP/IP protocols. The controller 108 can execute automated control algorithms to initiate, terminate, and/or adjust operation of one or more features of the HIFU system 100 and/or receive control instructions from a user. The controller 108 can further be configured to provide feedback to a user based on the data received from the HIFU system 100 via an evaluation/feedback algorithm.
This information can be provided to the users via a display (e.g., a monitor on a computer, tablet computer, or smart phone; not shown) communicatively coupled to the controller. In various embodiments, the HIFU system 100 can further include a positioning device 110 coupled to the ultrasound source 102 to aid in positioning the focus 120 of the ultrasound source 102 at a desired target site in the tissue. For example, the positioning device 110 can include a three-axis computer-controlled positioning system made by Velmex Inc. of Bloomfield, NY. The positioning device 110 can also manipulate the ultrasound source 102 to move the focus 120 to different regions in the tissue to mechanically damage larger portions of the tissue 112. In other embodiments, the HIFU system 100 can include additional devices and/or some of the devices may be omitted from the HIFU system 100.

In operation, the ultrasound source 102 is positioned proximate to a volume of tissue 112 (e.g., an organ), and the focus 120 of the ultrasound source 102 is aligned with a target site within the tissue 112 using the positioning device 110. For example, the ultrasound source 102 can be positioned such that its focus 120 is at a depth within an ex vivo or in vivo organ (e.g., a liver, kidney, heart, and/or other tissue mass) and aligned with a tumor, cancerous tissue region, and/or other volume of tissue that a clinician would like to mechanically damage. HIFU energy can be delivered from the ultrasound source 102 to the target site in the tissue 112 in a sequence of pulses (e.g., coordinated by the function generator 104 and/or the controller 108) rather than continuous-wave HIFU exposures, which can reduce undesirable thermal effects on the surrounding tissue. Larger target sites can be treated by scanning the focus 120 of the ultrasound source 102 over the treatment region (e.g., using the positioning device 110) while pulsing HIFU energy toward the tissue 112.

In various embodiments, the HIFU system 100 can deliver a pulsing protocol to provide boiling histotripsy that mechanically fractionates the tissue. During boiling histotripsy, the ultrasound source 102 propagates millisecond-long bursts of non-linear HIFU waves toward the focal region 120 in the tissue 112, and the accumulation of the harmonic frequencies produces shock fronts proximate to the focal region 120. This results in rapid heating of the tissue and boiling bubbles at the focal region 120 that liquefy and otherwise mechanically damage the tissue. In certain embodiments, the function generator 104 can initiate a pulsing protocol to generate shock waves with peak amplitudes of approximately 30-150 MPa at the focus 120. For example, the shock wave amplitudes may be 35 MPa, 40 MPa, 45 MPa, 50 MPa, 55 MPa, 60 MPa, 65 MPa, 70 MPa, 75 MPa, 80 MPa, 85 MPa, 90 MPa, 95 MPa, 100 MPa, 105 MPa, 110 MPa, 115 MPa, 120 MPa, 125 MPa, 130 MPa, 135 MPa, 140 MPa, 145 MPa, 150 MPa, and/or values therebetween. In other embodiments, the shock wave amplitudes may differ depending, at least in part, on the power driving the ultrasound source 102.

FIGS. 2A and 2B, for example, are graphs illustrating focal pressure waveforms produced using the HIFU system 100 of FIG. 1. In FIG. 2A, the ultrasound source has a power of 250 W and, as shown in the graph, produces HIFU waves having a peak positive pressure of about 79 MPa, a peak negative pressure of about −11.8 MPa, and a shock amplitude of about 80 MPa.
In FIG. 2B, the ultrasound source has a power of 600 W and, as shown in the graph, produces HIFU waves having a peak positive pressure of 96 MPa, a peak negative pressure of about −17.3 MPa, and a shock amplitude of about 110 MPa. In other embodiments, the peak positive pressure, the peak negative pressure, and/or the shock amplitude of the HIFU waves may differ depending upon the power of the ultrasound source, the frequency of the ultrasound source, and/or the parameters of the pulsing protocol effectuated by the function generator 104 (FIG. 1). For example, the peak positive pressure can be about 30-125 MPa (e.g., 40 MPa, 50 MPa, 60 MPa, 70 MPa, 80 MPa, 90 MPa, and/or pressure values therein), and the peak negative pressure may be about −30 MPa to −3 MPa (e.g., −20 MPa, −15 MPa, −10 MPa, −5 MPa, and/or pressure values therein). The ultrasound source can have power levels between about 100 W and 5 kW (e.g., 200 W, 300 W, 400 W, 500 W, 600 W, 700 W, 800 W, 900 W, 1 kW, 2 kW, 3 kW, 4 kW, 5 kW, and/or power values therein), or higher.

Referring back to FIG. 1, absorption of ultrasonic energy occurs primarily at the shock front (i.e., at the shock amplitude shown in FIGS. 2A and 2B), and induces rapid heating of the tissue 112 that can boil the tissue 112 within milliseconds. Depending upon the power driving the HIFU system 100 and the acoustic parameters of the tissue 112, the time-to-boil is generally less than 100 ms (e.g., 0.1 ms, 0.5 ms, 1 ms, 5 ms, 10 ms, 15 ms, 30 ms, 40 ms, 50 ms, etc.). For example, the HIFU system 100 can be configured such that the duration of each pulse is at least equivalent to the time necessary to induce tissue boiling at approximately 100° C. Therefore, during each pulse, one or more boiling bubbles can be formed in the tissue 112. In several embodiments, the boiling bubbles can have cross-sectional dimensions of approximately 2-4 mm. In other embodiments, however, the boiling bubbles can be larger or smaller. For example, the boiling bubbles in the tissue 112 can have a cross-sectional dimension between approximately 100 μm and approximately 4 mm (e.g., 0.2 mm, 0.5 mm, 1 mm, 2 mm, 3 mm, etc.), on the order of the beam width of the ultrasound source 102 at the focus 120.

The superheated vapor of the boiling bubbles provides a force pushing outward from the bubble. This repetitive explosive boiling activity and interaction of the ultrasound shock waves with the boiling bubbles emulsifies the tissue 112 at the target site to form a liquid-filled lesion largely devoid of cellular structure, with little to no thermal coagulation within the treated region. For example, in certain embodiments, the HIFU system 100 can deliver a pulsing protocol in which each pulse has a length of 1-15 ms (e.g., 1 ms, 5 ms, 10 ms, etc.), and at least 5 pulses of HIFU energy are delivered at each treatment site at a frequency of 1-10 Hz (1-10% duty cycle) to adequately destroy the desired cellular tissue at the focal region 120. In other embodiments, the number of pulses and the pulse length may differ based on the operating parameters of the ultrasound source 102, the tissue properties, and/or the desired properties of the lesion. For example, the number of pulses per treatment site can range from 1 pulse to more than 100 pulses (e.g., 2 pulses, 5 pulses, 15 pulses, 40 pulses, etc.), the pulse length can be 0.1-100 ms (e.g., 2 ms, 5 ms, 10 ms, 30 ms, 50 ms, etc.), and the frequency or duty cycle of ultrasound application can be 1-20% (e.g., 2%, 3%, 4%, 5%, 6%, 10%, etc.).
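A small worked example of the relationship between pulse length, duty cycle, and pulse repetition frequency implied by the ranges above is given below. The helper names are illustrative assumptions; the range checks simply restate the example ranges given in the text.

```python
# A minimal sketch relating pulse length, duty cycle, and PRF for the example ranges above.
def prf_for_duty_cycle(pulse_length_ms: float, duty_cycle: float) -> float:
    """PRF (Hz) that yields the requested duty cycle for a given pulse length."""
    return duty_cycle / (pulse_length_ms / 1000.0)

def within_example_ranges(pulse_length_ms: float, duty_cycle: float, pulses_per_site: int) -> bool:
    """Check a candidate protocol against the example ranges described above."""
    return (0.1 <= pulse_length_ms <= 100.0
            and 0.01 <= duty_cycle <= 0.20
            and 1 <= pulses_per_site <= 100)

if __name__ == "__main__":
    # 10 ms pulses at a 1% duty cycle correspond to a 1 Hz PRF, i.e. one pulse per second.
    print(prf_for_duty_cycle(10.0, 0.01))            # -> 1.0
    print(within_example_ranges(10.0, 0.01, 30))     # -> True
```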
In selected embodiments, the pulsing protocol of the HIFU system 100 can be adjusted to minimize the deposition of the HIFU energy in the tissue 112, and thereby reduce the thermal effects (e.g., thermal coagulation, necrotized tissue) of the HIFU treatment. For example, shock waves can be repeated at a pulse repetition frequency that is slow enough (e.g., approximately 1 Hz or a 1% duty cycle) to allow cooling between the pulses, such that the lesion content within the target site and the surrounding tissue shows minimal to no evidence of thermal denature. In certain embodiments, a duty cycle of less than 10% also allows cooling between pulses that reduces thermal denature. For example, the pulsing protocol can have a duty cycle of 5% or less (e.g., 4%, 2%, 1%, etc.).

In other embodiments, the HIFU system 100 can implement a pulsing protocol that provides cavitation-based histotripsy to mechanically fractionate the tissue 112 at the focus 120. During cavitation histotripsy, the ultrasound source 102 operates at a relatively low duty cycle (e.g., 1%, 2%, 3%, etc.) to emit microsecond-long pulses of HIFU energy (e.g., 10-20 μs) with high pulse average intensities of 50 W/cm2 to 40 kW/cm2 that form cavitation bubbles that mechanically disrupt tissue. In this embodiment, the pulses of HIFU waves generated by the HIFU source 102 have high peak negative pressures, rather than the high peak positive pressures used for boiling histotripsy. The peak negative pressures are significantly greater in magnitude than the tensile strength of the tissue 112 so as to induce cavitation in the tissue 112. For example, the pulsing protocol for cavitation histotripsy can include pulse lengths of 1 μs or longer (e.g., 2-50 μs) and peak negative pressures of about −15 MPa or lower (e.g., −20 MPa, −30 MPa, −50 MPa, etc.). The repetition of such pulses can increase the area of tissue affected by cavitation to create a "cavitation cloud" that emulsifies the tissue.

The degree of mechanical tissue damage induced by histotripsy, whether boiling or cavitation, depends at least in part on the composition of the tissue. In general, more fibrous structures, such as vasculature and stromal tissue, are more resistant to the HIFU-induced mechanical tissue disruption, whereas cells are more easily lysed. As a result, vessels, ducts, collagenous structures, and other portions of the ECM of the tissue 112 within and surrounding a treatment volume remain at least substantially intact after lesion formation. In addition, the HIFU therapy provided by the HIFU system 100 can be configured to limit the degree of thermal effect on the ECM. For example, the HIFU therapy can be controlled to reduce or minimize the degree of protein denature of the tissue (e.g., less than 20%, 10%, 5%, 4%, 3%, etc.) during lesion formation. Accordingly, histotripsy can be used to decellularize large tissue volumes while sparing the integrity of the ECM.

FIG. 3A, for example, illustrates a series of images of rinsed lesions 300 (identified individually as first through fifth lesions 300a-300e, respectively) formed in tissue using boiling histotripsy techniques in which the duty cycle and ultrasound source power were varied in accordance with the present technology. The lesions 300 can be formed using the HIFU system 100 of FIG. 1 and/or other suitable HIFU systems. As shown in FIG. 3A, the first through fourth lesions 300a-300d were formed with an ultrasound source (e.g., the ultrasound source 102 of FIG. 1) set to a power of 250 W with duty cycles ranging from 1% to 10%.
The pulsing protocol for the first through fourth lesions 300a-300d had a pulse length of 10 ms, a pulse repetition of 30 pulses per treatment site, a peak positive pressure of 78 MPa, a shock amplitude of 80 MPa, and a peak negative pressure of −12 MPa. The fifth lesion 300e was formed with an ultrasound source (e.g., the ultrasound source 102 of FIG. 1) set to a power of 600 W and driven by a pulsing protocol having a 4% duty cycle, 1 ms-long pulses, 30 pulses per treatment site, a peak positive pressure of 100 MPa, a shock amplitude of 110 MPa, and a peak negative pressure of −17 MPa. In other embodiments, lesions can be formed using ultrasound sources having higher or lower power values and/or using different pulsing protocol parameters.

In FIG. 3A, the ECM (e.g., vessels and connective tissue) is indicated by the light-colored structures extending through the lesions 300. In each of the five lesions 300, at least a portion of the ECM is not liquefied by the HIFU therapy. However, the images indicate that smaller vessels and connective tissue of the ECM were less affected by the HIFU therapy when the duty cycle was lower. For example, referring to the lesions 300 formed by the 250 W ultrasound source (lesions 300a-300d), the integrity of the vessels and other connective tissue of the ECM is more clearly seen and preserved in the first and second lesions 300a and 300b with duty cycles of 1% and 3%, respectively, than in the third and fourth lesions 300c and 300d with duty cycles of 5% and 10%, respectively, which illustrate structural damage to vasculature and connective tissue. The fifth lesion 300e, which was created using the 600 W ultrasound source and a pulsing protocol having a 4% duty cycle, has a clearly defined region of liquefied tissue, while still at least substantially preserving the structural integrity of the ECM. For example, small-caliber blood vessels (e.g., vessels having diameters of less than 50 μm) remained intact in the fifth lesion 300e. Accordingly, it is expected that pulsing protocols with lower duty cycles, shorter pulse lengths, and higher peak positive pressures will facilitate the preservation of the ECM during boiling histotripsy.

FIG. 3B is a series of histomicrographs of sections of the tissue lesions 300 of FIG. 3A stained with NADH-d. The NADH-d staining illustrates the degree of thermal damage incurred by the tissue during the boiling histotripsy treatments, with lighter sections (e.g., white) being indicative of thermal damage. Accordingly, FIG. 3B shows that the tissue of the first lesion 300a created using a 250 W source and a 1% duty cycle incurred some thermal damage, but much less thermal damage than the second through fourth lesions 300b-300d, which incurred increasingly more thermal damage as the duty cycle increased. As further shown in FIG. 3B, the tissue of the fifth lesion 300e created using the 600 W ultrasound source incurred very little to no thermal damage from the HIFU therapy. Therefore, FIG. 3B further shows that a higher-power source (that emits a higher peak positive pressure) operated at a relatively low duty cycle (i.e., 4%) with short pulse lengths (1 ms) incurred the lowest degree of thermal damage. It is expected that a decreased degree of thermal damage indicates that the ECM has not been damaged by the HIFU therapy.
Accordingly, it is further expected that pulsing protocols having low duty cycles (e.g., less than 5%, 4%, 3%, 2%, 1%, etc.), short pulse lengths (e.g., 1 ms, 2 ms, 3 ms, 5 ms, 10 ms, etc.), and higher peak positive pressures (e.g., 70 MPa, 80 MPa, 90 MPa, 100 MPa, 125 MPa, etc.) will create lesions with limited thermal damage and increased overall preservation of the ECM within the treatment volume. The parameters of the pulsing protocol can accordingly be selected based on acceptable levels of ECM degradation and thermal damage.

Referring back to FIG. 1, the HIFU system 100 can be used to selectively disrupt tissue to lyse cells, while limiting damage to the ECM within a treated tissue volume. Therefore, HIFU therapy can be used to treat volumes of tissue that include vasculature and/or other portions of an ECM that are desired to be preserved after HIFU therapy. For example, HIFU therapy can be used in vivo to emulsify malignant or benign tumors in a volume of tissue (e.g., the prostate, kidneys, liver, and/or other body parts) that includes vessels or other structural features. This is expected to allow clinicians to treat larger tissue volumes using HIFU therapy with less concern for the vessels or other desired ECM structures that may lie therein, because the ECM remains at least substantially intact even though it is exposed to the focus of the HIFU therapy beam. For example, HIFU therapy can be used to create single lesions of 1-4 cm3 or larger (e.g., 5 cm3, 6 cm3, 7 cm3, etc.).

Depending on the composition of the treated tissue, the ECM remaining in the lesion after HIFU therapy can include a fibrous, vascularized structure that can serve as a scaffold on which cells can grow. For example, when the HIFU is used to form a lesion in vivo, the HIFU therapy can at least substantially decellularize the treatment volume, leaving only the ECM scaffolding. The body may naturally repopulate the ECM scaffolding with healthy cells to regrow tissue in the region where the diseased cells were previously lysed by the HIFU therapy. In certain embodiments, the tissue regrowth can be supplemented by disposing or injecting cells (e.g., stem cells or cells of the same type of tissue) with or without a carrier or other delivery mechanism (e.g., a gel) on the ECM scaffold to stimulate cell regrowth and regenerate the previously-destroyed tissue mass.

In various embodiments, HIFU therapy can be used to at least partially decellularize entire organs or other tissue masses that include an ECM to create a scaffold or structure for regenerative medicine. Because the ECM naturally serves as the structural framework for tissue systems, the use of histotripsy to strip away cells results in a naturally-derived, pre-vascularized three-dimensional support structure for cell regrowth. For example, the HIFU system 100 of FIG. 1 can apply HIFU therapy across an ex vivo organ. The function generator 104 and/or the controller 108 can implement a pulsing protocol via the HIFU source 102 to decellularize tissue at the focal region 120, and the focal region 120 can be moved or scanned across different portions of the organ (e.g., using the positioning device 110) while the pulsing protocol is implemented to at least substantially decellularize the organ. As discussed above, the ECM of the treated tissue mass will remain at least substantially intact, and therefore the decellularized tissue can serve as a decellularized scaffold that includes the same vasculature, stromal tissue, and/or other structures of the ECM as the organ would in vivo.
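The scanning of the focal region across an organ described above can be sketched as a simple site-by-site loop, as below. The grid spacing, the positioning-stage and pulse-firing interfaces, and the timing are illustrative assumptions rather than the control scheme of the described system; a real controller would also enforce acoustic and thermal safety limits.

```python
# A minimal sketch: lay focal sites on a grid covering a tissue volume and apply the
# pulsing protocol at each site via a three-axis positioner. All interfaces are hypothetical.
import itertools
import time

def focal_sites(dims_mm=(60.0, 40.0, 40.0), spacing_mm=3.0):
    """Yield (x, y, z) focal-site coordinates covering a rectangular tissue volume."""
    axes = [[i * spacing_mm for i in range(int(d // spacing_mm) + 1)] for d in dims_mm]
    yield from itertools.product(*axes)

def treat_volume(move_focus, fire_pulse, pulses_per_site=30, prf_hz=1.0):
    """Move the focus site-by-site and deliver the pulsing protocol at each site."""
    for site in focal_sites():
        move_focus(site)                 # e.g., command a three-axis positioning stage
        for _ in range(pulses_per_site):
            fire_pulse()                 # one millisecond-scale HIFU burst
            time.sleep(1.0 / prf_hz)     # wait out the pulse repetition interval

if __name__ == "__main__":
    sites = list(focal_sites(dims_mm=(10.0, 10.0, 10.0), spacing_mm=5.0))
    print(f"{len(sites)} focal sites for a 10 mm cube at 5 mm spacing")  # 27 sites
```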
Unlike biomimetic scaffolds, which are artificially created to mimic the vasculature and structure of an organ or tissue mass and are difficult to form due to the complexities of mimicking the ECM, the decellularized scaffolds formed using the HIFU methods disclosed herein are pre-vascularized and inherently include the necessary structure for cell growth. Thus, using HIFU methods to form decellularized scaffolds is expected to facilitate the formation of decellularized scaffolds and enhance the ease of cell incorporation because the scaffolds are inherently more similar to natural body structures than artificial scaffolds. In addition, these large structures (e.g., entire organs) can easily be perfused to support the tissue engineered organ as it forms and grows, and can be connected to the native vasculature of the patient upon implantation.

In various embodiments, organs or tissue masses can be regenerated by disposing stem cells and/or other cells on the decellularized scaffold (i.e., as defined by the ECM structure) to regrow or regenerate the organ or tissue ex vivo. The regenerated organ or tissue mass can then be implanted into the body of a human patient during a transplant procedure. In other embodiments, the decellularized scaffold can be implanted in the body, and the body itself can form cells on the scaffold to regenerate the tissue mass or organ. In certain embodiments, the growth of cells on the implanted scaffold can be facilitated by disposing or injecting cells (e.g., stem cells) on the implanted scaffold. Due to the bare (i.e., cell-free or substantially cell-free) composition of the decellularized scaffold, the decellularized scaffold (as defined by the ECM) is expected to induce a relatively weak immune response from the host when implanted in the body.

Current methods of decellularizing ex vivo organs and other tissue masses require perfusing a chemical and/or enzymatic detergent through the organ or tissue. Perfusion decellularization, as it is known in the art, generally requires that the organ or tissue be perfused for multiple days, if not more, and can result in alterations or damage to tissues and fibers due, at least in part, to the extended exposure to the chemicals and enzymes. In contrast, the disclosed histotripsy methods can decellularize a tissue mass or an entire organ in significantly less time. For example, the lesions shown in FIGS. 3A and 3B were formed within 20 minutes. In practice, an organ that may take several days to decellularize via perfusion may only take hours to decellularize via histotripsy. In certain embodiments, multiple ultrasound sources 102 can be projected toward different portions of the organ or tissue mass at the same time to decrease the total decellularization time. Accordingly, it is believed that HIFU decellularization of a tissue mass using the techniques described herein is substantially faster (e.g., about 40 times faster) than perfusion decellularization. In various embodiments, HIFU decellularization can be used in conjunction with or to supplement perfusion decellularization. For example, a tissue mass can undergo HIFU treatment to crudely decellularize the tissue, and then the tissue mass can be perfused using chemical or enzymatic agents to remove any remaining cells. In other embodiments, perfusion decellularization and HIFU decellularization can occur simultaneously to expedite the decellularization process.
In any of these combined decellularization methods, the total time to decellularize a tissue mass is substantially reduced from the time it would take to decellularize the tissue mass using perfusion alone.

FIG. 4 is a block diagram illustrating a method 400 configured in accordance with an embodiment of the present technology. The method 400 can be implemented with the HIFU system 100 of FIG. 1 and/or other suitable HIFU systems. The method 400 includes positioning a focus of an ultrasound source in a tissue volume including an ECM, and pulsing HIFU waves from the ultrasound source toward the volume of tissue (block 402). The ECM of the tissue volume depends on the type of tissue or organ being treated, and may include vasculature, stromal tissue, collecting ducts, tubules, glomeruli, portal structures, and/or other fibrous, non-cellular structures. As discussed above, the focus of the ultrasound source can be mechanically or manually aligned with the target site in the tissue. The pulsed HIFU waves can be provided in accordance with a predefined pulsing protocol for boiling histotripsy and/or cavitation histotripsy that induces the selective mechanical disruption of tissue. As discussed above, pulsing protocols can include a variety of different factors that can induce millisecond boiling with little to no thermal denature around and in the lesion. For example, a pulsing protocol can take into account the frequency of an ultrasound source, the power of the ultrasound source, peak positive pressure at the focus of the ultrasound source, peak negative pressure at the focus, shock amplitude, pulse length, pulse repetition frequency, and duty cycle. In certain embodiments, for example, the pulsing protocol can have a peak positive pressure of 30-125 MPa (e.g., 40 MPa, 50 MPa, 60 MPa, 70 MPa, 75 MPa, 80 MPa, 90 MPa, etc.), a pulse length of 100 ms or less (e.g., 1 μs, 0.1 ms, 1 ms, 10 ms, 20 ms, etc.), and a duty cycle of 5% or less (e.g., 4%, 3%, 2%, 1%, etc.). In other embodiments, the values of the pulsing protocol parameters can differ and/or the pulsing protocol can include additional factors related to tissue fragmentation using histotripsy.

As the HIFU waves are pulsed into the tissue, the HIFU energy can generate shock waves in the tissue proximate to the focus of the ultrasound source to induce boiling in the volume of tissue (block 404). The energy from the shock waves can cause boiling bubbles in the tissue within milliseconds. By way of specific examples, shock waves with amplitudes of about 70-80 MPa delivered by an ultrasound source with a power of 250 W can induce boiling bubbles within 10 ms, and shock waves with amplitudes of about 100-110 MPa delivered by an ultrasound source with a power of 600 W can induce boiling bubbles within 1 ms. This rapid millisecond boiling followed by the interaction of shock fronts from the rest of the pulse with the boiling vapor cavity lyses cells without affecting more fibrous structures of the ECM. Accordingly, the method 400 continues by lysing cells in the volume of tissue, while leaving the ECM at least substantially intact (block 406). In various embodiments, the duty cycle, the pulse length, and/or other parameters of the pulsing protocol can be selected to reduce or minimize the degree of damage to the ECM and/or thermal damage to the tissue in and surrounding the lesion. For example, the pulsing protocol can have a duty cycle of 5% or less (e.g., 4%, 3%, 2%, 1%, etc.)
and a pulse length of 10 ms or less (e.g., 9 ms, 8 ms, 7 ms, 6 ms, 5 ms, 4 ms, 3 ms, 2 ms, 1 ms, etc.). Because the HIFU method400preserves the ECM, the method400can be used to treat larger tissue volumes and masses, without concern for damaging the ECM that lies therein. For example, the method400can be used to treat a volume of tissue in the liver without destroying the portal structures and vasculature therein. The method400can optionally include scanning a focal region or focus of the ultrasound source across a tissue mass while pulsing HIFU waves and lysing cells (block408). When the treatment site is larger than the focal region of the ultrasound source, the focus of the ultrasound source can be mechanically or manually moved to an adjacent tissue region where the pulsing protocol can again be implemented to lyse cells while at least substantially preserving ECM of the treated tissue region. Accordingly, the method400can be used to decellularize large tissue masses in vivo or ex vivo. The bare ECM remaining after HIFU therapy can provide a naturally-derived pre-vascularized three-dimensional scaffold that can be used to regrow tissue. For example, if decellularized outside of the body, the scaffold can be implanted in the body and injected with cells (e.g., stem cells) to regenerate the tissue or organ. In certain embodiments, the cells may be in or on a carrier (e.g., a gel) and the carrier can be disposed on the decellularized scaffold. Alternatively, the decellularized scaffold can be injected with cells ex vivo to regenerate the tissue or organ, and then can be implanted. In other embodiments, the tissue mass is decellularized in vivo, and healthy tissue can regenerate on the decellularized scaffold (e.g., with or without additional cell injection). FIG.5is a block diagram illustrating a method500of forming a decellularized scaffold configured in accordance with an embodiment of the present technology. The method500can be implemented with the HIFU system100ofFIG.1and/or other suitable HIFU systems and can be performed on a tissue mass in vivo or ex vivo. As shown inFIG.5, the method500can include pulsing HIFU energy from an ultrasound source toward a volume of tissue including an ECM (block502). The HIFU energy can be pulsed in accordance with the pulsing protocols described above to mechanically disrupt the tissue using either boiling histotripsy or cavitation histotripsy techniques. The method500continues by lysing cells of the tissue with the HIFU energy to at least substantially decellularize the volume of tissue (block504). This step leaves the ECM intact such that the ECM provides a decellularized scaffold that can be used to subsequently regenerate tissue (block504). The emulsification of most or all cells within the tissue volume can occur within minutes. Accordingly, the ultrasound source can be scanned across a tissue mass or organ to at least substantially decellularize the entire tissue mass or organ within a relatively short period of time (e.g., within several hours). Unlike artificial scaffolds used in regenerative medicine, the biomimetic structures formed using this method500provide a naturally-derived, pre-vascularized, three-dimensional structure on which new tissue can grow. In addition, the method500is expected to take much less time to decellularize the tissue than chemically-induced perfusion decellularization. 
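The pulsing-protocol parameters discussed above (peak positive pressure, pulse length, duty cycle, pulse repetition frequency) can be collected into a simple record for bookkeeping. The following is a minimal sketch and not software from the disclosure; the class, the 1.5 MHz operating frequency, and the validation limits merely restate the quoted ranges as illustrative assumptions.

```python
# Minimal sketch (not from the disclosure) of a pulsing-protocol record reflecting the
# parameter ranges discussed above: peak positive pressure of roughly 30-125 MPa,
# pulse lengths up to 100 ms, and a duty cycle of 5% or less for sparing the ECM.
from dataclasses import dataclass

@dataclass
class PulsingProtocol:
    frequency_mhz: float                 # operating frequency (assumed value below)
    peak_positive_pressure_mpa: float    # shock amplitude at the focus
    pulse_length_ms: float
    duty_cycle: float                    # fraction, e.g. 0.01 for 1%

    def pulse_repetition_frequency_hz(self) -> float:
        # duty cycle = pulse length * PRF, so PRF = duty cycle / pulse length
        return self.duty_cycle / (self.pulse_length_ms / 1000.0)

    def check(self) -> None:
        assert 30.0 <= self.peak_positive_pressure_mpa <= 125.0, "outside quoted pressure range"
        assert self.pulse_length_ms <= 100.0, "pulse longer than quoted maximum"
        assert self.duty_cycle <= 0.05, "duty cycle above the ~5% ceiling"

# Example roughly matching the text: ~75 MPa shocks, 10 ms pulses, 1% duty cycle.
protocol = PulsingProtocol(frequency_mhz=1.5, peak_positive_pressure_mpa=75.0,
                           pulse_length_ms=10.0, duty_cycle=0.01)
protocol.check()
print("PRF = {:.1f} Hz".format(protocol.pulse_repetition_frequency_hz()))  # 1.0 Hz
```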
As further shown inFIG.5, the method500can optionally include perfusing a volume of the tissue with a perfusion agent (e.g., a chemical and/or enzymatic detergent) to further decellularize the tissue (step506). The perfusion can occur before, during, and/or after the HIFU decellularization. For example, HIFU can be used to partially decellularize an organ ex vivo and the remaining cells can be removed via perfusion. Once the ECM is decellularized, the method500can continue by disposing cells (e.g., stem cells) on the decellularized scaffold formed by the bare ECM to regrow tissue on the decellularized scaffold (block508). In certain embodiments, the decellularized scaffold can be formed ex vivo, implanted in the body of a human patient, and then cells can be injected into the decellularized scaffold to regenerate the tissue on the ECM. In other embodiments, the regrowth of the tissue on the decellularized scaffold is performed ex vivo, and the regrown tissue mass (e.g., an organ) can be implanted in the human body. The bare composition of the naturally-derived scaffold is expected to have only a relatively weak immune response from the host when implanted within the body. EXAMPLES 1. A method of treating tissue, the method comprising:pulsing high intensity focused ultrasound (HIFU) waves from an ultrasound source toward a volume of tissue that includes an extracellular matrix (ECM);generating, from nonlinear propagation of the HIFU waves, shock waves in the tissue to induce boiling in the volume of the tissue at a focus of the ultrasound source; andlysing cells in the volume of tissue, via the HIFU waves, while leaving the ECM at least substantially intact.2. The method of example 1 wherein the focus of the ultrasound source is positioned a depth within the tissue, and wherein generating shock waves in the tissue includes generating shock waves having a peak positive pressure at the focus of at least 50 MPa.3. The method of example 1 or example 2 wherein pulsing HIFU waves comprises pulsing HIFU waves such that each pulse has a duration of 0.1-100 ms.4. The method of any one of examples 1-3 wherein pulsing the HIFU waves further comprises pulsing the HIFU waves at a duty cycle of at most 5%.5. The method of example 1 wherein:the ultrasound source has a power of 250-5,000 W;pulsing the HIFU waves further comprises pulsing the HIFU waves at a duty cycle of at most 5%, wherein individual pulses have a pulse length of 1-10 ms;generating shock waves further comprises generating shock waves having a peak positive pressure of 50-150 MPa; andlysing cells in the volume of tissue comprises preserving vessels in the volume of tissue having diameters of less than 50 μm.6. The method of any one of examples 1-5 wherein lysing cells in the volume of tissue further comprises at least substantially decellularizing the tissue to create a scaffold for subsequent cell regrowth.7. The method of any one of examples 1-6 wherein the volume of tissue is part of an ex vivo organ, and wherein:lysing cells in the volume of tissue further comprises at least substantially decellularizing the volume of tissue; andthe method further comprises moving a focus of the ultrasound source across the organ while pulsing HIFU waves and lysing cells to create a decellularized scaffold of the ex vivo organ.8. The method of example 7, further comprising disposing cells on the decellularized scaffold to re-grow the organ.9. 
The method of any one of examples 1-6 wherein the volume of tissue is part of an in vivo tissue mass, and wherein:lysing cells in the volume of tissue further comprises at least substantially decellularizing the tissue; andmoving a focal region of the ultrasound source across the tissue mass while pulsing HIFU waves and lysing cells to create a decellularized scaffold.10. The method of any one of examples 1-9 wherein emulsifying the cells in the volume of tissue while leaving the ECM at least substantially intact further comprises forming a lesion in the tissue having a volume of at least 1 cm3.11. The method of any one of examples 1-10 wherein the shock waves in the tissue are distinct from shock waves resulting from cavitation.12. A method of treating tissue, the method comprising:applying, via an ultrasound source, high intensity focused ultrasound (HIFU) energy to a target site in tissue in accordance with a pulsing protocol, whereinthe individual pulses have a length of 0.1-100 ms,the HIFU energy generates shock waves proximate to the target site in the tissue to induce boiling of the tissue at the target site, andthe target site is proximate to a focal region of the ultrasound source; andforming a lesion in the tissue with the HIFU energy while preserving an extracellular matrix (ECM) in the lesion.13. The method of example 12 wherein forming the lesion in the tissue comprises at least decellularizing the tissue such that the ECM within the lesion is at least substantially free of cells.14. The method of example 12 or example 13 wherein applying HIFU energy to the target site comprises:pulsing HIFU waves at a duty cycle of at most 5%; andgenerating shock waves having a peak positive pressure of 50-150 MPa.15. A method of forming decellularized scaffolds, the method comprising:pulsing high intensity focused ultrasound (HIFU) energy from an ultrasound source toward a volume of tissue, wherein the volume of tissue includes an extracellular matrix (ECM); andlysing cells of the tissue with the HIFU energy to at least partially decellularize the volume of tissue while leaving the ECM at least substantially intact to form a decellularized scaffold for subsequent tissue growth.16. The method of example 15 wherein pulsing HIFU energy toward the volume of tissue comprises generating, from nonlinear propagation of HIFU waves, shock waves in the tissue to induce boiling in the tissue.17. The method of example 15 wherein pulsing HIFU energy toward the volume of tissue comprises applying cavitation histotripsy to form a lesion in the volume of tissue.18. The method of any one of examples 15-17, further comprising moving a focal region of the ultrasound source across portions of the tissue while pulsing the HIFU energy and lysing cells to form a lesion in the tissue having a volume of at least 1 cm3.19. The method of any one of examples 15-18 wherein the tissue is part of an ex vivo organ of a human body, and wherein emulsifying cells comprises decellularizing the ex vivo organ.20. The method of example 19, further comprising disposing cells on the decellularized scaffold to re-grow the organ.21. The method of any one of examples 15-20, further comprising perfusing vessels of the tissue with a decellularization detergent to further decellularize the tissue.22. The method of example 21 wherein perfusing vessels of the tissue with a decellularization agent occurs while the HIFU energy is applied to the tissue.23. 
The method of any one of examples 15-22 wherein the tissue is part of an in vivo organ of a human body, and wherein lysing cells comprises decellularizing at least a portion of the in vivo organ.24. The method of any one of examples 15-23 wherein pulsing HIFU energy further comprises applying HIFU energy to the volume of tissue in accordance with a pulsing protocol having a duty cycle of less than 5% and a pulse duration of at most 100 ms.25. A high intensity focused ultrasound (HIFU) system for forming decellularized scaffolds, the HIFU system comprising:an ultrasound source having a focal region and configured to deliver HIFU waves to a target site in tissue of a subject; anda controller having a function generator operably coupled to the ultrasound source, wherein—the controller comprises a non-transitory memory that includes a pulsing protocol for delivering HIFU energy with the ultrasound source, wherein the pulsing protocol has a pulse length of 0.01-100 ms, and a duty cycle of less than 10%,the controller is configured to cause the ultrasound source to pulse HIFU waves to lyse cells in a volume of the tissue of the subject while preserving an extracellular matrix (ECM) in the volume of the tissue exposed to the HIFU waves, andthe ECM defines a decellularized scaffold.26. The HIFU system of example 25 wherein the pulsing protocol of the controller has a peak positive pressure of at least 70 MPa.27. The HIFU system of example 25 or example 26 wherein the pulsing protocol of the controller has a pulse length of at most 10 ms.28. The HIFU system of any one of examples 25-27 wherein the pulsing protocol of the controller has duty cycle of at most 4%.29. The HIFU system of example 25 wherein the HIFU energy generates shock waves at the focal region in the tissue to induce boiling in the tissue.30. The HIFU system of example 25 wherein the HIFU energy generates cavitation bubbles in the tissue at the focal region.31. The HIFU system of example 25 wherein the pulsing protocol of the controller has a peak negative pressure at the focal region of −15 MPa or less. CONCLUSION From the foregoing, it will be appreciated that specific embodiments of the technology have been described herein for purposes of illustration, but that various modifications may be made without deviating from the disclosure. For example, the HIFU system100ofFIG.1can include additional devices and/or systems to facilitate selectively fragmenting tissue volumes. For example, the HIFU system100can include additional amplifiers, high-pass or other suitable filters, perfusion decellularization systems, and/or other suitable devices related to HIFU and decellularization of tissue masses. Certain aspects of the new technology described in the context of particular embodiments may be combined or eliminated in other embodiments. Additionally, while advantages associated with certain embodiments of the new technology have been described in the context of those embodiments, other embodiments may also exhibit such advantages, and not all embodiments need necessarily exhibit such advantages to fall within the scope of the technology. Accordingly, the disclosure and associated technology can encompass other embodiments not expressly shown or described herein. Thus, the disclosure is not limited except as by the appended claims.
44,769
11857814
Those skilled in the art will appreciate that elements in the Figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, dimensions may be exaggerated relative to other elements to help improve understanding of the invention and its embodiments. Furthermore, when the terms ‘first’, ‘second’, and the like are used herein, their use is intended for distinguishing between similar elements and not necessarily for describing a sequential or chronological order. Moreover, relative terms like ‘front’, ‘back’, ‘top’ and ‘bottom’, and the like in the Description and/or in the claims are not necessarily used for describing exclusive relative position. Those skilled in the art will therefore understand that such terms may be interchangeable with other terms, and that the embodiments described herein are capable of operating in other orientations than those explicitly illustrated or otherwise described. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT The following description is not intended to limit the scope of the invention in any way, as it is exemplary in nature, serving to describe the best mode of the invention known to the inventors as of the filing date hereof. Consequently, changes may be made in the arrangement and/or function of any of the elements described in the exemplary embodiments disclosed herein without departing from the spirit and scope of the invention. Referring now toFIG.1A, a front view of the system100of the invention applied to a subject individual around the chest, is shown. System102includes substantially rigid, less flexible backing material to maintain pattern form on application of colour or pigment onto the mammalian body using system100. As referred to herein, the systems100, et al., are used on the human body; however, the system(s)100, et al., are also used on other animals, e.g. fish, insects, reptiles, birds, etc. System100further includes upper overspray panel124and lower overspray panel128, with one or more intermediate overspray panels disposed/inserted therebetween. On system100, thin nylon fish line links104(of course, other polymeric materials are also used in the invention, as well as natural, bio-compatible materials) are provided to prevent deformation of pattern elements on application; the links are under tension or pressure in a native state. One or more patterns are provided across the front and back panels of system100, as indicated by ornamental patterns114,118onFIG.1A. It is appreciated that the masks are single-use disposable in one variant of the invention, and in another variant of the invention the masks are re-useable. In the context of the masks being re-useable, they are easily cleaned with solvents that are also biocompatible since the sprays used on the human body must be biocompatible so as not to irritate the skin. Referring now toFIG.1B, a top view of the mask of the system100ofFIG.1A, laid flat, is shown. Again, a plurality of aperture-laden patterns114,116are distributed in series across the length of system100. Similarly, aperture-laden patterns130,132,134,136have the same or different aperture designs thereon, and are symmetrically or asymmetrically distributed across the front, side and back panel portions of the system100. It is appreciated that for larger designs the overspray panels128,124, and intervening panels, e.g. masks, are made of latex, neoprene, spandex, rubber or rubberized material, or other suitable flexible body conforming material. 
In other embodiments, the mask is made of spandex or other stretchable, elastic material. In another embodiment, the mask is made of nylon stocking material. Within the system100are subsystems110,110′ of various stencil designs. These subsystems110,110′ differ in elasticity from the other portions of the masks of the system100, generally being more rigid, or less flexible, while being able to at least partially conform to a portion of the human anatomy while not distorting the artistic design of the stencil and apertures thereof. To achieve this form of construction, the subsystems include fishnet material102, through which spray colouring can pass, fish line or thin thread material104,106, through which colour, e.g. spray, can pass, as well as other substantially rigid material108(a backing, for example, which can be flat but is preferably formed so as to conform to the morphology of the body in that area so as to press against the skin at the painting edge) forming the stencil design102; these are used to keep various elements of the stencil/mask in orientation to one another, permitting the formation of the appropriate design on the skin. The elasticity of the combination of fishnet102, or thread104,106, and/or rigid material108is optimally selected to match that of the material of the mask. In the variant of subsystem110′, fishnet material102′ has large apertures therebetween so that no or very limited interference with the application of body spray paint to the skin is achieved; similarly, a network of very thin threads104′,106′ is used to hold various stencil design elements in place. On or near the proximal end140of system100is located one or more re-openable interlocking fasteners which can be Velcro™ brand elements112,116,120which are spaced horizontally along the panel of system100to provide for adjustability of the system to fit more than one size of human body portion. It is appreciated that the stencil is therefore infinitely adjustable around the torso of a human. Where stencil forms on the chest and on the back of the individual should be centered, it is best that the stencil be comprised of two elements which interlock around the individual, so that the pattern can be centered front and back essentially regardless of the girth of the individual. This assumes that the overlapping interlocks provide for interlocking over a significant circumferential range. This also helps ensure that the stencil will not be significantly circumferentially stretched/deformed, so that the stencil pattern is not significantly deformed. At the distal end142of the system is a Velcro fastener142that mates with fastener140. The interlocking is preferably made through a Velcro interlock, but other mechanical interlocking means, such as buttons, clasps, clips, etc., may be used. Further, adhesive and even magnetic interlocking means may be used to hold the mask in place against the skin. So a variety of fastener means may be used in the invention. Note that to accommodate differing girths and sizes, in order to center patterns on the front and rear of the wearer, the mask is made up of two parts which interconnect on the sides of the wearer, with sufficient interlocking positions to accommodate a wide range of sizes with a single mask. 
Referring now toFIGS.2A and2B, a three-dimensional body painting system200is shown, including a perspective view of two masks of the invention, which, if both used on a subject individual, create an interesting alternating pattern on the individual when two different colors and/or applications, such as color and glitter (or other body adorning material), are used. The system200includes neck overspray panel204which, of course, includes an aperture for the placement of a human neck therein. The panel204is fastened around the neck with Velcro element202. Similarly, on the left sleeve overspray panel208is provided Velcro fastener206. Chest overspray panel212is securely fastened into place around the chest of the user with Velcro fastener210. Open spray areas214are provided as shown. It is appreciated that on system200, the other stencil sub-systems as shown inFIGS.1A and1Bare also provided. One or more pattern masks218are also provided on system200. Right sleeve overspray panel216is also joined by Velcro fastener220. In the variant of system200, inFIG.2B, spray areas250are provided with various patterns being offset. This allows for an overlapping pattern effect and elements thereof to be provided. It is also possible to provide for an overlapping checkered pattern, by way of example. Referring now toFIG.3, a front view of the system300of the invention is shown. Body spray paint is shown being applied so that it defines a bikini bottom area312for painting on a subject individual. System300of the invention is sized, dimensioned and constructed with elements illustrated in the figures above, and body art spray can302is used in a method of the invention. The system300includes a bottom portion304that includes a right thigh mask308and a left thigh mask306. For example, right thigh mask308is provided with a Velcro fastener to join two ends of the mask. Left thigh mask306is provided with a Velcro fastener to join two ends of the mask. Waist mask310is provided with a Velcro fastener to join two ends of the mask. An exemplary method, as applied to the bikini area (but which of course applies to any other area of the body also), is as follows: a method for applying a decorative paint to the epidermis of a human, the method comprising the steps of: applying a stencil having a decorative pattern formed therein around an appendage of a subject individual (in this figure the upper thighs and waist), masking thereby a predetermined area of the epidermis, the stencil comprising a removable interlocking overlapping panel for enabling easy removal of the stencil, the stencil itself providing an opaque essentially non-absorbent layer which protects the underlying epidermal area from an applied body paint, defining further an epidermal painting area; covering the epidermal painting area and a portion of the non-absorbent layer of the stencil with a predetermined amount of an epidermal painting material to cause the epidermal painting material to contact the epidermis only via the decorative pattern masked by the stencil; allowing the epidermal painting material to dry, a portion of the epidermal painting material covering the portion of the epidermis that is coextensive with the decorative pattern masked by the stencil to form a decorative painting in the epidermis in the form of the decorative pattern; and removing the stencil from the subject individual, thereby leaving the painted decorative pattern. Referring now toFIG.4, an example of a sub-system400bridging element410that supports two masks402,406is shown. 
The masks402,406are disposed at defined locations with respect to one another while enabling color spraying thereunder. In this variant, pattern mask402is provided. A raised link410connects the pattern mask402to pattern mask406, and therebetween rests a region of skin408which can be painted; otherwise it would be masked by the link that connects masks402,406. It is appreciated that one or more stencil designs herein utilize one or more or a network of raised (alone or in combination with non-raised bridging elements) bridging elements in a matrix to create very elaborate and detailed stencils on the human body in combinations heretofore unseen by the human eye. The height above the skin at which the raised link portion, which runs parallel to the skin, is placed varies depending upon the location at which the sub-system is used. It is appreciated that the height is variable, and can be from a millimetre upwards. Referring now toFIG.5, an example of sub-system500including another simple, nylon fishing line bridging element504is shown, which, at least to a significant extent, allows for painting thereunder or which is sufficiently thin so as not to effectively mask any significant portion of the skin thereunder. Sub-system500includes mask502which is connected to mask506by bridging element504. Mask502(and other masks) are provided with a thickness of material “t” which raises bridging element504sufficiently above the skin so that aerosol spray particulates can be deposited on the skin under bridging element504. Adhesive pads508,510are used to fix bridging element504to the respective masks502,506. As with sub-system400, it is appreciated that one or more sub-systems500are used to create one or more stencil designs herein, and utilize one or more or a network of raised (alone or in combination with non-raised bridging elements) bridging elements in a matrix to create very elaborate and detailed stencils on the human body in combinations heretofore unseen by the human eye in a manner that reduces the need for skilled artisanship and decreases time. It is further appreciated that sub-systems400,500(alone or in combination with other features of the invention) are used alone or in combination. In other variants of the invention, the sub-systems are used in a vertically stacked manner to provide for shading or areas of differing paint particulate deposits on the skin, creating an even more detailed stencil design. In other variants, the sub-systems are used in prepositioned locations on the other systems of the invention, e.g. system100, system200, system300, etc. One or more sub-systems are placed in series or parallel along systems100,200,300, in yet further variants of the invention. Referring toFIG.6, a kit600is shown, including body paint spray602, fixing spray604, one or more individual mask templates606,608for application anywhere on the skin, the3D masks610of the invention, and instructions for use including video demonstrations on DVD612, for example. One or more elements of the kit600are used, alone or in combination with one or more other elements to form the kit depending on the specific body region that the particular kit is being used for, e.g. torso, waist, bikini area, legs, alone or in combination. In an embodiment, one or more masks of the invention are made of latex. In other embodiments, the mask is made of spandex or other stretchable, elastic material. In another embodiment, the mask is made of nylon stocking material. 
The interlocking is preferably made through a Velcro interlock, but other mechanical interlocking means, such as buttons, clasps, clips, etc., may be used. Further, adhesive and even magnetic means may be used to hold the mask in place against the skin. It is further appreciated that kit600can include other elements to help round out the image or character created by the body art of the human. It is appreciated that this kit600is particularly useful during festivals, e.g. Carnavale in Brazil, and in other countries, as well as a myriad of other events and festivals. With that in mind, the kit further optionally includes body painting colour material and brushes; make-up and supporting application brushes; costume apparel; hair styling elements, in which the hair styling elements are selected from the group consisting of hair spray, hair colour spray, and a head dress; and footwear. It should also be appreciated that the invention can be used in a process by which the stencil pattern is uploaded via the internet by the individual, and the system then manages the custom cutting (water or laser cutting for example) of the particular stencil pattern, along with other production, ancillary promotional, packaging, and mailing steps. The individual can also order any desired hair and body paint colors or glitter, to complete a desired order. A further embodiment of the invention is shown inFIGS.7A to9B. In this embodiment, in a comparable manner to that described inFIGS.2A-3, a full bikini pattern design is applied using the stencil arrangement20e. This arrangement includes a shoulder mask702, and a more extensive torso mask704. A tab706connects the shoulder and the torso mask using a removable attachment device, such as tape, a snap, or hook and loop devices such as Velcro strips. The thigh masks306,308are connected using an attachment device such as a tab708as well. Referring now toFIGS.10A and10B, in another embodiment1000, wide elastic bands1002, typically from 20 mm to 100 mm in width, preferably 50 mm in width, and between 1 mm and 2 mm in thickness, replace the more complicated forms of the preceding embodiment. The elastic band material can be latex or rubber, or other elastic materials, but is preferably that used for the tightly woven waistband for underwear such as boxer shorts. Such a tightly woven band is essentially impermeable to brushed-on or sponged-on applications such as body paint. The cut ends of the band material may be sewn, or a crimpable trim component (not shown) may be placed over and crimped on the end to give it a pleasing aesthetic appearance. Alternatively, the ends may be sealed by melting the material locally at the edge with a soldering iron. At least one upper band1004and at least two thigh bands1006are required to mask a bathing suit form. In this embodiment, the upper band1004, here disposed around the waist, includes a buckle1012(such as the one shown inFIG.12E), allowing removal without disturbing freshly laid paint. The elasticity of the upper band1004allows the thigh bands1006to be tucked under the band1004and thereby held in place in an adjustable manner, where the portion of the thigh band1006which extends above the upper band1004may be pulled to draw up the band around the thighs, thereby creating a defined masked portion of a bikini bottom. The thigh bands1002need not be bands but could be a truncated stocking portion where a tab on its side tucks under or otherwise attaches to the upper band1004. 
This is possible because removal does not disturb the freshly painted area as the thigh portion is removed downwardly away from the painted area. In addition, the upper band1004may be easily removed by undoing the buckle1012. In order to mask a bikini form, two further elastic bands1014,1016are required, which mask the bikini top area. These too are held around the body with a buckle1012. An upper chest band1014is adapted to be positioned above the breast. A lower chest band1016is adapted to be positioned below the breast. The bands1014,1016optionally have a fastening arrangement, in this case, two snaps or holes1020lined by eyelets through which a string1102(shown inFIG.11) can pass to draw the two bands together in the area between the breasts. Referring now toFIG.10D, where snaps are used, the two bands may be snapped together in order to draw them together as shown. Referring now toFIG.11, in another variant1100, the thigh bands1002are sewn, snapped, tied, stapled, interlocked, connected with hook and loop connectors, glued, fused, or welded in place, such that other than the inherent flexibility of the bands1002, the thigh bands1002are not adjustable. Some adjustability is obtained via the elasticity of the band material and from the band1004as the buckle1012can be positioned on the band to be tighter or looser as the user prefers. Note that in this variant, a modified upper band1004′ is shown having openings whose edges define a decorative pattern1120(in this case a star with two parallel lines), such as a logo or brand such as a sports team name, and which is held flat by a netting1130. The wider the band1004′ is, the larger the decorative pattern can be. As an alternate variant to the above embodiment described with reference toFIG.11, netting1132,1133can be applied between the upper and lower chest bands, as well as between the waist band and thigh bands, creating a netted underwear that is suitable as a stencil for applying a treatment to the netted area, such as bodypainting. In this variant, the buckle1012is not required, but still aids in removal particularly where the netting is elastic or loosely attached in the area of the buckle and where the netting is discontinuous in the area of the buckles1012on the upper breast stencil so that the two straps and the netting spanned therebetween may be wrapped around the torso and buckled together. Preferably, the netting is highly elastic and made highly undersized (much smaller than the anticipated user's body) so that when donned, it stretches into a snug skin-tight relationship to the body, thereby permitting the netting or lace patterns to act as decorative stencils which provide an attractive negative pattern on the skin when treated (preferably with an airbrush or a pump sprayer, or sponge). Note that using this variant, the resulting application will leave a negative image of the netting on the body, which in some cases may not be desired, but in other cases, may provide a nice patterned effect (e.g. where the netting is a decorative lace pattern). In addition, if removed when the application material is wet, this may disturb the freshly painted area, causing smearing. However, as mentioned, the buckles1012help minimize this problem. Note that this variant may be used as a separate stencil in addition to other stencils mentioned herein, in order to provide the netting or lace pattern to be applied to the body, creating in particular an attractive lingerie pattern. 
Note that the front view of this embodiment is essentially identical in appearance toFIGS.16B and16C, except that the netting or lace spans the upper chest bands and the waist and thigh bands as shown inFIG.11. Referring now toFIGS.12A,12B and12C, in another variant1200, the connection between the upper band1004and the thigh bands1002is effected with a double buckle1202(shown in more detail inFIG.12D) which allows the upper band1004to pass through, while providing cross bars1204around which ends1206of the thigh bands1002pass and then under the upper band1004and the double buckle loop1210. This allows a user to slide the double buckle1202along the upper band1004to a desired position, and to pull or release the ends1206of the thigh bands1002to adjust the fit of the thigh bands in order to obtain a more optimal and adjustable fitting of the stencil system of the invention. Referring now toFIG.13A, in another variant1300, bands1004or1016are not required should a one-piece swimsuit form be masked. Instead, the thigh bands1002are held up with a strap1302, preferably attached to a junction point1304of the thigh bands1002, or straps (not shown) which go over the shoulders and reattach to the junction point1304. Referring now toFIG.13B, another way to delimit a single-piece bathing suit form uses a netting, as shown in the perspective view of yet another embodiment of the invention using netting. Referring now toFIG.13C, the netting is open on the rearward side along a length that extends from the lower back to the areas adjacent the buckle. This enables the opening of the buckle1012and the removal of the stencil much like a peeled banana, with the slit1340facilitating removal without disturbing any freshly painted areas. Because the lower straps1002can be removed by sliding them down the legs, this does not disturb freshly painted areas. Referring now toFIGS.14A-14E, another embodiment1400of the invention uses a bulbous end portion1402of the lower thigh straps1404to prevent it from slipping below the waist strap1004. Adjustment of the thigh straps1404can be performed by pulling up on the bulbous ends1402, assuming that the waist strap1004is sufficiently tightened to prevent the thigh strap1404from slipping. Referring now toFIGS.15A-15C, yet another embodiment1500of the invention uses a single strap1502for the lower bikini area. In this embodiment, a D-Ring1504enables the single strap1502to be passed therethrough and diverted around a thigh, and then passed again therethrough, to continue its function as a waist strap, on each hip of the wearer. The buckle1506, as in all embodiments, can be located at the rear or the front, on the belly of the wearer, as is convenient to the wearer or the artist applying the treatment. Note that the front view and the rear view of this embodiment are identical to those shown inFIGS.16B and16C. Referring now toFIGS.16A-16E, an alternate embodiment1600of the invention uses another arrangement of a single strap1602for the lower bikini area. A buckle1604, typically made of plastic (although metal may also be used), is preferably used in this embodiment, and the tying of the strap is essentially the same as the above embodiment (FIG.16Eis drawn to scale). Referring now toFIGS.17A-17E, another alternate embodiment1700uses an alternate hip buckle1702of the invention, but does not use a single strap for the bikini area. Instead, the bikini area stencil is again made up of three separate stencils, namely, the waist stencil1704and the two thigh stencils1706. 
Advantageously, with separate straps1704and1706, the lower bikini stencil is easier to adjust because pulling on the loose ends1708adjusts a single thigh stencil and does not require adjustment of a single strap including both the waist strap and the thigh straps.FIG.16Eis also drawn to scale. It should be appreciated that the particular implementations shown and herein described are representative of the invention and its best mode and are not intended to limit the scope of the present invention in any way. Moreover, the system contemplates the use, sale and/or distribution of any goods, services or information having similar functionality described herein. As will be appreciated by skilled artisans, the present invention may be embodied as a system, a device, or a method. The specification and figures should be considered in an illustrative manner, rather than a restrictive one, and all modifications described herein are intended to be included within the scope of the invention claimed. Accordingly, the scope of the invention should be determined by the appended claims (as they currently exist or as later amended or added, and their legal equivalents) rather than by merely the examples described above. Steps recited in any method or process claims, unless otherwise expressly stated, may be executed in any order and are not limited to the specific order presented in any claim. Further, the elements and/or components recited in apparatus claims may be assembled or otherwise functionally configured in a variety of permutations to produce substantially the same result as the present invention. Consequently, the invention should not be interpreted as being limited to the specific configuration recited in the claims. Benefits, other advantages and solutions mentioned herein are not to be construed as critical, required or essential features or components of any or all the claims. As used herein, the terms “comprises”, “comprising”, or variations thereof, are intended to refer to a non-exclusive listing of elements, such that any apparatus, process, method, article, or composition of the invention that comprises a list of elements does not include only those elements recited, but may also include other elements described in the instant specification. Unless otherwise explicitly stated, the use of the term “consisting” or “consisting of” or “consisting essentially of” is not intended to limit the scope of the invention to the enumerated elements named thereafter. Other combinations and/or modifications of the above-described elements, materials or structures used in the practice of the present invention may be varied or adapted by the skilled artisan to other designs without departing from the general principles of the invention. The patents and articles mentioned above are hereby incorporated by reference herein, unless otherwise noted, to the extent that the same are not inconsistent with this disclosure. Other characteristics and modes of execution of the invention are described in the appended claims. Further, the invention should be considered as comprising all possible combinations of every feature described in the instant specification, appended claims, and/or drawing figures which may be considered new, inventive and industrially applicable. 
Copyright may be owned by the Applicant(s) or their assignee and, with respect to express Licensees to third parties of the rights defined in one or more claims herein, no implied license is granted herein to use the invention as defined in the remaining claims. Further, vis-à-vis the public or third parties, no express or implied license is granted to prepare derivative works based on this patent specification, inclusive of the appendix hereto and any computer program comprised therein. Additional features and functionality of the invention are described in the claims appended hereto. Such claims are hereby incorporated in their entirety by reference thereto in this specification and should be considered as part of the application as filed. Multiple variations and modifications are possible in the embodiments of the invention, described here. Although certain illustrative embodiments of the invention have been shown and described here, a wide range of changes, modifications, and substitutions is contemplated in the foregoing disclosure. While the above description contains many specific details, these should not be construed as limitations on the scope of the invention, but rather exemplify one or another preferred embodiment thereof. In some instances, some features of the present invention may be employed without a corresponding use of the other features. Accordingly, it is appropriate that the foregoing description be construed broadly and understood as being illustrative only, the spirit and scope of the invention being limited only by the claims which ultimately issue in this application.
29,532
11857815
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS Same or similar components in different figures are provided with the same reference numerals. The representations in the figures are schematically presented. FIG.1shows a protective equipment100for protecting a person150from an emergency situation. The protective equipment100may have a securing device101, which may be portable (or wearable) by the person150, having a securing system102for protecting the person150from an emergency situation. Furthermore, the protective equipment100may have a sensor device103for detecting a dangerous situation prior to (or before) the emergency situation of the person150, wherein the sensor device103may be coupled with the securing system102directly or indirectly via a control unit105in such a way that upon detection of the dangerous situation of the person150, the sensor device103may automatically set the securing system102in a securing state for protecting the person150. In the exemplary embodiment inFIG.1, the person150may be located on a working area170. The working area170may represent, for example, an elevated platform, such as a container. A danger point104may be defined on the edges and/or borders of the work area170, since the person150may fall (or crash) over the edge upon passing over the danger point104. In this respect, a danger point104may be defined at a distance from the edge. In this way, it may be defined already before the fall of the person150that a dangerous situation may exist and the securing system may have to be activated. The person150inFIG.1may wear, for example, a rope-up protection, in particular a safety belt, as a securing device101. The securing system102may have a fall protection device (or fall arrester). The fall protection device may be equipped with a safety rope107, and may be configured to control the length of a rope length of the safety rope107between the portable securing device101and a securing point108for fixing the safety rope107. The fall protection device may be coupled with the sensor device103in such a way that the rope length may be controllable in dependency of the detection of the dangerous situation. In this case, the fall protection device may have, for example, a rope winch which may, for example, be operated electrically. The safety rope107may be wound onto the rope winch. If the sensor device103detects a dangerous situation, for example if a distance to a fall point (e.g. distance between reference point104′ and danger point104) is undershot or a fall speed is detected, the sensor device103may control the rope winch in the securing state. In doing so, the rope winch may, for example, abruptly fix the length of the safety rope107by fixing the rope winch. In addition, the fall protection device may have a rope brake, which may reduce an unwinding speed of the safety rope107from the rope winch. The sensor device103may have at least a position sensor, an acceleration sensor and/or a motion sensor. The position sensor may be, for example, a GPS sensor which may determine GPS data from a satellite160in order to determine an exact geographical and/or spatial position of the person150. If the person150is, for example, in a dangerous situation, for example at a predetermined reference point104′ in front of a fall edge104, the securing state of the securing system may be set from this (or on this basis). Sensor device103may also include a distance sensor. 
The distance sensor is configured to measure a distance to the predeterminable reference point104′ and/or the danger point104. The reference point104′ may define, for example, a position in space which may have a particular distance to a danger point104, such as, for example, the fall edge. As soon as the distance between the person150and the reference point falls below a predetermined target value (or setpoint), the securing system102, i.e. the fall protection device, may be, for example, automatically set in the securing state. The securing system102and/or the sensor device103may be integrated and fixed in the securing device101, so that the person150permanently carries the securing system102and the sensor device103together with the securing device101. In particular, the sensor device103may send the sensor data to the monitoring station110, for example by means of a transmission unit106. In this way, for example, a dangerous situation or an emergency situation may be detected by a supervising person, even if this person may not be on site with the person150. In other words, a remote diagnosis becomes possible. FIG.2shows a protective jacket200as a securing device101. The securing system102and the sensor device103may be integrated in the protective jacket200. The securing system102may have, for example, an airbag device203. The airbag device203may be embodied to be inflatable in order to form a damping body in the inflated state, i.e. in the securing state. The airbag device203may be coupled with the sensor device103in such a way that the airbag device203may be inflatable in dependency of the detection of the dangerous situation. If, for example, the distance to an obstacle or a danger point decreases or a fall speed is detected, the sensor device103may activate the airbag device203. Since the sensor device103may already detect a dangerous situation, which may be defined in terms of time or space before the emergency situation occurs, more time may thus be available for activating the airbag device203, so that better protection may be achieved compared with pure impact detection, in which the emergency situation may have already occurred by the time the airbag is activated. The airbag device203may, for example, be controlled via control elements and/or valves206. The protective jacket200may have, for example, a medical sensor201as a sensor device103. The medical sensor201may be configured to measure a medical condition, in particular the body temperature, the respiratory rate of the person150and/or the heart rate. The medical sensor201may accordingly have, for example, a body thermometer, a respiratory measuring device, a pulse measuring device and/or a blood measuring device. For example, if the person150works in a cool working environment, there may be a risk of hypothermia (corresponding to the emergency situation). If the body temperature of the person150falls below a particular value, the dangerous situation may occur. The medical sensor201may accordingly send control commands to the securing system102in order to set the securing state. In this respect, the securing system102may have, for example, a temperature-control unit204. The securing state may be set, for example, by configuring the temperature-control unit204as a body heater to increase the body heat of the person150. 
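The decision logic described above, in which the sensor device compares measured values against predetermined targets and, upon detecting a dangerous situation, sets the securing system into the securing state, can be illustrated with a short sketch. All names, thresholds, and action labels below are illustrative assumptions, not the actual control unit105.

```python
# Illustrative sketch of the monitoring logic implied by the description above.
# Thresholds and action names are assumptions chosen only for clarity.
from dataclasses import dataclass

@dataclass
class SensorReadings:
    distance_to_reference_m: float   # distance sensor, toward reference point 104'
    fall_speed_m_s: float            # from an acceleration/motion sensor
    body_temperature_c: float        # from the medical sensor 201

def securing_actions(r: SensorReadings,
                     min_distance_m: float = 1.0,
                     max_fall_speed_m_s: float = 2.0,
                     min_body_temp_c: float = 35.0) -> list[str]:
    """Return securing actions to trigger; an empty list means no dangerous situation."""
    actions: list[str] = []
    if r.distance_to_reference_m < min_distance_m or r.fall_speed_m_s > max_fall_speed_m_s:
        actions += ["fix_rope_winch", "inflate_airbag", "send_alarm"]
    if r.body_temperature_c < min_body_temp_c:
        actions += ["enable_body_heater", "send_alarm"]
    return actions

# Example: person within 0.5 m of the reference point, normal body temperature.
print(securing_actions(SensorReadings(0.5, 0.0, 36.5)))
# ['fix_rope_winch', 'inflate_airbag', 'send_alarm']
```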
The sensor device103may also have an environmental sensor202that may be integrated in the protective jacket200for measuring environmental parameters, in particular the ambient temperature, the ambient wind, the ambient air pressure and/or the ambient humidity. By the measurement of the air pressure, for example, a change in height (fall height) may be determined and a securing state may be set, for example by tightening a safety rope107(seeFIG.1). Furthermore, an alarm device205may be arranged in the protective jacket200as a securing system102. The alarm device205may be configured to send an alarm signal that may be indicative of the dangerous situation to a monitoring station110(seeFIG.1). The alarm device205may be coupled with the sensor device103in such a way that the alarm device205may generate the alarm signal in dependency of the detection of the dangerous situation and, for example, the safety condition for protecting the person150may be settable. The alarm device205may generate, for example, an optical, acoustic or electrical warning signal. The monitoring station110may be located at a distance from the securing system102. The alarm signal may be sent, for example, wirelessly to monitoring station110. Supplementarily, it should be noted that “comprising” does not exclude other elements or steps and that the article “a” or “an” does not exclude a plurality. Furthermore, it is noted that features or steps, which are described with reference to one of the above embodiments, can also be used in combination with other features or steps of other examples described above.
LIST OF REFERENCE NUMERALS
100 protective equipment
101 securing device
102 securing system
103 sensor device
104 reference point/danger point
105 control unit
106 transmission unit
107 safety rope
108 securing point
110 monitoring station
150 person
160 GPS satellite
170 working area
200 protective jacket
201 medical sensor
202 environmental sensor
203 airbag device
204 temperature-control unit
205 alarm device
206 control element/valve
8,853
11857816
DETAILED DESCRIPTION A fall-prevention safety interlock system includes a harness interlock circuit and a safety interlock circuit. The harness interlock circuit includes a means to communicate with the safety interlock circuit to confirm that one or more safety conditions have been met. The safety interlock circuit is electrically coupled to a machine to enable and/or disable the machine based on whether the safety conditions have been met. One safety condition can include whether a fall-arrest clip at the end of a fall-arrest tether of the machine is attached to a safety attachment ring on a safety harness or safety garment worn by an individual. Another safety condition can include whether all of the buckles on the safety harness or garment are attached. FIG.1is a block diagram of a person100lifted by an example machine110to illustrate the environment in which the safety interlock system can be disposed. The example machine110is illustrated as a bucket truck or cherry picker. In other embodiments, the example machine110can be a scissor lift, an aerial lift, a boom lift, or another lift or machine. In another embodiment, the machine110can be a power-suspended platform for performing maintenance on and/or washing windows of a skyscraper. The machine110can be used in a warehouse, on a construction site, at a maintenance or repair site (e.g., while repairing telephone or electrical lines), and/or in other environments. The machine110includes an arm120that can raise and lower a bucket130to position the person100to perform a task. The arm120can be controlled via controls140on the machine110such as in the bucket130. The controls140can communicate with the machine110through a wired circuit that extends through the bucket130and arm120. When the person100is in the bucket130, the person100typically wears a safety harness150. A fall-arrest tether160is attached to the safety harness150and the bucket130to prevent the person100from falling while performing a task in the bucket130. The safety harness150includes a safety attachment ring152that can releasably receive a fall-arrest clip162at the end of the fall-arrest tether160, as illustrated inFIG.2. The attachment ring152can comprise a D-ring or another ring. The fall-arrest clip162can comprise a carabiner or another clip. In an alternative embodiment, the fall-arrest clip162can be attached to the safety harness150and the attachment ring152can be attached to the fall-arrest tether160. The safety harness150can be partially or fully integrated into a safety garment, such as a vest, a jacket, overalls, or another garment. The machine110and safety harness150include interlock circuits170,180, respectively, that can require the safety harness150to be attached to the fall-arrest tether160in order for the machine110to operate (e.g., for the controls140to work). In some embodiments, the safety harness150can include circuitry that can require that all buckles155of the safety harness are secured in order for the machine110to operate. The interlock circuits170,180can be separate from or integrated with other interlock circuits in the machine110. For example, the machine110can include an interlock circuit that requires a physical or digital key and/or the depression of a foot pedal (e.g., a dead-man pedal) to operate the machine110. An example of a schematic safety interlock circuit30in the machine110is illustrated inFIG.3. The interlock circuit30includes a safety interlock chain300that includes multiple safety interlocks. 
For example, the safety interlock chain300includes a machine ignition key interlock310, a dead man foot pedal interlock320, and a fall-arrest interlock circuit330. All interlocks310,320,330need to be satisfied (e.g., in the closed state) in order for the machine110to operate. In some embodiments, interlock310and/or320is/are optional. Safety interlock circuit30can be the same as interlock circuit170. FIG.4is a schematic diagram of a harness interlock circuit40, according to an embodiment, that can be incorporated and/or integrated into the safety harness150. The harness interlock circuit40includes a power circuit400and a transmit circuit410. The power circuit400includes a battery402and a plurality of optional mechanical switches404. The battery402can comprise a 9V battery or another battery, which can be replaced or recharged when depleted. The transmit circuit410includes an oscillator412and a harness coil414. The harness interlock circuit40can be the same as interlock circuit180. The mechanical switches404are electrically connected in series with each other. Each mechanical switch404is disposed in a corresponding buckle155of the safety harness150. The mechanical switches404are in the open state when the buckles155are disconnected and are in the closed state when the buckles155are connected or secured. The mechanical switches404can have a default state as the open state. When one or more of the mechanical switches404is/are in the open state, the power circuit400is open and power from the battery402is not provided to the oscillator412. When all of the mechanical switches404are in the closed state, the power circuit400is closed and power from the battery402is provided to the oscillator412. The optional mechanical switches404can be omitted from the harness interlock circuit40in some embodiments. When power is provided to the oscillator412, the oscillator412produces an oscillating signal that drives the harness coil414. The harness coil414includes one or more loops of wire that produce an oscillating electromagnetic signal when driven by the oscillator412. The loop(s) of wire in the harness coil414can be disposed on, in, and/or around the attachment D-ring152, for example as illustrated inFIG.5. InFIG.5, the harness coil414and attachment D-ring152are illustrated in an enlarged view while a smaller illustration of the safety harness150is included for reference. In some embodiments, a voltage sensor can be electrically coupled to the battery402to monitor the charge of the battery402. A battery-life indicator, such as a light (e.g., a light-emitting diode (LED)), can be placed on the safety harness150to indicate the energy state of the battery402. The battery-life indicator can be electrically coupled to the voltage sensor. In some embodiments, a back-up power supply can be available on the machine110(e.g., in the bucket130) in case the battery402becomes depleted while a worker is in the bucket130and elevated above ground level. FIG.6is a schematic diagram of an equipment safety interlock circuit60, according to an embodiment, that can be incorporated and/or integrated into the machine110. The equipment safety interlock circuit60includes a fall-arrest equipment monitoring coil600, an amplifier610, a relay controller620, and a safety interlock relay630. The equipment safety interlock circuit60can be the same as or can be included in interlock circuit170. For example, the safety interlock circuit60can be the same as fall-arrest interlock circuit330. 
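The behavior implied by the harness interlock circuit40and the safety interlock chain300can be summarized in a few lines of logic. The sketch below only models the boolean behavior of the series-connected buckle switches and the interlock chain; the real implementation is electrical hardware, and the function names are illustrative rather than taken from the disclosure.

```python
# Sketch of the logic implied above. In circuit 40, power reaches the oscillator and
# harness coil only when every series-connected buckle switch is closed. In chain 300,
# the machine operates only when every interlock in the chain is satisfied.
def harness_coil_energized(buckle_switches_closed: list[bool]) -> bool:
    """Series circuit: a single open buckle switch leaves the transmit circuit unpowered."""
    return all(buckle_switches_closed)

def machine_enabled(ignition_key: bool, dead_man_pedal: bool, fall_arrest: bool) -> bool:
    """Safety interlock chain 300: every interlock must be in the closed state."""
    return ignition_key and dead_man_pedal and fall_arrest

# Three-buckle harness: one open buckle keeps the harness coil from transmitting.
print(harness_coil_energized([True, True, False]))          # False
print(harness_coil_energized([True, True, True]))           # True
print(machine_enabled(ignition_key=True, dead_man_pedal=True, fall_arrest=False))  # False
```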
The fall-arrest equipment monitoring coil600includes one or more loops of wire through which an oscillating voltage (e.g., an oscillating voltage greater than or equal to a predetermined value) is induced by the oscillating electromagnetic signal produced by the harness coil414when the fall-arrest equipment monitoring coil600is within range (e.g., within a predetermined distance) of the harness coil414. When the fall-arrest equipment monitoring coil600is out of range (e.g., greater than a predetermined distance) of the harness coil414, a voltage is not induced in the fall-arrest equipment monitoring coil600or the induced voltage is lower than a predetermined value. The loop(s) of wire in the fall-arrest equipment monitoring coil600can be disposed on, in, and/or around the fall-arrest clip162, for example as illustrated inFIG.7. The amplifier610has an input612that is electrically coupled to the fall-arrest equipment monitoring coil600and an output614that is electrically coupled to an input622of the relay controller620. The amplifier610can amplify the voltage signal induced in the fall-arrest equipment monitoring coil600and can produce an ON signal when the voltage across or the current in the fall-arrest equipment monitoring coil600is greater than a predetermined minimum value, which can indicate that the fall-arrest equipment monitoring coil600is within range of the harness coil414. In addition, the amplifier610can produce an OFF signal when the voltage across or the current in the fall-arrest equipment monitoring coil600is lower than the predetermined minimum value, which can indicate that the fall-arrest equipment monitoring coil600is out of range of the harness coil414. Since the range of the harness coil414is limited, the ON signal can indicate that the fall-arrest clip162is attached to the attachment ring152. In an alternative embodiment, the amplifier610can be replaced with a voltage sensor and/or a current sensor. The amplifier610can include a pre-comparator615, a filter616, and a post-comparator617. The input of the pre-comparator615can be electrically coupled to the fall-arrest equipment monitoring coil600. The output of the pre-comparator615can be electrically coupled to the input of the filter616. The output of the filter616can be electrically coupled to the input of the post-comparator617. The output of the post-comparator617can be electrically coupled to the input622of the relay controller620. Due to the potentially-small AC signal amplitude induced by the harness coil414when the fall-arrest equipment monitoring coil600is in range, the signal is amplified and rectified. The pre-comparator615produces a square wave signal output, much larger in amplitude (e.g., potentially on the order of ×1000) and in sync with the positive peaks of the AC signal induced in the fall-arrest equipment monitoring coil600. This larger signal is elevated above the noise floor, and can be “smoothed” out by the filter circuit616. The filter circuit616produces a more even amplitude signal, which will be entirely above a certain threshold. Post comparator617has a reference voltage below this threshold, and thus will produce a constant DC signal at a desired amplitude in order to drive relay controller620. The relay controller620is configured to generate a safety signal when the voltage across and/or current in the fall-arrest equipment monitoring coil600is greater than a predetermined minimum value. 
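The amplify, rectify, filter, and compare sequence performed by amplifier610can be approximated digitally as envelope detection followed by a threshold test. The Python sketch below is a software analogue of that analog signal path, not the circuit itself; the gain, threshold, and sample values are assumed.

```python
import math

def coil_to_on_off(coil_samples, gain=1000.0, threshold=0.25):
    """Software analogue of amplifier 610 (all constants are assumptions)."""
    # Pre-comparator stage: amplify and keep only the positive peaks (rectification).
    rectified = [max(0.0, gain * v) for v in coil_samples]
    # Filter stage: smooth the peaks into a roughly constant level (a simple mean here).
    smoothed = sum(rectified) / len(rectified)
    # Post-comparator stage: a steady "ON" only when the smoothed level clears the reference.
    return "ON" if smoothed > threshold else "OFF"

# Coil in range: a small induced AC voltage (millivolt-scale sine wave).
in_range = [0.002 * math.sin(2 * math.pi * n / 100) for n in range(1000)]
# Coil out of range: essentially no induced voltage.
out_of_range = [0.0] * 1000

print(coil_to_on_off(in_range))      # ON  -> clip attached to the attachment ring
print(coil_to_on_off(out_of_range))  # OFF -> clip detached or out of range
```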
For example, the relay controller620can produce the safety signal in response to receiving the ON signal from the amplifier610. The ON signal can cause a switch625in the relay controller620to transition from a default open state to a closed state to produce the safety signal. In some embodiments, the switch625can drive a relay coil632to magnetically actuate a relay switch634in the safety interlock relay630. The safety interlock relay630has a default open state and a closed state. In the open state, the safety interlock relay630can prevent the machine110from receiving power (e.g., from a battery in the machine110) to operate, which can prevent the controls140from functioning. In the closed state, the machine110and/or controls140can receive power to operate or function. The safety interlock relay630can transition from the default open state to the closed state only in response to the safety signal from the relay controller620which can drive the relay coil632to close the relay switch634. The safety interlock relay630can include or can be electrically coupled to other safety interlocks in the safety interlock chain300such as the machine ignition key interlock310and/or the dead man foot pedal interlock320. For example, the safety interlock circuit60or the safety interlock relay630can be the same as fall-arrest interlock circuit330. Thus, harness interlock circuit40and equipment safety interlock circuit60can function together as a safety interlock system80, as illustrated inFIG.8. FIG.9is a schematic diagram of a harness interlock circuit90, according to an alternative embodiment, that can be incorporated and/or integrated into the safety harness150. The harness interlock circuit90includes a harness coil900, a plurality of vest control load resistors910, and a plurality of switches920. The harness interlock circuit90can be the same as interlock circuit180. FIG.10is a schematic diagram of an equipment safety interlock circuit1000, according to an alternative embodiment, that can be incorporated and/or integrated into the machine110. The equipment safety interlock circuit1000includes a fall-arrest equipment transceiver coil1001, an oscillator1010, a current sensor1020, a relay controller1030, and a safety interlock relay1040. The equipment safety interlock circuit1000can be the same as or can be included in interlock circuit170. For example, the equipment safety interlock circuit1000or safety interlock relay1040can be the same as fall-arrest interlock circuit330. The oscillator1010is electrically coupled to a voltage source (e.g., Vcc) in the machine110, such as a battery, to produce an oscillating signal to drive the fall-arrest equipment transceiver coil1001. The fall-arrest equipment transceiver coil1001includes one or more loops of wire that produces an oscillating electromagnetic signal when driven by the oscillator1010. The loop(s) of wire in the fall-arrest equipment transceiver coil1001can be disposed on, in, and/or around the fall-arrest clip162. The oscillating electromagnetic signal produced by the fall-arrest equipment transceiver coil1001can induce an oscillating voltage in the harness coil900when the harness coil900is within range (e.g., within a predetermined distance) of the fall-arrest equipment transceiver coil1001. 
The inductive coupling of the fall-arrest equipment transceiver coil1001to the harness coil900causes the current through the fall-arrest equipment transceiver coil1001to increase compared to the current through the fall-arrest equipment transceiver coil1001when the harness coil900is out of range (e.g., greater than a predetermined distance) from the fall-arrest equipment transceiver coil1001. In some embodiments, the oscillator1010is a variable or tunable oscillator that can tune the frequency of the oscillating electromagnetic signal that drives the fall-arrest equipment transceiver coil1001to match the resonance frequency of the circuit90and/or fall-arrest equipment transceiver coil1001to optimize power transfer to the harness coil900. The current flowing through the fall-arrest equipment transceiver coil1001can be monitored by the current sensor1020, which has an input electrically coupled to the fall-arrest coil and/or to the oscillator1010. The current sensor1020can monitor a time-averaged current through fall-arrest equipment transceiver coil1001to mask momentary spikes or drops in current, along with accounting for the periodic change in current due to the signal being AC in nature. Since the range of the fall-arrest equipment transceiver coil1001is limited, the increase in current sensed by the current sensor1020can indicate that the fall-arrest equipment transceiver coil1001is within range of the harness coil900, and thus that the fall-arrest clip162is attached to the attachment ring152. The output of the current sensor1020is electrically coupled to the relay controller1030. The relay controller1030is configured to generate a safety signal only when the current (e.g., time-averaged current or other current statistic) increases by a predetermined minimum value. The predetermined minimum value can be an absolute number (e.g., an increase of 100 mA) or it can be relative or proportional to the current (e.g., time-averaged current or other current statistic) sensed by the current sensor1020when the harness coil900is out of range. For example, the predetermined minimum value can be a percentage increase (e.g., 10% to 30%) of the current (e.g., time-averaged current or other current statistic) sensed by the current sensor1020when the harness coil900is out of range. The relay controller1030can be the same as or different than relay controller620. The safety interlock relay1040has a default open state and a closed state. In the open state, the safety interlock relay1040can prevent the machine110from receiving power (e.g., from a battery in the machine110) to operate, which can prevent the controls140from functioning. In the closed state, the machine110and/or controls140can receive power to operate or function. The safety interlock relay1040can transition from the default open state to the closed state only in response to the safety signal from the relay controller1030which can drive a relay coil1042to close a relay switch1044. The safety interlock relay1040can include or can be electrically coupled to other safety interlocks in the safety interlock chain300such as the machine ignition key interlock310and/or the dead man foot pedal interlock320. For example, the safety interlock relay1040can be the same as fall-arrest interlock circuit330. In addition, the equipment safety interlock circuit1000or the safety interlock relay1040can be the same as or different than safety interlock relay630. 
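The current-based detection performed by the relay controller1030can be illustrated with a small software model: average the sensed coil current over a window to mask momentary spikes and AC ripple, then compare the increase over the out-of-range baseline against either an absolute or a relative minimum. This is a hedged sketch under assumed values; the class name, parameters, and currents are illustrative, not taken from the disclosure.

```python
from collections import deque

class CurrentInterlockSketch:
    """Illustrative model of the relay-controller decision in circuit 1000."""

    def __init__(self, baseline_amps, min_increase_amps=0.1, min_increase_frac=0.2,
                 window=50):
        self.baseline = baseline_amps              # averaged current with the coil out of range
        self.min_increase_amps = min_increase_amps # e.g., an absolute 100 mA increase
        self.min_increase_frac = min_increase_frac # e.g., a relative 10%-30% increase
        self.samples = deque(maxlen=window)        # time-averaging window

    def add_sample(self, amps):
        self.samples.append(abs(amps))             # rectify so the AC waveform averages above zero

    def safety_signal(self, use_relative=False):
        if not self.samples:
            return False
        avg = sum(self.samples) / len(self.samples)
        if use_relative:
            return avg >= self.baseline * (1.0 + self.min_increase_frac)
        return avg >= self.baseline + self.min_increase_amps

# Hypothetical example: a 0.5 A baseline rises to about 0.65 A when the harness coil couples.
interlock = CurrentInterlockSketch(baseline_amps=0.5)
for _ in range(50):
    interlock.add_sample(0.65)
print(interlock.safety_signal())                   # True  -> relay 1040 may close
print(interlock.safety_signal(use_relative=True))  # True  (0.65 A is 30% above 0.5 A, exceeding 20%)
```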
The harness coil900includes one or more loops of wire through which an oscillating voltage is induced by the oscillating electromagnetic signal produced by the fall-arrest equipment transceiver coil1001when the harness coil900is within range (e.g., within a predetermined distance) of the fall-arrest equipment transceiver coil1001(or vice versa). When the harness coil900is out of range (e.g., greater than a predetermined distance) of the fall-arrest equipment transceiver coil1001, an oscillating voltage is not induced in the harness coil900or the induced voltage is lower than a predetermined value. The loop(s) of wire in the harness coil900can be disposed on, in, and/or around the attachment ring152. The harness coil900can be the same as harness coil414. The vest control load resistors910and respective switches920can be used to vary the load resistance across the harness coil900and on the induced voltage across the harness coil900. Each load resistor910can have a unique resistance. The vest control resistors are electrically coupled in parallel with each other. When a switch920is open, the corresponding load resistor910is not coupled to the harness coil900. When a switch920is closed, the corresponding load resistor910is coupled to the harness coil900. Varying the load resistance on the induced voltage across the harness coil900can cause a corresponding change in the current flowing through the fall-arrest equipment transceiver coil1001due to reflected impedance, and this current change can be sensed by the current sensor1020. The switches920can be controlled and/or used in any fashion that may be of use to the user. For example, one way may be a simple push button used to alert others in the work environment that the user is in need of assistance. It is important to note that other interlocks in this configuration may still be required to be implemented. For example, the mechanical switches404may be combined with this embodiment in series with one of the depicted loads910. Thus, the load resistors910can be used to send commands and/or communicate between the harness interlock circuit90and the equipment safety interlock circuit1000. Thus, harness interlock circuit90and equipment safety interlock circuit1000can function together as a safety interlock system1100, as illustrated inFIG.11. FIG.12is a schematic diagram of a harness interlock circuit1200, according to another embodiment, that can be incorporated and/or integrated into the safety harness150. The harness interlock circuit1200includes a light-sensor ring1201, an RF-enabled microcontroller1210, an antenna1220, a battery1230, and optional mechanical switches1240. The harness interlock circuit1200can be the same as or can be included in interlock circuit180. FIG.13is a schematic diagram of an equipment safety interlock circuit1300, according to another embodiment, that can be incorporated and/or integrated into the machine110. The equipment safety interlock circuit1300includes an antenna1301, an RF-enabled microcontroller1310, a relay controller1320, and a safety interlock relay1330. The equipment safety interlock circuit1300can be the same as or can be included in interlock circuit170. For example, the safety interlock circuit1300can be the same as fall-arrest interlock circuit330. The light-sensor ring1201includes a plurality of light emitters1401and a plurality of light sensors1402, for example as illustrated inFIG.14. 
The light sensors1402are configured to receive light1410emitted by the light emitters1401which may be limited to a predetermined wavelength or wavelength range. For example, the light emitters1401can emit infrared light, ultraviolet light, and/or other light. The light emitters1401preferably do not emit light in the visible spectrum. The light emitters1401can comprise LEDs, lasers, and/or other light sources. Alternatively, the light emitters1401can comprise optical fibers that emit light that passes therethrough from an external light source, which can be an LED, laser, and/or another light source. The light emitters1401can include optics and/or can produce collimated light that can be directed to only one or a predetermined number of light sensors1402. The light-sensor ring1201has a first state when each and every light sensor1402senses the light1410emitted by one or more of the light emitters1401. The light-sensor ring1201is in the first state inFIG.14. The light-sensor ring1201has a second state when at least one light sensor1402does not sense the light1410or senses less light (e.g., below a predetermined magnitude), emitted by one or more of the light emitters1401, for example as illustrated inFIG.15where an object1500blocks light sensors1402A,1402B from receiving the light1410. Object1500can be a portion of the fall-arrest clip162or an object attached to the fall-arrest clip162. The light sensors1402A,1402B have a higher resistance when the light1410is blocked by object1500compared to when the light1410is not blocked by object1500. When the light-sensor ring1201is in the first state, the light-sensor ring1201produces an output signal having a relatively low voltage. When the light-sensor ring1201is in the second state, the light-sensor ring1201produces an output signal having a relatively high voltage. The voltage of the output signal from the light-sensor ring1201is lower when the light-sensor ring1201is in the first state compared to when the light-sensor ring1201is in the second state. Conversely, the voltage of the output signal from the light-sensor ring1201is higher when the light-sensor ring1201is in the second state compared to when the light-sensor ring1201is in the first state. The low and high voltage output signals can correspond to a digital on/off signal. In other embodiments, the light-sensor ring1201can have another hollow shape such as a hollow square or cube, a hollow rectangle or rectangular prism, or another hollow body. Returning toFIG.12, the microcontroller1210has a first input1212that is electrically coupled to the output1202of the light-sensor ring1201. The microcontroller1210is configured (e.g., with program instructions stored in the memory of the microcontroller and/or with circuitry) to produce a harness status signal to transmit via the antenna1220which can be received by antenna1301. The harness status signal can have a first value when the light-sensor ring1201is in the first state and a second value when the light-sensor ring1201is in the second state. The microcontroller1210has an optional second input1214that is electrically coupled to a plurality of optional mechanical switches1240, which can be the same as mechanical switches404. The mechanical switches1240are electrically connected in series with each other. Each mechanical switch1240can be disposed in a corresponding buckle155of the safety harness150. 
The mechanical switches1240are in the open state when the buckles155are disconnected and are in the closed state when the buckles155are connected or secured. The mechanical switches1240can have a default state as the open state. When one or more of the mechanical switches1240is/are in the open state, the second input1214has a high voltage. When all of the mechanical switches1240are in the closed state, the second input1214has a low voltage. The microcontroller1210can be configured (e.g., with program instructions stored in the memory of the microcontroller and/or with circuitry) to produce a harness status signal to transmit via the antenna1220based on the voltages at the first and second inputs1212,1214. For example, the harness status signal can have the first value when the light-sensor ring1201is in the first state and all of the mechanical switches1240are in the closed state. The harness status signal can have the second value when the light-sensor ring1201is in the second state and/or at least one of the mechanical switches1240is in the open state. The optional mechanical switches1240can be omitted from the harness interlock circuit1200in some embodiments. Additional inputs to microcontroller1210can also be used to transmit other useful information to machine110, such as an emergency call to get help and/or a battery-charge status. The light-sensor ring1201and the microcontroller1210are powered by the battery1230. The battery1230can comprise a 9V battery or another battery, which can be replaced or recharged when depleted. In some embodiments, a voltage sensor can be electrically coupled to the battery1230to monitor the charge of the battery1230. A battery-life indicator, such as a light (e.g., an LED), can be placed on the safety harness150to indicate the energy state of the battery1230. The battery-life indicator can be electrically coupled to the voltage sensor. In some embodiments, a back-up power supply can be available on the machine110(e.g., in the bucket130) in case the battery1230becomes depleted while a worker is in the bucket130and in an elevated state. The microcontroller1310in the equipment safety interlock circuit1300can generate an output signal based on the value of the harness status signal received via the antenna1301. When the harness status signal has the first value, the output signal of the microcontroller1310drives the relay controller1320. The relay controller1320is configured to generate a safety signal in response to the output signal from the microcontroller1310. For example, the output signal from the microcontroller1310can cause a switch1325in the relay controller1320to transition from a default open state to a closed state to produce the safety signal. In some embodiments, the switch1325can drive a relay coil. The relay controller1320and switch1325can be the same as relay controller620and switch625, respectively. The safety interlock relay1330has a default open state and a closed state. In the open state, the safety interlock relay1330can prevent the machine110from receiving power (e.g., from a battery in the machine110) to operate, which can prevent the controls140from functioning. In the closed state, the machine110and/or controls140can receive power to operate or function. The safety interlock relay1330can transition from the default open state to the closed state only in response to the safety signal from the relay controller1320which can drive a relay coil1332to close a relay switch1334. 
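The first-value/second-value logic described for microcontroller1210, together with the equipment-side response of circuit1300, can be summarized as below. This Python sketch mirrors the states as literally described; the sensor levels, threshold, and function names are assumptions for illustration only.

```python
def harness_status_value(light_sensor_levels, light_threshold, buckle_switches_closed):
    """Harness-side decision sketch for circuit 1200 (names and values assumed).
    First value: every sensor of light-sensor ring 1201 sees the emitted light
    (first state) AND every buckle switch 1240 is closed.
    Second value: the ring is in its second state and/or a buckle is open."""
    ring_first_state = all(level >= light_threshold for level in light_sensor_levels)
    return 1 if (ring_first_state and all(buckle_switches_closed)) else 2

def equipment_relay_closed(received_status_value):
    """Equipment-side decision sketch for circuit 1300: the relay controller produces
    the safety signal, and relay 1330 may close, only on the first value."""
    return received_status_value == 1

# Example: every sensor sees light, but one buckle switch is open, so the status
# signal takes the second value and the relay stays open.
levels = [0.9, 0.9, 0.9, 0.9]
status = harness_status_value(levels, light_threshold=0.5,
                              buckle_switches_closed=[True, False, True])
print(status, equipment_relay_closed(status))  # 2 False
```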
The safety interlock relay1330can include or can be electrically coupled to other safety interlocks in the safety interlock chain300such as the machine ignition key interlock310and/or the dead man foot pedal interlock320. For example, the equipment safety interlock circuit1300or the safety interlock relay1330can be the same as fall-arrest interlock circuit330. In addition, the safety interlock relay1330can be the same as safety interlock relay630,1040. Thus, harness interlock circuit1200and equipment safety interlock circuit1300can function together as a safety interlock system1600, as illustrated inFIG.16. FIG.17is a perspective view of a harness coil frame1700according to an embodiment. The harness coil frame1700comprises a base1701in which a slot1710is defined. The slot1710is configured to match or conform to the profile of the attachment ring152such that the attachment ring152can slide into and out of the slot1710. Mounting holes1720are defined on the base1701near the opening of the slot1710. The mounting holes1720can receive a threaded fastener (e.g., a screw or bolt) to secure the attachment ring152in the slot1710. A spool housing1725is attached to the base1701. The spool housing1725defines a shape and volume to receive a coil insert1800(FIG.18), such as through an interference or friction fit. For example, the spool housing1725can be round or circular with a planar portion1722to align the spool housing1725with the coil insert1800. The spool housing1725has a hollow central region1740that can conform to the hollow region of the attachment ring152. In some embodiments, the coil insert1800can be attached to the spool housing1725with an adhesive. A channel or notch1730can be defined in the spool housing1725to allow one or more wires to electrically couple a coil in the coil insert1800(e.g., wire1820) to other portions of the harness interlock circuit. FIG.18is a perspective view of a coil insert1800according to an embodiment. The coil insert1800defines a coil channel1810on the perimeter of the coil insert1800along which a wire1820can be wrapped to form the harness coil (e.g., harness coil414,900). The coil insert1800and spool housing1725have complementary shapes such that they can mate together. For example, the coil insert1800can be round or circular with a planar portion1830to align the coil insert1800with the spool housing1725(e.g., to align with corresponding planar portion1722). In addition, coil insert1800has a hollow central region1840that can conform to the hollow region of the attachment ring152and hollow central region1740. FIG.19is a perspective view of an example attachment ring1900, which can be the same as attachment ring152. The attachment ring1900is configured to be attached to a safety harness, such as safety harness150. The harness coil frame1700and the coil insert1800are configured and arranged to be mounted on the attachment ring1900and/or attachment ring152. FIG.20is a perspective view of a buckle switch assembly2000, in a disengaged state, that includes a mechanical switch. The buckle switch assembly2000includes a limit-switch assembly2010mounted on a male buckle portion2001and a plug assembly2020mounted on a female buckle portion2002. The plug assembly2020includes a protrusion block2022that is configured to mate with a hollow region2012defined in the limit-switch assembly2010when the male and female buckle portions2001,2002are engaged. A limit switch2030is disposed in the hollow region2012. 
When the protrusion block2022is inserted into the hollow region2012, the protrusion block2022presses on the limit switch2030to change the state of the switch. Thus, the limit switch2030has a default first or open state, such as when the protrusion block2022is not inserted into the hollow region2012to press on the limit switch2030, and a second state when the protrusion block2022presses on the limit switch2030. The first and second states of the limit switch2030are configured to correspond to the disengaged and engaged states, respectively, of the male and female buckle portions2001,2002. The limit switch2030can be replaced with another switch, an optical sensor, a magnetic sensor (e.g., a Hall effect sensor), or another sensor or switch. The limit switch2030can be the same as mechanical switch155,404,1240. FIG.21is a top view of the buckle switch assembly2000in an engaged state. The invention should not be considered limited to the particular embodiments described above. Various modifications, equivalent processes, as well as numerous structures to which the invention may be applicable, will be readily apparent to those skilled in the art to which the invention is directed upon review of this disclosure. The above-described embodiments may be implemented in numerous ways. One or more aspects and embodiments involving the performance of processes or methods may utilize program instructions executable by a device (e.g., a computer, a processor, or other device) to perform, or control performance of, the processes or methods. In this respect, various inventive concepts may be embodied as a non-transitory computer readable storage medium (or multiple non-transitory computer readable storage media) (e.g., a computer memory of any suitable type including transitory or non-transitory digital storage units, circuit configurations in Field Programmable Gate Arrays or other semiconductor devices, or other tangible computer storage medium) encoded with one or more programs that, when executed on one or more computers or other processors, perform methods that implement one or more of the various embodiments described above. When implemented in software (e.g., as an application or app), the software code may be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers. Further, it should be appreciated that a computer may be embodied in any of a number of forms, such as a rack-mounted computer, a desktop computer, a laptop computer, or a tablet computer, as non-limiting examples. Additionally, a computer may be embedded in a device not generally regarded as a computer but with suitable processing capabilities, including a Personal Digital Assistant (PDA), a smartphone or any other suitable portable or fixed electronic device. Also, a computer may have one or more communication devices, which may be used to interconnect the computer to one or more other devices and/or systems, such as, for example, one or more networks in any suitable form, including a local area network or a wide area network, such as an enterprise network, and intelligent network (IN) or the Internet. Such networks may be based on any suitable technology and may operate according to any suitable protocol and may include wireless networks or wired networks. Also, a computer may have one or more input devices and/or one or more output devices. These devices can be used, among other things, to present a user interface. 
Examples of output devices that may be used to provide a user interface include printers or display screens for visual presentation of output and speakers or other sound generating devices for audible presentation of output. Examples of input devices that may be used for a user interface include keyboards, and pointing devices, such as mice, touch pads, and digitizing tablets. As another example, a computer may receive input information through speech recognition or in other audible formats. The non-transitory computer readable medium or media may be transportable, such that the program or programs stored thereon may be loaded onto one or more different computers or other processors to implement various one or more of the aspects described above. In some embodiments, computer readable media may be non-transitory media. The terms “program,” “app,” and “software” are used herein in a generic sense to refer to any type of computer code or set of computer-executable instructions that may be employed to program a computer or other processor to implement various aspects as described above. Additionally, it should be appreciated that, according to one aspect, one or more computer programs that when executed perform methods of this application need not reside on a single computer or processor, but may be distributed in a modular fashion among a number of different computers or processors to implement various aspects of this application. Computer-executable instructions may be in many forms, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that performs particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or distributed as desired in various embodiments. Also, data structures may be stored in computer-readable media in any suitable form. For simplicity of illustration, data structures may be shown to have fields that are related through location in the data structure. Such relationships may likewise be achieved by assigning storage for the fields with locations in a computer-readable medium that convey relationship between the fields. However, any suitable mechanism may be used to establish a relationship between information in fields of a data structure, including through the use of pointers, tags or other mechanisms that establish relationship between data elements. Thus, the disclosure and claims include new and novel improvements to existing methods and technologies, which were not previously known nor implemented to achieve the useful results described above. Users of the method and system will reap tangible benefits from the functions now made possible on account of the specific modifications described herein causing the effects in the system and its outputs to its users. It is expected that significantly improved operations can be achieved upon implementation of the claimed invention, using the technical components recited herein. Also, as described, some aspects may be embodied as one or more methods. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
DETAILED DESCRIPTION Exemplary embodiments of the present disclosure are directed to inert gas nozzles that suppress the sound from the nozzles to acceptable levels without the high pressure drop in the nozzle as found in prior art and related art systems. In the exemplary embodiments, the sound is reduced to acceptable levels by using only a minimal amount of sound dampening material in the flow path of the nozzle and by strategically disposing the nozzle relative to a pressure reducing device disposed upstream of the nozzle. For example, in some exemplary embodiments, the sound power level from the nozzle is no greater than 125 dB for a frequency range from 500 to 10,000 Hz for a coverage area up to 36 ft.×36 ft., and more preferably up to 32 ft.×32 ft. In some exemplary embodiments, the pressure reducing device is mounted remotely from the main nozzle. In other embodiments, the pressure reducing device is mounted at the inlet of the nozzle. Generally, when the fire suppression system is activated, the inert gas pressure in the piping upstream of the pressure reducing device, such as, e.g., an orifice, can be as high as 2,000 psi. Depending on the configuration of the enclosure being protected, the pressure reducing device reduces the pressure to achieve the required inert gas flow for the enclosure. Of course, the nozzle also introduces a pressure drop that must be accounted for. If the pressure drop in the nozzle is too high, the inert gas flow will be unable to meet design criteria for displacing the oxygen in the enclosure. In exemplary embodiments of the disclosure, the disclosed low pressure drop nozzle has a pressure drop that is no more than 80 psi higher than the enclosure gage pressure. It is believed that there is no related art fire suppression nozzle that has such a low pressure drop (preferably no more than 80 psi higher than the enclosure gage pressure), low sound generation (preferably less than 125 dB and more preferably less than 108.6 dB) and high inert gas coverage area distribution (preferably up to 36 ft.×36 ft., and more preferably up to 32 ft.×32 ft.). As shown inFIG.1, a nozzle assembly100includes a low pressure drop acoustic suppressor nozzle101and a pressure reducing device. The pressure reduction device can be, e.g., orifice plate120. The nozzle assembly100is mounted in an enclosure50to protect data storage equipment52. The nozzle assembly100is connected to an inert gas fire suppression system via piping54. The configuration and operation of the fire suppression system is known in the art and thus, for brevity, will not be discussed further. The orifice plate120receives high pressure gas from a fire suppression system (not shown) and the downstream pressure in the piping connected to the nozzle101is reduced via the orifice opening122. When mounted remotely from the nozzle101, the orifice plate120is preferably mounted in-line with the piping54using appropriate fittings and hardware. For example, the orifice plate120can be disposed in the piping by, e.g., welding or soldering, or attached to the piping using fittings or other appropriate means. The orifice opening122is sized based on the diameter of the piping54and the required flow in the system based on the application. Preferably, the orifice opening122is 5% to 70% of the diameter of the piping54. As seen inFIG.1, the orifice plate120is disposed a distance X from the inlet102of the main nozzle101. 
The distance X is the length of the piping between the inlet102and the orifice plate120, i.e., distance X is the distance the gas travels in the piping. In preferred embodiments of the present disclosure, the orifice plate120is disposed remotely from the nozzle101. However, in other embodiments, the orifice plate120can be mounted directly at the inlet102. In some embodiments, the distance X can be up to 6 feet depending on the configuration of the fire system in the enclosure50. Preferably, the distance X is in a range from 30 to 50 inches and more preferably between 35 and 45 inches. In some embodiments, the distance X is 41 inches. In some exemplary embodiments, the distance X is in a range of 0 to 12 inches from the inlet102and more preferably 3 to 9 inches. In some embodiments, the distance X is 6 inches. Preferably, the orifice plate120is mounted such that there are no bends in the piping54from the orifice plate120to the inlet102, e.g., the orifice plate can be mounted in the vertical section of piping above the nozzle101. As seen inFIG.2, nozzle101includes a fitting104configured to attach to the piping from the orifice plate120. For example, fitting104can include male pipe threads that screw into a female coupling on the piping54. When attaching to piping54, appropriate adapters can be used to transition between the piping54and the fitting104. Nozzle101includes a first set of secondary outlets106that includes a plurality of radially facing apertures110. The first set of secondary outlets106is positioned between an inner annular disc116and a first outer annular disc114. Nozzle101also includes a second set of secondary outlets108that includes a plurality of radially facing apertures112. The second set of secondary outlets108is positioned between the inner annular disc116and a second outer annular disc118. Broadly, the gas received through inlet102is divided internally, as described more fully below, and exits through the first and second sets of secondary outlets106and108between sound absorbing annular discs114,116, and118. With reference toFIG.3, nozzle101includes a longitudinally extending inner tube126having an inlet102and defining an axially extending passageway128. Preferably, when the orifice plate120is mounted at the nozzle101, it is mounted at the inlet102of the passageway128(see orifice plate120with dotted outline). Preferably the inner tube126is a cylindrical tube or pipe, but tube126can have other shapes. Preferably, diameter d2(seeFIG.4) of the inlet102is in a range of 1.25 to 1.75 inches, and more preferably 1.5 inches. The thickness of the inner tube126is in a range of 0.1 to 0.3 inches and most preferably 0.2 inches. The inner tube126is sized and configured to contain the supersonic gas flow moving through the orifice122and into passageway128. Preferably, the inner tube126is composed of a metal such as aluminum, bronze, stainless steel or some other metal or material appropriate for the rated temperature of the application. Inner tube126includes a set of primary outlets130that includes a plurality of radially facing primary apertures132. In other words, the radially facing primary apertures132extend transversely through the sidewall of the inner tube126. In general, a smaller diameter and a larger number of apertures provide better sound dissipating characteristics. Preferably, the apertures132of the primary outlets130are arranged in six rows with thirty apertures132in each row. 
Each of the apertures132in the respective row can be on a same plane perpendicular to a longitudinal axis of the inner tube126. The rows can be parallel to each other. Preferably, each row is offset from its adjacent row. In some embodiments, the offset is 6 degrees. However, in some embodiments, there is no offset, i.e., the apertures132are in-line as shown inFIG.3. Preferably, each aperture132is in a range of approximately 1/16 inch to ¼ inch in diameter and more preferably ⅛ inch in diameter. In some embodiments, all the apertures132are the same diameter. In some embodiments, the apertures132can have different diameters. However, the diameter, number, offset and arrangement of the apertures132of the primary outlets130are not limiting and the inventive nozzle100can include a set of primary outlets130having other diameter, number, offset and arrangement configurations. For example,FIG.5shows a configuration of the primary outlets130where five rows of apertures132are used instead of six. In other embodiments, the apertures132are not arranged in parallel rows and can be arranged using other patterns or even randomly arranged. In some embodiments, the set of primary outlets130have a combined flow area that is greater than a flow area of the orifice122. The combined flow area of the primary outlets130is determined based on the quantity of gas flow needed for a particular application. Preferably, the set of primary outlets130have a combined flow area in a range of approximately 7 to 11 in2, and more preferably approximately 8.84 in2. A plug138encloses the inner tube126to create an inner chamber corresponding to passageway128. In some embodiments, the plug138can be secured in the inner tube with suitable threads, by welding, or with a press fit, for example. In some embodiments, the inner tube126is manufactured such that the end of the passageway128is already sealed and a plug138is not needed. For example, the tube126can be formed by starting with a cylindrical blank and drilling the passageway128to the correct depth, such that plug138is not needed. The inner tube126includes a flange124that is attached to the first outer annular disc114by an appropriate attachment means such as snap rings, retaining rings or some other fastening means. For example, as seen inFIG.3, the flange124is attached to a support plate154of the first outer annular disc114by a plurality of fasteners152. In some embodiments, a sound absorbing body136(seeFIG.5) is disposed in the passageway128that reduces the interaction between the inert gas and the nozzle101and reduces the sound caused by vibration of the nozzle101. In addition, where the sound absorbing body136is used, the set of primary outlets130can be located above the sound absorbing body136to help balance the amount of gas flowing through the primary apertures132and create a uniform velocity of the inert gas. The sound absorbing body136can be comprised of any suitable sound absorbing material such as, e.g., high temperature, high-density rigid fiberglass insulation. An example of suitable fiberglass insulation is available from McMaster-Carr and identified as part no. 9351K1. Of course, other sound absorbing materials, such as mineral wool or some other appropriate sound absorbing material can be used. However, in other embodiments, as shown inFIGS.3and4, the sound absorbing body136is not needed. Inner tube126is surrounded by an outer tube134defining an annular chamber135that surrounds the primary outlets132. 
Preferably the outer tube134is a cylindrical tube or pipe, but outer tube134can have other shapes. The outer tube134includes first and second sets of secondary outlets106and108, respectively. Preferably, the inner diameter d3(seeFIG.4) of the outer tube134is in a range of 3.0 to 5.0 inches and more preferably 3.81 inches. Preferably the thickness of the outer tube134is in a range of 0.05 to 0.4 inches and more preferably 0.345 inches. The outer tube134can be composed of a metal such as aluminum, bronze, stainless steel or some other metal or material appropriate for the rated temperature of the application. In some embodiments, the apertures110,112of the secondary outlets106,108, respectively, are arranged in four rows with thirty-six apertures110,112in each row, respectively. Each of the apertures110,112in the respective row can be on a same plane perpendicular to a longitudinal axis of the outer tube134. The rows can be parallel to each other. Preferably, each row is offset from its adjacent row. In some embodiments, the offset is 5 degrees. However, in other embodiments, the respective apertures110,112are in-line with each other. Preferably, each aperture110,112is in a range of approximately ⅛ inch to ½ inch in diameter and more preferably ¼ inch in diameter. In some embodiments, all the apertures110,112are the same diameter, respectively within each set of outlets106,108or even between outlet sets106,108. In some embodiments, the apertures110,112can have different diameters, respectively within each set of outlets106,108and/or between outlet sets106,108. However, the diameter, number and arrangement of the apertures110,112of the secondary outlets106,108, respectively, are not limiting and the inventive nozzle100can include a set of secondary outlets106,108having other diameter, number, offset and arrangement configurations. For example, in other embodiments, the apertures110,112are not arranged in parallel rows and the apertures110,112can be arranged using other patterns or even randomly arranged. In addition, in some embodiments, geometries other than holes can be used such as slots so long as the combined flow area of the secondary outlets106,108is appropriate for the application. In some embodiments, the first and second sets of secondary outlets106and108have a combined flow area that is greater than the combined flow area of the primary outlets130. Preferably, the first and second sets of secondary outlets106,108have a combined flow area in a range of approximately 45 to 68 in2, and more preferably approximately 56.55 in2. In some embodiments, the primary outlets130are disposed on the sidewall of the inner tube126such that the flow exits between the secondary outlets106,108. Preferably, the flow exits equidistant between the secondary outlets106,108. In some embodiments, the flow path from the primary outlets130is split into two paths each directed to the respective secondary outlets106,108. In some embodiments, more than two secondary outlets are provided and the flow from the primary outlet is split into more than two paths. Preferably, a sound absorbing device is disposed in the annular chamber135. In some embodiments, as shown inFIG.3, the sound absorbing device includes baffle140and sound absorbing inserts146and148positioned at the upper and lower ends of the chamber135. The baffle140is disposed inside the annular chamber135in the flow path of the inert gas. 
Preferably, the baffle140is cylindrical in shape and an outer surface of the baffle140is disposed between the sidewall of the inner tube126and the sidewall of outer tube134. In some embodiments, the baffle140is disposed against the sidewall of the outer tube134. Of course, the shape of the baffle is not limiting and other shapes can be used so long as the flow is not adversely restricted. The baffle140surrounds the radially facing primary outlets132and covers the inlets of the first and second sets of secondary outlets106and108. Preferably, the thickness of the baffle140is in a range of ⅛ inch to ½ inch and, more preferably ¼ inch. Preferably, baffle140is disposed on support plate162and the length of baffle140extends from support plate162to support plate154. The baffle140is made of porous material that absorbs sound. Preferably, the baffle140is made of porous stainless steel wool sandwiched between wire mesh. The stainless steel wool can be, e.g., medium grade 1 or 0, fine grade 00, 000 or 0000. The wire mesh is used to hold the steel wool and can have, e.g., a mesh size of 40×200. Of course, other grades of steel wool and wire mesh sizes can be used as appropriate. In addition, other materials can be used for the baffle140such as, e.g., cloth screens, stainless steel wool between inner and outer wire cloth, perforated metals, metal foam having various geometries and pores per inch (PPI) densities, wire overlays, Scotch Brite and other screen mesh materials to name just a few. The porous material of baffle140helps in reducing the sound but unlike prior art nozzles, the baffle140does not cause a significant pressure drop and thus does not adversely affect the quick discharge of the inert gas needed to rapidly drop the oxygen level for fire suppression. This is because the restricting geometry for controlling the flow is still the orifice plate120disposed upstream of the nozzle inlet102. As discussed above, the sound absorbing device can also include inserts146and148. Preferably, the sound absorbing inserts146and148are disposed at a top end and a bottom end of the annular chamber135, respectively. The sound absorbing inserts146and148help reduce the interaction between the gas flow and the nozzle101. Preferably, the sound absorbing insert148is a disc with a diameter that extends to the sidewall of the baffle140. The insert148, along with insert146, provides lateral support for the baffle140. As seen inFIG.3, the insert148acts as a base for the inner tube126and plug138. Preferably, the sound absorbing insert146is a donut shaped disc with an inner diameter that circumscribes the inner tube126. The outer diameter of the insert146extends to the sidewall of baffle140and provides lateral support to baffle140. In some embodiments, the diameter of the sound absorbing insert148extends to the sidewall of the outer tube134(e.g., see insert148′ inFIG.6for comparison). In addition, the outer diameter of the insert146extends to the sidewall of the outer tube134(e.g., see insert146′ inFIG.6for comparison). In this case, the baffle140will be disposed between, e.g., sandwiched between the inserts146and148. That is, the baffle140will be disposed on the insert148rather than the support plate162as discussed above and the top of baffle140will extend to insert146rather than to the support plate154as discussed above. Although described as a disc and a donut shaped disc, the shape of the inserts will depend on the shape of the inner and outer tubes126,134. 
The sound absorbing inserts146,148can be comprised of any suitable sound absorbing material such as, e.g., high temperature, high-density rigid fiberglass insulation. As seen inFIG.4, inner annular ring116is comprised of sound absorbing insert172. The annular ring116is secure to the outer tube134using known fastening means such as, e.g., clips or spiral retaining rings. The sound absorbing insert172further reduces the sound level of the inert gas as it flows from the first and second set of secondary outlets106and108and into the enclosure. Preferably, the thickness of sound absorbing insert172is in a range of 0.50 inch to 2.0 inch and more preferably, 1 inch. The sound absorbing insert172can be any appropriate sound absorbing material such as, e.g., fiberglass and mineral wool to name just a few. The second outer annular ring118is comprised of a support plate162and a sound absorbing insert164. The support plate162can be made of any appropriate material based on the temperature requirement of the application such as, e.g., metal, including aluminum, bronze and stainless steel, plastic, fiberglass and ceramic or composites thereof to name just a few. The sound absorbing insert164further reduces the sound level of the inert gas as it flows from the second set of secondary outlets108and into the enclosure. Preferably, the thickness of sound absorbing insert164is in a range of 0.25 inch to 1.00 inch and more preferably, 0.50 inch. The sound absorbing insert164can be any appropriate sound absorbing material such as, e.g., fiberglass and mineral wool to name just a few. The second outer annular disc118is attached to one end of the outer tube134with, e.g., a plurality of fasteners168or by some other means. First outer annular disc114includes a support plate154and a sound absorbing insert156. The support plate154can be made of any appropriate material based on the temperature requirement of the application such as, e.g., metal, including aluminum, bronze and stainless steel, plastic, fiberglass and ceramic or composites thereof to name just a few. The sound absorbing insert156further reduces the sound level of the inert gas as it flows from the first set of secondary outlets106and into the enclosure. Preferably, the thickness of sound absorbing insert156is in a range of 0.25 inch to 1.0 inch and more preferably, 0.5 inch. The sound absorbing insert156can be any appropriate sound absorbing material such as, e.g., fiberglass and mineral wool to name just a few. The first outer annular disc114is attached to another end portion of the outer tube134with, e.g., a plurality of fasteners160or by some other means. In another exemplary embodiment, as seen inFIG.5, the inner annular disc116′ includes a support plate170that attaches to a flange178. Flange178is secured to the outer tube134, e.g., by welding or by some other means that secures the flange178to outer tube134. Support plate170can be made of any appropriate material based on the temperature requirement of the application such as, e.g., metal, including aluminum, bronze and stainless steel, plastic, fiberglass and ceramic or composites thereof to name just a few. Support plate170is attached to flange178with a plurality of fasteners180. Inner annular disc116′ also includes a hoop176attached to the support plate170. A pair of sound absorbing inserts172′ and174′ are placed against the support plate170. 
The sound absorbing inserts172′ and174′ further reduce the sound level of the inert gas as it flows from the first and second set of secondary outlets106and108and into the enclosure. Inserts172′ and174′ may be tightly fit within the hoop176and/or retained within the hoop by a suitable adhesive. Clearance is provided for fasteners180and flange178by clearance cavities182and184formed in the inserts172′ and174′, respectively. Preferably, the thickness of each of sound absorbing inserts172′,174′ is in a range of 0.25 inch to 1.0 inch and more preferably, 0.5 inch. The sound absorbing inserts172′,174′ can be any appropriate sound absorbing material such as, e.g., fiberglass and mineral wool to name just a few. The second outer annular ring118′ is comprised of a support plate162, a hoop166, and a sound absorbing insert164. Insert164may be tightly fit within the hoop166and/or retained within the hoop by a suitable adhesive. The remaining structure of annular ring118′ is similar to annular ring118discussed above and thus, for brevity, will be omitted. First outer annular disc114′ includes a support plate154, a surrounding hoop158and a sound absorbing insert156. Insert156may be tightly fit within the hoop158and/or retained within the hoop by a suitable adhesive. The remaining structure of annular ring114′ is similar to annular ring114discussed above and thus, for brevity, will be omitted. When the fire suppression system is operated, as seen in, e.g., the exemplary embodiment ofFIG.4, a high velocity fluid flow F passes through orifice122and is received into passageway128. The fluid flow F is then redirected in a direction transverse to the longitudinal passage128by plug138(and/or the sound absorbing body136in some embodiments) such that the fluid flow F passes through the radially facing primary outlets132. As fluid flow F flows through the primary outlets132, it is divided in chamber135into first and second fluid flow portions F1and F2, respectively. In some embodiments, first fluid flow portion F1and second fluid flow portion F2are balanced. Preferably, the fluid flow portions F1and F2are balanced regardless of the orientation and configuration of the outlets along the longitudinal axis of the chamber135. Preferably, a ratio between the maximum flow value and the minimum flow value between the two balanced fluid flow portions F1and F2is less than 70:30, and more preferably, less than 60:40, and even more preferably, the two balanced gas flow portions F1and F2are substantially equal. In some embodiments, the fluid flows F1and F2are balanced by the location of the first and second set of secondary outlets106and108with respect to the primary outlets132. In embodiments that use inner ring200(seeFIG.6), the inner ring200can be adjusted up or down to adjust the flow. In still other embodiments, the balancing is affected by adjusting the size of the fluid flow area for each of the secondary outlets106,108. Turning to the embodiment ofFIG.4, before flowing through the first and second secondary outlets106,108, however, the first and second fluid flow portions F1and F2pass through the sound absorbing baffle140. The sound absorbing baffle140reduces the sound in the fluid flow portions F1and F2, but unlike prior art nozzles, the baffle140does not significantly reduce the flow of fluid flow portions F1and F2. Preferably, the pressure at the inlet102of the nozzle (after the orifice plate120) is no more than 80 psi higher than the gage pressure of enclosure50. 
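As a simple illustration of the stated pressure-drop criterion (nozzle inlet pressure no more than 80 psi above the enclosure gage pressure), the following sketch checks a candidate operating point. The pressure values are hypothetical and are not taken from the disclosure.

```python
def low_pressure_drop_ok(nozzle_inlet_psig, enclosure_psig, max_excess_psi=80.0):
    """Check the stated design criterion: the gage pressure at the nozzle inlet
    (downstream of the orifice plate) exceeds the enclosure gage pressure by no
    more than max_excess_psi. All pressure values below are assumed examples."""
    return (nozzle_inlet_psig - enclosure_psig) <= max_excess_psi

# Hypothetical discharge condition: ~2,000 psi upstream of the orifice plate is
# reduced so that the nozzle inlet sits near 75 psig while the enclosure is near 1 psig.
print(low_pressure_drop_ok(nozzle_inlet_psig=75.0, enclosure_psig=1.0))   # True
print(low_pressure_drop_ok(nozzle_inlet_psig=200.0, enclosure_psig=1.0))  # False
```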
After flowing through the baffle140, the fluid flow portions F1and F2flow through the first and second secondary outlets106,108, respectively. As it exits the first secondary outlets106, the first fluid flow portion F1is directed between sound absorbing surfaces190and192of inserts156and172, respectively, which further reduce the sound. Similarly, as it exits the second secondary outlets108, the second fluid flow portion F2is directed between sound absorbing surfaces194and196of inserts172and164, respectively, which further reduce the sound. As shown inFIG.4, nozzle101has an overall height H and an overall diameter d4. The inlet passageway128has an inlet102with a diameter d2and the outer tube134has an inner diameter d3. Annular discs114,118have a sound absorbing insert thickness T and annular disc116has a sound absorbing insert thickness of 2T and each sound absorbing surface192-196is spaced apart by a distance Z. In some embodiments, both the thickness T and spacing Z are in a range of approximately 0.25 inch to 1.0 inch, and preferably 0.50 inch. In at least one embodiment, the height H is in a range of approximately 4 inches to 9 inches, and preferably 5.5 inches. The diameter d4is in a range of approximately 6 inches to 13 inches and preferably 5.5 inches. The inner tube diameter d2is in a range of approximately 1.25 inches to 1.75 inches and preferably 1.5 inches. The outer tube diameter d3is in a range of approximately 3 inches to 4 inches and preferably 3.81 inches. In some embodiments, the following ratios can apply to the dimensions of the nozzle: d4/d1, which relates the diameter of the nozzle to the inert gas flow, is greater than 15 and preferably in a range of approximately 15 to 30; d3/d2, which ensures the chamber135is sufficiently large for the inert gas flow, is in a range of approximately 2 to 3; and d4/T, which ensures a sufficient sound absorbing capacity at the outlet of the nozzle, is less than 20. Although the low pressure drop acoustic suppressor nozzle100is shown and described in the above exemplary embodiments as having cylindrical components, other suitable shapes can be used to construct the nozzle components. In addition, although the above exemplary embodiments were described with a sound absorbing device having a porous baffle140, some embodiments of the sound absorbing device do not use a porous baffle. For example, in some embodiments, the sound absorbing device in the annular chamber135can include a non-porous material that can be used to divert the flow of gas from primary outlets130to secondary outlets106,108. For example,FIG.6illustrates an embodiment in which the sound absorbing device includes a non-porous sound absorbing ring (or rings). Because many of the structures and features of the nozzle ofFIG.6are similar to the structures and features discussed above with respect toFIGS.2-5, for brevity, a detailed description of the common features discussed above is omitted. As shown inFIG.6, a sound absorbing body136is disposed in the passageway128to reduce the interaction between the entering gas and the nozzle and to reduce the sound caused by the vibration of the nozzle. The set of primary outlets130can be located above the sound absorbing body136to help balance the amount of gas flowing through the primary apertures132and reduce the velocity of the gas flow. When the gas exits the passageway128through primary outlets130, a pair of sound absorbing rings200are positioned inside the annular chamber135between the first and second sets of secondary outlets106and108. 
Accordingly, the sound absorbing rings200surround the radially facing primary outlets132. The sound absorbing rings200reduce the interaction between the gas flow and the outer tube134. In some embodiments, the sound absorbing rings200can be adjusted in size and position to help balance the gas flow through the first and second sets of secondary outlets106and108. The fluid flows may be balanced by moving the rings200up and down with respect to the primary outlets132. In some embodiments, the fluid flows are balanced by the location of the first and second set of secondary outlets106and108with respect to the primary outlets132. In still other embodiments, the balancing is affected by the size of the secondary outlets. Preferably, the nozzle provides the balanced flow regardless of the orientation and configuration of the secondary outlets106and108along the longitudinal axis of the chamber135. Preferably, a ratio between the maximum flow value and the minimum flow value between the two balanced fluid flow portions is less than 70:30, and more preferably, less than 60:40, and even more preferably, the two balanced gas flow portions are substantially equal. The sound absorbing rings200can be retained in the outer tube134with washers202and snap rings204, for example. Although the rings200are described as two separate rings, in some embodiments the pair of sound absorbing rings can be combined into a single unitary body. Annular chamber135includes sound absorbing inserts146′ and148′ positioned at the ends of the chamber to help reduce the interaction between the gas flow and the nozzle. The configurations of the inserts146′ and148′ in the chamber135can be similar to the configuration of inserts146and148and thus for brevity will not be discussed further. The sound absorbing body136and rings200can be comprised of any suitable sound absorbing material such as, e.g., fiberglass or mineral wool to name just a few. In some embodiments, depending on the application, the inventive nozzle does not include the baffle140, the sound absorbing body136or the sound absorbing rings200. Although described separately in the above exemplary embodiments, some embodiments can include both the baffle140and ring200. In addition, some embodiments do not include either the baffle140or the ring200. The exemplary embodiments discussed above are directed to a configuration having two flow portions exiting the nozzle through respective sets of outlet holes. However, exemplary embodiments of the nozzle are not limited to this configuration. In some embodiments, the nozzle can be configured with more than two sets of secondary outlet holes similar to outlets106and108. In still other embodiments, the chamber135has one set of secondary outlet holes which are disposed along a longitudinal axis of chamber135. Preferably, the exemplary nozzles are configured to provide balanced flow regardless of the orientation and configuration of the plurality of outlet holes along the longitudinal axis. For example, the nozzles are configured such that gas exiting a plurality of outlet holes is balanced such that a ratio between a maximum flow value in the plurality of outlet holes and a minimum flow value in the plurality of outlet holes is less than 70:30, and more preferably 60:40 and even more preferably substantially equal. 
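Purely as an illustrative aid, and not as part of the original disclosure, the dimensional guidelines and the flow-balance criterion described above can be expressed as simple checks. The Python sketch below uses assumed example values; d1 is taken here to be the upstream orifice diameter, and the specific numbers are hypothetical rather than dimensions taken from the drawings.

# Illustrative sketch only: checking the example dimensional ratios and the
# flow-balance criterion described above for a candidate nozzle geometry.

def check_nozzle_ratios(d1, d2, d3, d4, t):
    """Return True if the example dimensional guidelines are met.
    d1: upstream orifice diameter (assumed), d2: inlet tube inner diameter,
    d3: outer tube inner diameter, d4: overall nozzle diameter, t: insert thickness.
    """
    ratio_ok = d4 / d1 > 15.0          # preferably approximately 15 to 30
    chamber_ok = 2.0 <= d3 / d2 <= 3.0 # annular chamber large enough for the gas flow
    insert_ok = d4 / t < 20.0          # sufficient sound absorbing capacity at the outlet
    return ratio_ok and chamber_ok and insert_ok

def flow_is_balanced(flows, max_ratio=(70, 30)):
    """Check that the largest and smallest outlet flows stay within the stated ratio (e.g. 70:30)."""
    hi, lo = max(flows), min(flows)
    return lo > 0 and hi / lo < max_ratio[0] / max_ratio[1]

# Hypothetical example geometry (inches) and a nearly equal pair of flow portions.
print(check_nozzle_ratios(d1=0.4, d2=1.5, d3=3.81, d4=9.0, t=0.5))
print(flow_is_balanced([0.55, 0.45]))   # True: well within 70:30 and 60:40

In this sketch the balance check simply compares the extreme flow values, which mirrors the preference stated above for ratios tighter than 70:30 and 60:40.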
In the above exemplary embodiments, the sound power level of nozzle101is no greater than 130 dB for a frequency range from 500 to 10,000 Hz for inert gas flow rates in a range of approximately 1,000 CFM to approximately 5,400 CFM while conforming to the standards in UL 2127. In some exemplary embodiments, the peak value of the sound power level of nozzle101is no greater than 130 dB, preferably no greater than 120 dB, and more preferably no greater than 111 dB, for a frequency range from 500 to 10,000 Hz for inert gas flow rates in a range of approximately 950 CFM to approximately 5,400 CFM while conforming to the standards in UL 2127. In some exemplary embodiments, the peak sound power level of nozzle101is in a range between 111 dB to 130 dB, for a frequency range from 500 to 10,000 Hz for inert gas flow rates in a range of approximately 950 CFM to approximately 5,400 CFM while conforming to the standards in UL 2127. For example,FIG.7shows a chart illustrating sound power level in dB vs. frequency in Hz for various embodiments with and without baffle140and with and without an offset for the orifice plate120. For the embodiments shown inFIG.7, INERGEN gas at a flow rate of 2,188 CFM and an orifice of 0.368 was used. Line A represents a graph of a sound level vs. frequency at which failure of a hard drive is believed to occur. Line B represents a graph of the sound level vs. frequency at which a 50% degradation of the performance of the hard drive is believed to occur. As seen inFIG.7, exemplary embodiments of the present disclosure reduce the sound power levels such that they are at 130 dB or below, i.e., at or below the level at which failure of the HDD is believed to occur, for frequencies from 500 to 10,000 Hz. For example, Line C represents a nozzle which does not include either a remotely disposed orifice plate or a baffle with sound absorbing material. The sound power level for this embodiment never reaches the believed failure point of Line A. Some exemplary embodiments provide even better results with sound power levels that are below 125 dB. For example, Line D represents a nozzle in which an orifice plate is disposed 41 inches upstream from the inlet to the nozzle but does not have a baffle with sound absorbing material. The sound power level for Line D is generally better than Line C, especially from 500 to approximately 5,000 Hz, and Line D has a peak value at 1000 Hz that is less than 125 dB. The sound power level of the exemplary embodiment represented by Line D is also at or below the 50% degradation Line B for frequencies in a range of approximately 500 to 800 Hz and approximately 2000 to 10,000 Hz. Line E represents a nozzle that includes a baffle with sound absorbing material but the orifice plate is not remotely disposed. The sound power level for Line E is better than Line D for frequencies ranging from approximately 800 to 10,000 Hz, and the peak value of Line E at 500 Hz is also below 125 dB. In addition, the sound level for Line E is below the 50% degradation Line B from approximately 1,600 to 10,000 Hz and significantly below Line B from approximately 2,000 to 10,000 Hz. Further exemplary embodiments provide even lower sound power levels that are at 108.6 dB or below. For example, Line F represents a nozzle in which an orifice plate is disposed 41 inches upstream from the inlet to the nozzle and which includes a baffle with sound absorbing material in the nozzle. 
As seen inFIG.7, except for a short peak of approximately 108.6 dB at approximately 1,000 Hz where Line F just touches the 50% degradation Line B, Line F is significantly below the 50% degradation Line B for all other frequencies. As discussed above, hard disk drives are susceptible to sound, and a high sound level can lead to degradation or, in some cases, failure. The exemplary embodiments disclosed above reduce or minimize the probability of degradation or failure of the hard disk drives while conforming to the standards in UL 2127. For example, in some embodiments, the sound power from the acoustic nozzle101is no greater than 125 dB, and more preferably no greater than 120 dB, for a frequency range from 500 to 10,000 Hz for a coverage area up to 36 ft.×36 ft., and more preferably up to 32 ft.×32 ft. It is believed that no related art fire suppression nozzle meeting the UL 2127 standard generates a sound power level that is at 125 dB or less at any coverage area up to 36 ft.×36 ft., or even up to 32 ft.×32 ft. In some exemplary embodiments, the sound power level of the acoustic nozzle101is no greater than 130 dB, and more preferably, no greater than 108.6 dB, for a frequency range from 500 to 10,000 Hz for a coverage area up to 36 ft.×36 ft. and more preferably up to 32 ft.×32 ft. In the above exemplary embodiments, the maximum protection height of the acoustic nozzle101is up to 20 ft. While the present invention has been disclosed with reference to certain embodiments, numerous modifications, alterations, and changes to the described embodiments are possible without departing from the sphere and scope of the present invention, as defined in the appended claims. Accordingly, it is intended that the present invention not be limited to the described embodiments, but that it have the full scope defined by the language of the following claims, and equivalents thereof.
36,398
11857818
DETAILED DESCRIPTION Pump trucks and the associated pump equipment are complicated machinery. Skilled technicians working on the equipment use information from many different sources to confirm that the pump is working correctly: visual information from gages and video displays, sound from the pump machinery, and tactile inputs such as vibrations in the pump housings and water pressures, temperatures, and oscillations in the fire hoses. A video-screen-only simulation would miss out on the complexities of the multiple senses needed to adequately monitor the pump equipment. Additionally, sound generated by electronic speakers is different from sounds generated from actual functioning equipment, missing directional cues and accompanying vibrations that give the operator a more complete sense of how the equipment is functioning. A pump operation panel simulator and simulation method is provided including a range of sensory signals. Embodiments of disclosed methods include heating and/or cooling of a primary pump inlet for pump operation panel training devices. During the operation of a water pump, lack of knowledge, training and experience on the part of the pump operator can lead to the pump overheating, which can result in equipment damage that is expensive to repair. To get a sense of the pump temperature, the pump operator touches the pump's primary intake components. Exposing the pump operator to a training simulator device that provides a tactile feeling of heat and cold at the pump intake gives the student operator a more realistic training experience. A thermoelectric device is used to provide heating and cooling of the pump intake components. During a training exercise, the simulated pump computes the pump temperature as a function of water flow rate through the pump and pump rotational velocity. Pump temperature is transmitted to the thermoelectric cooling system via the training device's input/output system. After the completion of a training exercise, the training device's pump inlet hardware must be driven to the temperature necessary to start the next training exercise. Inflating/deflating a soft inlet hose line can also be an object of the training simulation. During the operation of a fire truck pump, the operator can be directed to sense water source pressure by the tactile feel of the water source feed hose inflation against his or her leg. A pump operator training device that provides this tactile feel gives the student pump operator a more immersive training experience. A short length of inlet soft hose is fitted internally with a bladder and electrically actuated valves connected to a compressed air supply such that the inflation can be remotely controlled. The simulation of the hydrant pressure and water flow rate in the water source line is used to compute the pressure necessary to inflate the bladder within the short length of inlet hose. A compressed air source is used to inflate the bladder. Additional features can include a tandem/relay operation and dual pumping simulation. In real-life fire-fighting, often a water source is not ideally located relative to the fire. Either the water source is too far away or the height differential causes too much pressure loss to effectively fight the fire. To overcome these problems, multiple pumping trucks or pumping devices can be used in sequence and/or in parallel to feed water from the water source to a location where it is needed. 
An embodiment of the disclosed simulator can include a plurality of pump panel training devices to simulate the connection of multiple pumping devices to deliver water from a water source to a destination where it is needed. Training for this kind of situation can be accomplished by using pump panel training devices that can support tandem/relay and dual pumping operations. In one embodiment, a simulation system can couple two or more pump panel training devices via network connection. Such a network of connected panel training devices can provide a team training environment where one pump operator on one apparatus must compensate for the actions of another pump operator on another apparatus. In one exemplary operation where two pump panel training devices are utilized to simulate two pumping devices connected in sequence, a first pump panel training device can be utilized to simulate a first pumping device drawing water from a water source and supplying an intermediate flow of water to a second pumping device, and a second pump panel training device can be utilized to simulate the second pumping device, drawing water from the intermediate flow of water and delivering the water to a location that needs the water. Parameters regarding the water source can be provided to the first pump panel training device, and inputs to the first pump panel training device by a first trainee can be used to determine simulated operating parameters for the first pump panel training device including properties of the intermediate flow of water, including flow rate, temperature, water pressure, etc. These properties of the intermediate flow of water are provided over the communication network to the second pump panel training device. These properties of the intermediate flow of water in combination with inputs to the second pump panel training device by a second trainee can be used to determine simulated operating parameters for the second pump panel training device including properties of an outlet flow of water to be supplied to the destination that needs the water. Simulation of a sequence of two pumping devices is provided as an example of how the disclosed simulator may be operated; multiple variations of this example are envisioned, and the disclosure is not intended to be limited to the particular examples provided. Input parameters to a simulation event can be preset as parameters to one of a sequence of training programs. In one exemplary embodiment, a series of pre-programmed simulation events of increasing difficulty can be provided with a pump panel training device. In another exemplary embodiment, a randomized simulation event can be operated, for example, with a trainee or an instructor being given an ability to establish a range of values or a likelihood of events occurring as part of the randomized simulation event. In another exemplary embodiment, an instructor operating a remote computerized device such as a laptop computer or a smart-phone can be given a supervisory application, enabling the instructor to monitor performance of the trainee and control inputs to the program, for example, prompting simulated occurrences such as an acute hose failure or interruption of a water source during a simulation event. Such a remote computerized device can be described as an instructor operator station. 
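As an illustrative sketch of the tandem/relay data exchange described above, the following Python example shows one possible form for the intermediate-flow message passed from the first pump panel training device to the second over the network connection. The field names, the UDP transport, and the port number are assumptions for demonstration, not a protocol defined by the disclosure.

# Illustrative sketch only: one device publishes its simulated intermediate-flow
# state, and the downstream device uses it as its inlet conditions.
import json
import socket
from dataclasses import dataclass, asdict

@dataclass
class IntermediateFlow:
    flow_rate_gpm: float      # simulated flow rate handed to the next apparatus
    pressure_psi: float       # simulated residual pressure at the second pump's intake
    temperature_f: float      # simulated water temperature

def send_flow(flow: IntermediateFlow, host: str, port: int = 9500) -> None:
    """Send the intermediate-flow state to the downstream training device over UDP."""
    payload = json.dumps(asdict(flow)).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

# Example: the first device reports its outlet conditions to the second device.
send_flow(IntermediateFlow(flow_rate_gpm=500.0, pressure_psi=35.0, temperature_f=58.0),
          host="127.0.0.1")   # in practice, the address of the second training device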
According to one embodiment, the primary responsibilities of the instructor can include selecting a pre-programmed simulation event, setting a hose configuration for a simulation event, monitoring progress of a simulation event, generating simulated malfunctions in simulated equipment during a simulation event, selecting programmed options for a simulation event, and preparing reports to summarize performance during a simulation event. Reports to summarize a trainee's performance can rate an effectiveness of delivering water to the location needing water (for example, summarizing an estimate of gallons delivered), describe equipment failures caused by the trainee, and note failures by the trainee to follow particular instructions such as turning a discharge handle too quickly. According to one embodiment, a simulation event status display can be provided to the instructor, providing the instructor with copies of the gages visible to the trainee, representations of all control inputs manipulated by the trainee, a summary of all actions taken so far by the trainee, a summary of upcoming programmed events in the simulation event yet to occur, and options for making any equipment within the pump panel training device fail upon command of the instructor. For example, the instructor can be provided with a touch screen display, and the instructor can make any one of the gages visible to the trainee fail by touching the area of the touch screen display showing the gage. Input parameters to a simulation event can include a wide variety of factors, including but not limited to information about water sources, water tank level, foam tank level, pump engine operation status, pump engine temperature, hose diameters, hose lengths, water usage of an outlet flow (for example, on/off status for one or more spray nozzles), ambient temperature and other ambient weather effects, water temperature, equipment failure status (for example, timed occurrence of hose blockage), and programmed events such as a radio message being played at a particular time. Simulation event outputs can include gage readings; graphical images displayed upon an inset video monitor; graphical images displayed upon an external video monitor; sound outputs provided through an audio system; sound outputs provided through a simulated emergency radio system; vibratory outputs simulated, for example, through operation of a motor including an offset weight upon a motor output shaft; simulated water inlet parameters such as temperature, vibration, and hose inflation; simulated water outlet parameters such as temperature, vibration, and hose inflation; and warning and status lights. A variety of control inputs can be incorporated in or attached to a pump panel training device. 
For example, a pump panel training device can include a lever, knob, or other control input for opening and closing a primary water inlet; a lever, knob, or other control input for opening and closing a secondary water inlet; a lever, knob, or other control input for opening and closing a pump inlet; a lever, knob, or other control input for opening and closing each of a plurality of water outlets or discharge lines; a lever, knob, or other control input for opening and closing each of a plurality of pre-connect lines; a lever, knob, or other control input for opening and closing an on-board tank fill and recirculating line; a lever, knob, or other control input for opening and closing an on-board tank to pump line; a lever, knob, or other control input for opening and closing drain valves; a lever, knob, or other control input for controlling priming of the pump (removing air from a water pump inlet); a knob or other control input for controlling a pump engine throttle; controls operable to control flow of a fire-fighting foam to a pump; controls associated with a two-stage pump; a horn control switch; and a knob or other control input for controlling a relief valve useful to control pump pressure. Other control inputs related to various other actual pump panel functions can additionally or alternatively be utilized. A control useful to simulate control of engine throttle can in one example include an “OK-to-pump” indicator. Visual outputs of a pump panel training device can include a number of exemplary outputs, including but not limited to: gages indicating simulated water pressure in a number of attached lines; gages indicating engine water temperature, engine oil pressure, and engine rotational speed; gages and associated controls simulating an engine governor device; and gages indicating status of an on-board water tank and an on-board foam tank. Audio outputs provided by a pump panel training device can include but are not limited to engine sounds that respond to throttle and pump load, primer sounds, cavitation sounds, tank fill overflow sounds, open drain valve sounds, pressure relief valve operation sounds, and warning sounds. Additionally or alternatively, the audio system can include an intercom permitting an instructor to interact with a trainee. An inset video monitor can be attached to a face of a pump panel training device and can be used to simulate a variety of control panels that can be present upon an actual pump panel. Such an inset video monitor can include a touch screen display, enabling a trainee to interact with the controls that can be present upon an actual pump panel. An external video monitor can be attached proximate to a pump panel training device, for example, with a rotating and/or extending arm enabling movement of the external video monitor relative to the pump panel training device. In this way, a trainee can be presented with an external video monitor in a separate viewing direction away from the pump panel training device, such that the trainee can be required to split attention between the external video monitor and the pump panel training device to simulate complicated scenarios that can occur relative to an actual pump panel. 
For example, an external video monitor can display a representation of a fire hydrant hookup, a dump-tank hookup, a scene including simulated fire-fighters, a scene including hoses leading up to and leading away from the pump panel training device, a scene including a distant second fire truck including a pump panel, or other similar environmental scenes that can require the attention of a pump panel operator. In one embodiment, the external video monitor can display a hose and nozzle layout schematic, providing the trainee with necessary information to understand the network of hoses attached to the pump panel training device. Simulated hoses and hose systems can include a variety of embodiments. In one exemplary embodiment, a hose section of one to a few feet in length can include a threaded connection fitting on a first end (operable to be attached to an inlet or outlet fitting upon a pump panel training device) and a crimped or sealed end upon a second end. Such a sealed hose section attached to the pump panel training device can be filled with air or water, and that air or water can be heated, cooled, vibrated, and/or pressurized to simulate various conditions in the hose. In another exemplary embodiment, a length of hose with threaded fittings on both ends can be attached to the pump panel training device, and water or air can be cycled through the hose to simulate various conditions in the hose. In one embodiment, an auxiliary pump device can be used to cycle water or air through the hose and condition the water or air (control temperature, create vibrations, control pressure, etc.). A simulation event can be programmed in order to simulate attachment of hoses to a fire department connection (FDC) system, which can include a connection on an outside of a building which enables connected hoses to supply water to a building's interior standpipe or sprinkler system. Maintaining water pressure within a selected range can be required for proper operation of an FDC system. Parameters of a simulation event can instruct a trainee regarding how to maintain a discharge water pressure within such a selected range under varying conditions. In one exemplary embodiment, proper operation of the pump panel training device depends upon a simulated water demand, which can vary throughout a simulation event based upon which and how many sprinkler zones within a building are currently requiring a water flow. Compressed air foam can be utilized in fire-fighting. The disclosed pump panel training device can simulate use of a compressed air foam system. A control input simulating control over a compressed air foam system can include an air valve control for each discharge. A dry hydrant is a device used in actual settings that includes pipes connected to a water source such as a pond. In contrast to fire hydrants in urban areas where the fire hydrant has access to pressurized water, dry hydrants are initially without water and enable a fire truck to draw water through the attached pipe from the water source to the dry hydrant. A simulation event can be programmed to test procedures to access water through a dry hydrant. A simulation event builder program can be made available to an instructor, providing the instructor with an ability to program and save customized simulation events. 
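As a non-limiting illustration of how a pre-programmed or instructor-built simulation event might be represented, the following Python sketch defines a simple data structure whose fields mirror the input parameters discussed above. The field names, values, and the structure itself are assumptions for demonstration, not a format defined by the disclosure.

# Illustrative sketch only: one possible representation of a customized simulation event.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SimulationEvent:
    name: str
    water_source: str                 # e.g. "hydrant", "dry_hydrant", "dump_tank", "fdc"
    water_tank_level_pct: float
    foam_tank_level_pct: float
    hose_diameter_in: float
    hose_length_ft: float
    ambient_temp_f: float
    # timed occurrences: (time in seconds, event name), e.g. a hose blockage or radio message
    scripted_events: List[Tuple[float, str]] = field(default_factory=list)

lesson_3 = SimulationEvent(
    name="Dry hydrant draft with hose failure at 5 minutes",
    water_source="dry_hydrant",
    water_tank_level_pct=100.0,
    foam_tank_level_pct=80.0,
    hose_diameter_in=5.0,
    hose_length_ft=100.0,
    ambient_temp_f=35.0,
    scripted_events=[(300.0, "acute_hose_failure"), (420.0, "radio_message_1")],
)
print(lesson_3.name)

An event definition of this kind could be saved by the event builder program, loaded by the training device at the start of a session, and edited from an instructor operator station.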
In one embodiment, the disclosed simulator can be operated in cooperation with an emergency vehicle driving simulator, such that performance and decisions made in the driving simulator affect parameters in the simulation incorporating the pump panel training device. For example, controls in a driver's cockpit must be activated before operation of a pump panel can start. A simulation event can coordinate between the simulators to determine and require that proper activation of the simulated controls in the cockpit occur prior to operation of the simulated pump panel. In another example, distance from a fire hydrant, distance from a fire-fighting scene, location in relation to a second truck with a pump panel, elevation of the truck in relation to a water source, and location in relation to a loud piece of equipment can all be used to simulate conditions in a simulation event. A data link between computerized controls of the driving simulation and the disclosed simulation incorporating the pump panel training device can exchange data related to various parameters, including but not limited to simulated engine RPM, power-take-off engagement, engine oil pressure, water temperature, and other data related to running or pausing the simulation event. Referring now to the drawings, wherein the showings are for the purpose of illustrating certain exemplary embodiments only and not for the purpose of limiting the same,FIG.1schematically illustrates in a front view an exemplary first embodiment of a pump panel training device including a side mount pump panel device. Pump panel training device10is illustrated including cabinet20, display monitor30, and audio speaker40. Pump panel training device10includes a plurality of simulated controls, simulated outputs, and simulated connection fittings useful to train a trainee in operation of an actual pump panel device. Simulated connection fittings include intake connection50, simulating an exemplary five inch diameter hose connection; preconnection fitting80, simulating an exemplary 2.5 inch diameter hose connection; discharge connection86, simulating an additional exemplary 2.5 inch diameter hose connection; and discharge connection87, simulating an additional exemplary 2.5 inch diameter hose connection. Simulated outputs include master intake pressure gage60, master discharge pressure gage61, driver side preconnect pressure gage62, driver side preconnect pressure gage63, passenger side discharge (No. 2) pressure gage64, deluge discharge pressure gage65, driver side discharge (No. 1) pressure gage70, and driver side discharge (No. 3) pressure gage71. Simulated controls include driver side preconnect control knob66, driver side preconnect control knob67, passenger side discharge (No. 2) control knob68, deluge discharge control knob69, driver side discharge (No. 1) control knob72, and driver side discharge (No. 3) control knob73. Additionally, driver side discharge (No. 1) control knob82, driver side discharge (No. 3) control knob83, driver side preconnect control knob84, and driver side preconnect control knob85are provided. Additionally, a pump intake shut-off lever81is provided. Additionally, an intake control valve knob52is provided. Additionally, an intake tap fitting54is provided. Additionally, pump priming controls78, engine throttle controls74, valve control knob75, transfer valve control switch79, tank fill and recirculating line control knob76, on-board tank to pump control knob77, and manual pump priming control knob90are provided. 
Additionally, informational placard88and informational placard89are illustrated, providing important information for operation of the device. Cabinet20includes a metallic box operable to house components of pump panel training device10. Cabinet20can include wheels22operable to permit movement of pump panel training device10. Display30permits pump panel training device10to illustrate complex or optional displays depending upon a configuration of the simulation event being operated. For example, details regarding an optional compressed air foam system can be displayed upon display30. Display30can include an exemplary liquid crystal display. Display30can include a touch-screen device capable of displaying information and receiving inputs through a trainee touching different parts of display30. Display30can relay information to the trainee, for example, displaying a current level of an on-board water tank or a current level of an on-board foam tank. In one exemplary embodiment, at a conclusion of a simulation event, display30can display results of the simulation event to the trainee. Intake connection50is illustrated without any hose or hose portion connected to it for clarity of illustration. According to embodiments of the disclosure, a hose or a hose portion can be connected to intake connection50, the hose or hose portion can be filled with a substance such as air or water, and heat, pressure, and vibration of the hose or hose portion can be controlled through a simulation event. In one embodiment, whether a trainee receives high marks in a simulation event can depend upon the trainee sensing conditions in the hose and changing control input settings based upon the sensed conditions. FIG.2schematically illustrates in a top view the pump panel training device ofFIG.1. Pump panel training device10is illustrated, including cabinet20, intake connection50, preconnection fitting80, discharge connection86, audio speaker40, pump intake shut-off lever81, intake control valve knob52, and driver side discharge (No. 1) control knob82. FIG.3schematically illustrates in a side view the pump panel training device ofFIG.1. Pump panel training device10is illustrated, including cabinet20, audio speaker40, intake connection50, and intake control valve knob52. The embodiment illustrated inFIGS.1-3is provided as an illustrative example of how a pump panel training device can be configured. It will be appreciated that a manufacturer of such a pump panel training device can change control, display, and connection fittings to simulate different actual pump panels. Actual controls, gages, and connection fittings can be fitted to cabinet20to increase realism, with internal electronics within cabinet20transforming computerized control signals and data within cabinet20into and from interactions with the controls, gages, and connection fittings situated upon the exterior of cabinet20. A wide variety of cabinet and component configurations are envisioned, and the disclosure is not intended to be limited to the particular examples provided herein. FIG.4schematically illustrates in a perspective view an exemplary second embodiment of a pump panel training device including a top mount pump panel device. Pump panel training device110is similar to pump panel training device10ofFIG.1, with the exception that pump panel training device110is operable to simulate an actual pump panel device that would be situated on a top of a fire truck instead of on a side of a fire truck. 
Pump panel training device110is illustrated including cabinet120, display130, audio speaker140, trainee stand170, and stand railing172. Pump panel training device110includes a plurality of control levers162and control knobs167operable to simulate levers and knobs that exist on a similar actual pump panel device. Further, pump panel training device110includes intake pressure gage161, discharge pressure gage161, and a plurality of pressure gages163operable to simulate pressure gages that exist on a similar actual pump panel device. Pump panel training device110further includes a pair of valve control knobs166, pump priming control165, engine throttle control164, and informational placard169. Pump panel training device110further includes intake connection150and connections154. Intake connection150includes an attached hose portion152with a round connection fitting on one end operable to attach to intake connection150and a sealed end on a second end. Cabinet120includes internal components that can fill hose portion152with a substance such as water or air and can heat, pressurize, and/or vibrate the substance to simulate conditions that occur on a hose connected to an actual pump panel during operation. FIG.5schematically illustrates in a side view the pump panel training device ofFIG.4. Pump panel training device110is illustrated including cabinet120, audio speaker140, trainee stand170, and stand railing172. Pump panel training device110includes a plurality of control levers162and control knobs167. Further, pump panel training device110includes a plurality of pressure gages163. Pump panel training device110further includes valve control knob166. Pump panel training device110further includes intake connection150and connections154. Intake connection150includes an attached hose portion152. Pump panel training device110further includes an optional external display176. External display176can be attached to any embodiment of a pump panel training device and at various locations on the pump panel training devices. Multiple external displays can be utilized on a single pump panel training device. External display176can include graphical displays useful to convey important information to a trainee, for example, including information about a water source that is being simulated or a hose configuration that is being simulated. External display176can be attached to pump panel training device110with an articulable arm174. FIG.6illustrates an exemplary simulation system including a first pump panel training device, a second pump panel training device, and a supervisory computerized device. Simulation system200is illustrated including a first pump panel training device10A, a second pump panel training device10B, and supervisory computerized device210. In one embodiment, two pump panel training devices can be utilized in the same location, with two separate simulation events being operated in parallel, with a single supervisory computerized device controlling parameters of both simulation events. In another embodiment, first pump panel training device10A and second pump panel training device10B can be used to operate a cooperative simulation event, for example, with first pump panel training device10A simulating drawing water from a water source and supplying an intermediate flow of water to second pump panel training device10B, which simulates receiving the intermediate flow of water from first pump panel training device10A and delivering the water to a destination that needs the water. 
Use of a supervisory computerized device is optional. In some configurations, an instructor can enter parameters for an upcoming simulation event directly into a display of a pump panel training device without use of a separate supervisory computerized device. Supervisory computerized device210can include any computerized device including a desktop computer, a laptop computer, a tablet computer, a smart phone device, or other similar computerized device. FIG.7illustrates an exemplary alternative simulation system including a pump panel training device, a driver training device, a supervisory computerized device, and a remote server device. Simulation system300is illustrated including a pump panel training device10, a driver training device340, supervisory computerized device310, and remote server device320. Pump panel training device10and driver training device340can be used to operate a cooperative training event, for example, with performance of the driver trainee affecting parameters for the pump panel trainee. Use of a remote server device320is optional. A remote server device can be operated by a manufacturer of the system or by a large municipality operating a number of fire stations. Remote server device320can operate a pre-programmed set or sequence of simulation events. Remote server device320can provide a set of localized rules and protocols for particular regions or fire departments. Remote server device320can monitor and report simulation results. Remote server device320can provide assistance during simulation events, for example, with an expert standing by to provide guidance to trainees or technical experts standing by to answer questions about the system. FIG.8schematically illustrates components of an exemplary pump panel training device communicating over a communications bus. Pump panel training device data communication system400is illustrated. Computerized pump panel control device410is located within a pump panel training device and is illustrated including a computerized processor operable to operate code and provide functionality related to simulation events. Communications bus470is a device useful to provide data communication between components of a system. Gage controller420, control input controller430, display controller440, audio control450, and hose portion controller460are illustrated connected to communications bus470. Each of gage controller420, control input controller430, display controller440, audio control450, and hose portion controller460includes electronic and/or electromechanical devices useful to provide functionality to the simulated gages, controls, and connection fittings of the pump panel training device. FIG.9schematically illustrates an exemplary computerized processor useful to operate a pump panel training device. Computerized pump panel control device410is illustrated, including processor device510, durable memory storage device550, communication device520, input module530, and output module540. Processor device510includes a computing device known in the art useful for operating programmed code. Processor device510includes RAM memory and can access stored data through connection to memory storage device550. Memory storage device550may include a hard drive, flash drive, or other similar device capable of receiving, storing, and providing access to digital data. 
Memory storage device550can include user data, map data, equipment information, rules and procedures data, scores and results data, and any other data necessary to operate the disclosed simulation events. Processor device510includes programming modules including simulation event module512, device hardware module514, and scoring module516, which represent programmed functions that are exemplary of processes that can be carried out within processor device510but are intended to be non-limiting examples of such processes. Simulation event module512includes programming and data operable to operate the described pump panel simulation events, monitor control inputs, determine event parameters such as resulting pressures in lines, and determine output data such as gage readings and hose portion control parameters. Device hardware module514includes programming to control and receive inputs from the various components of the pump panel training device, including but not limited to controlling gages and interpreting control input settings. Scoring module516compares simulated operation of the pump panel training simulator to programmed criteria. Modules512,514, and516can include any related programming and related processes and are intended only as non-limiting examples of how the system could be configured. Input module530includes any devices or mechanisms useful to receive trainee and instructor input to modulate operation of the simulation event, and can include but is not limited to simulated knobs, levers, buttons, and inputs to a touch screen display. Output module540includes any devices or mechanisms useful to provide outputs to display screens, gages, hose portions, audio speakers, and other devices necessary to provide output to the trainee or instructor. Communication device520includes any wired or wireless communication system required to send and receive data from the computerized device. FIG.10schematically illustrates in cross-sectional view an exemplary hose portion connected to an exemplary intake connection of a pump panel training device, with components internal to the pump panel training device fluidly connected to the hose portion and operable to control temperature, pressure, and vibration of the hose portion. Cabinet outer surface620as part of a pump panel training device is illustrated including a connected intake connection650. Hose portion652is connected to intake connection650. Exemplary components internal to the illustrated pump panel training device are illustrated including a substance thermal control device630and a substance pressure control device640. Any number of various devices in the art can be used as thermal control device630, which can include heating coils, a coolant loop, and/or a refrigerant loop to selectively heat and cool water, air, or any other substance used to fill hose portion652. Any number of various devices in the art can be used as substance pressure control device640, which can include a mechanically driven piston642or any other mechanism useful for controlling pressure and/or vibration of the substance used to fill hose portion652. In one example, a compressor providing pressurized air along with a plurality of control valves can be used to selectively change pressure acting upon the substance used to fill hose portion652. FIG.11schematically illustrates an exemplary pump panel training device operable to be operated with exemplary virtual reality and augmented reality devices. 
Virtual reality or augmented reality can be utilized in combination with the disclosed pump panel training devices. Pump panel training device710is illustrated including cabinet720and audio speaker740. Displays and gages have been omitted from pump panel training device710, and instead a virtual reality headpiece760and a tablet computerized device770operable to operate augmented reality or mixed reality are illustrated. Tablet computerized device770is exemplary and can be substituted with any portable computerized device capable of operating augmented or mixed reality. Tablet computerized device770includes camera view angle772useful to capture images with device770. Either virtual reality headpiece760or tablet computerized device770can be utilized in cooperation with pump panel training device710to operate a simulation event, for example, with gages and the display being provided as rendered graphics upon either virtual reality headpiece760or tablet computerized device770. Visual tokens730A and730B are provided as exemplary QR codes and enable a computerized controller to coordinate movement of virtual reality headpiece760or tablet computerized device770with the rendered graphics, such that a user can still interact with pump panel training device710and the various control inputs thereupon while viewing simulation details upon virtual reality headpiece760or tablet computerized device770. Similarly, a visual token730C embodied as a logo upon hose portion752is provided to enable the computerized controller to coordinate movement of virtual reality headpiece760or tablet computerized device770with the rendered graphics, such that a user can still interact with hose portion752and receive temperature, pressure, and vibratory sensations therefrom. FIG.12is a flowchart illustrating an exemplary process to provide tactile outputs to a user related to a pump panel simulation. Process800starts at step802. At step804, the system initiates the simulation, including utilizing any parameters that are pre-programmed for the simulation and/or selected by a trainer supervising the simulation. At step806, simulated outputs generated by the simulation are displayed to the user, for example, with pressure gage readings, audio outputs, and warning lights providing a user with a simulated status of a pump panel. At step808, inputs from the user regarding the simulation and simulated control of the pump panel are monitored and simulation results based upon the inputs are generated, for example, simulating an effect of activating a particular valve upon water pressures throughout the system. At step810, a determination is made whether the simulated results warrant a tactile output, for example, changing a pressure within a hose portion attached to the pump panel. If no tactile output is warranted, the process advances to step814. If a tactile output is warranted, the process advances to step812where components or devices available to and controlled by the simulation create the warranted tactile outputs. At step814, a determination is made whether the simulation is concluded. If the simulation is not concluded, the process returns to step806to reiterate steps of the simulation. If the simulation is concluded, then the process ends at step816. Process800is provided as an exemplary process that can be utilized to operate a pump panel simulator in accordance with the present disclosure. 
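As a non-limiting illustration, the control loop of Process800can be sketched in Python as shown below. The placeholder callables stand in for the training device hardware, and the helper names are hypothetical rather than an API defined by the disclosure.

# Sketch of the Process 800 loop from FIG. 12, with trivial stand-ins for the hardware.

def run_simulation(init_params, display, read_inputs, step_model, apply_tactile, done):
    """Iterate display -> monitor inputs -> compute results -> tactile output until done."""
    state = init_params                       # step 804: initiate with preset parameters
    while True:
        display(state)                        # step 806: gages, audio, warning lights
        user_inputs = read_inputs()           # step 808: monitor trainee control inputs
        state = step_model(state, user_inputs)
        if state.get("tactile_output"):       # steps 810/812: e.g. change hose pressure
            apply_tactile(state["tactile_output"])
        if done(state):                       # step 814: is the simulation concluded?
            break                             # step 816: end
    return state

# Example wiring; a real device would drive gage, audio, and hose hardware instead.
result = run_simulation(
    init_params={"t": 0, "tactile_output": None},
    display=lambda s: None,
    read_inputs=lambda: {},
    step_model=lambda s, u: {**s, "t": s["t"] + 1,
                             "tactile_output": "pressurize_hose" if s["t"] == 2 else None},
    apply_tactile=lambda cmd: print("tactile:", cmd),
    done=lambda s: s["t"] >= 5,
)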
Other embodiments of the process are envisioned in accordance with the disclosure, and the disclosure is not intended to be limited to the examples provided herein. FIG.13schematically illustrates an exemplary alternative configuration of a pump panel training device used in tandem with a water hose circuit. System900is illustrated including pump panel training device910including simulated water hose952operable to provide a tactile output to a user of pump panel training device910. In order to provide a realistic tactile output, simulated water hose952is part of water hose circuit940operable to circulate water through simulated water hose952. Water hose circuit940includes supply hose930, return hose932, and water pump and conditioning unit920. Water pump and conditioning unit920is illustrated as a separate physical device from pump panel training device910. In some embodiments, water pump and conditioning unit920may be integral with pump panel training device910, with water flowing through an external loop of water hose including supply hose930. Water pump and conditioning unit920may include water pump922operable to create water pressure within supply hose930and circulate water through water hose circuit940. Water pump and conditioning unit920may include water temperature control device924operable to heat and/or cool water flowing through water hose circuit940and may include elements such as electric heating coils and a refrigerant cooling circuit to effect changes to water temperature. In one embodiment, water temperature control device924can include one or more water reservoirs useful to maintain a quantity of water at a certain temperature to increase an ability of the simulation to quickly deliver a change to water temperature. Water pump and conditioning unit920may include water vibration control device926operable to create pulses or rapid pressure variations in the water flow through water hose circuit940. Water hose circuit940is provided as an exemplary embodiment, and the disclosure is not intended to be limited to the examples provided herein. The disclosure has described certain preferred embodiments and modifications of those embodiments. Further modifications and alterations may occur to others upon reading and understanding the specification. Therefore, it is intended that the disclosure not be limited to the particular embodiment(s) disclosed as the best mode contemplated for carrying out this disclosure, but that the disclosure will include all embodiments falling within the scope of the appended claims.
39,531
11857819
DETAILED DESCRIPTION Referring to the illustrated assemblies ofFIGS.1-29, one example embodiment of an improved exercise machine or reformer30is presented. The present exercise machine30can be used in various methods of exercise, and preferably, with Pilates-style fitness regimens. An example Pilates reformer is described in U.S. patent application Ser. No. 15/213,258, for “Pilates Exercise Machine,” issued as U.S. Pat. No. 10,046,193 to Aronson, et al., which is incorporated by reference in its entirety. A reformer is a type of exercise machine which may have a frame supporting two parallel tracks along which a wheeled carriage can travel. Springs or other resistance members can be used to resiliently bias the carriage towards one end of the frame. A user typically sits or lies on the carriage and pushes against a foot bar to move the carriage away from the foot bar. Alternatively, the user can grasp the ends of a pair of ropes or straps that pass through pulleys on the frame and are attached to the carriage to move the carriage along the tracks. Existing reformers present issues with changing resistance levels, changing the machine configuration to accommodate differing exercises, adjusting the absolute rope lengths and the lengths of ropes relative to one another, and so on. Provided herein, among one or more benefits (potentially including other aspects and/or benefits not listed here), is an exercise machine that is easy to use, by providing mechanisms that allow the user to easily change the machine's configuration and make adjustments as the user moves seamlessly from one exercise to another. Looking atFIGS.1-3, an example embodiment of the present exercise machine30generally includes a frame assembly32including rails40,42, a translating carriage62, which rolls longitudinally atop the rails40,42between the front end88and back end90of the exercise machine30. Near the front end88is a front platform46and a foot bar44which can be tilted about the frame assembly32. Near the back end90is a height adjustable seat56and foot pedals58,60. Also, near the back end90is a pair of handle bars52,54(which can also be used as foot bars in at least one configuration), supported respectively by vertical handle bar posts76,78. FIG.1illustrates the seat56in the lowered configuration, where the seat56is substantially level with the translating carriage62and the front platform46(e.g., less than 1″ or less than 0.5″ in height difference). One portion of the user's body may be supported on the translating carriage62, while another portion of the body may be supported by either the front platform46, when closed, or the seat56, while in the lowered configuration. Normally, the translating platform/carriage62is permitted to freely roll along the rails40,42(as indicated by arrow84), but may be selectively connected by one or more resistance springs45to the frame assembly32. The resistance springs45resistively connect the translating carriage62to the frame assembly32, so that the translating carriage62is spring-biased towards the front end88. The user must overcome the spring bias in order to move the translating carriage62towards the back end90. The resistance level may be adjusted by connecting a chosen number of resistance springs45or a specific resistance spring45to the frame assembly32. 
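For illustration only, the effect of connecting a chosen number of resistance springs45can be approximated with a simple Hooke's-law model, as in the Python sketch below. The spring rates and displacement are made-up values, not measurements from the disclosed machine.

# Illustrative sketch only: carriage bias force modeled as the sum of engaged springs.

def carriage_resistance_lbf(engaged_rates_lb_per_in, displacement_in, preload_lbf=0.0):
    """Force opposing carriage travel for a given displacement toward the back end."""
    return preload_lbf + sum(k * displacement_in for k in engaged_rates_lb_per_in)

# Example: two of four hypothetical springs connected, carriage pushed 12 inches back.
available = [2.5, 2.5, 5.0, 7.5]        # hypothetical spring rates, lb/in
print(carriage_resistance_lbf(available[:2], displacement_in=12.0))   # 60.0 lbf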
The translating carriage62generally includes two shoulder rests62,68, as well as a strap extending across the top of the translating carriage62, which may be used to hold the user's feet while exercising or for other purposes. The foot bar44is generally U-shaped, with a straight horizontal section and two vertical sections which each connect to the frame assembly32through tilt adjustment mechanisms100. The straight horizontal section is preferably encased in a grip material, such as foam rubber or other cushioning and gripping material. The angle or tilt of the foot bar44may be adjusted relative to vertical. For example, in a first position, the foot bar44may extend vertically, as shown inFIG.1. Additionally, the foot bar44may be angled towards the front end88or towards the back end90. In either of the above positions, the foot bar44is held firmly at a selected tilt angle by the tilt adjustment mechanisms100, such that the user may perform various exercises by contacting the foot bar44. When desired, the foot bar44may be tilted to a horizontal stowed position, extending towards the front end88, such that the user may perform exercises not requiring the foot bar44, as will be described in greater detail below in reference toFIGS.7-10and18-20. The present exercise machine30also generally includes a balance bar50hung beneath the rail40. When removed, the balance bar50can be held in one hand with the end of the balance bar50(usually a rubber foot) rested upon the floor to enable the exerciser to maintain balance during standing exercises or other precarious exercises. Seen just beneath the translating carriage62is the jump board assembly74in the stowed position, where the translating carriage can roll above the jump board assembly74without interference. A resistance ring48is removably mounted to the jump board assembly74by ring mounts49. Two side skirts70,72(made of metal, plastic, etc.) are mounted beneath respective rails40,42, to enhance looks, add rigidity, and to protect the mechanisms therebehind from damage and debris. Further, a rope length adjustment assembly96is secured to the underside of the translating carriage62, for changing the length of one or more of the ropes. Beneath the height adjustable seat56is a foot strap mechanism346that includes a rotating pulley head348that allows the pulley to spin relative to the telescoping extension bar350(once the pull pin351is released) that extends rearwardly (as indicated by arrow352) to permit attachment of the tensioned ankle strap and cable (not shown) to the exerciser. Further details of the present exercise machine30include two notches92,94formed in the height adjustable seat56to permit the exerciser to gain access to the height adjustment paddle beneath the seat56, which enables the exerciser to change the height of the seat56. Furthermore, a pedal assembly57is positioned beneath the height adjustable seat56, where either or both of the pedals58,60can be pushed down against resistance when the height adjustable seat56is in the raised position. Additionally, a weight tray98is mounted to the frame assembly32, beneath the path of the translating carriage62, for holding various dumbbells and other exercise equipment. Turning now toFIGS.4-6, a pedal resistance adjustment mechanism101is illustrated. 
Because there is great difficulty in changing resistance levels when pedals are under resistance, the present mechanism101automatically relieves the tension in the resistance cable108when the pedal is in the initial position (with the pedal58in the highest or near highest position) to permit the adjustment in resistance level to be made. Referring also toFIGS.18and20, resistance to the pedals58,60is provided by a resistance source, in this example embodiment two extension springs340,342, each connected at one end to the frame32through the pedal spring bracket344, with the opposite ends being connected to resistance cable108(or other appropriate linkage, flexible or substantially rigid), such that the spring force produced by extending the extension springs340,342produces a tension in the cable108. Generally, the extension springs340,342are optionally pre-stretched to produce a continuous tension on the resistance side110of the resistance cable108even when not in use, which keeps the springs quietly in place at a desired initial resistance level. The resistance cable108passes through a hole (not shown, but drilled parallel to the paper) in the face of the resistance bracket plate114, which is mounted to the frame32. Crimped or otherwise secured to the resistance cable108is a stop113, which is generally comprised of a metal crimp and a rubber cylinder to quiet any contact with the resistance bracket plate114. When the stop113is rested on the resistance bracket plate114and the pedal58is located in the highest position (as shown inFIGS.4and5), tension is released, minimized, and/or reduced on the pedal side112of the resistance cable108. In this configuration, the resistance side110of the resistance cable108will have a higher tension than the pedal side112of the resistance cable108, due to the resistance bracket plate114bearing the tension when the stop113rests against the resistance bracket plate114. Because the tension on the pedal side112of the resistance cable108is near zero or greatly reduced, the resistance level of the pedal resistance adjustment mechanism101can be easily changed without binding or other difficulties. Optional pulleys116,118are mounted to the frame32and serve to provide a bending point (e.g., a directional change or shift) for the resistance cable108as the resistance level is changed and also serve to change the height of the resistance cable108to match the height of mating components and to avoid abrasion with other portions of the present device30. The end of the resistance cable108may include a ball370, enlarged head, or other attachment means (swaged, brazed, crimped, etc., onto the end of the cable108) which can be captured within the cable hook122, which is much like a modified clevis, comprising a U-shaped metal strip with a longitudinal slot368which provides clearance to permit the cable108to travel through the slot368, but is too narrow to permit the ball370to travel through, thus trapping the end of the resistance cable108to the cable hook122. The cable hook122is attached to two linkage bars124(not to be confused with the linkage connected to the resistance source, a cable in this example) through pivoting joint125(only one linkage bar is possible in alternate embodiments). The pivoting joint125is created by inserting the end of the linkage124within the cable hook122, and inserting and securing a pin372through the two linkage bars124and the cable hook122, with the pin retained therein by a retaining ring or the like. 
During assembly, the pin372is also inserted through an arced slot120formed through a resistance plate127, to connect the pivot joint125(and the end of the linkage124and cable108) to the arced slot120, so that travel of the pivot joint125and the proximal end of the linkages124are restricted to the arced slot120, with the pin372riding within the arced slot120with the linkages124on each side of the resistance plate127. The resistance plate127is attached to the pedal arm148by welding, fasteners, or other appropriate attachment means, so that, as the pedal arm148rotates about the pedal axle150the resistance plate127rotates likewise. Transversely welded to the resistance plate127edge is a bumper plate142, which contacts a bumper stop138at the upper limit of the pedal arm148travel. A limiter plate140is attached to the frame32to establish the lower limit of the pedal arm148travel. The resistance plate127further includes a resistance setting slot128, although the resistance setting slot128can be formed on another structure connected to the pedal arm148. In this example embodiment, the resistance setting slot128is a linear slot with a series of enlarged portions formed at even or uneven increments along the resistance setting slot128, forming the set holes130,132,134,136, which are created, for example, by drilling through the slot with a bit having a diameter larger than the slot128. The set holes130,132,134,136are each configured to hold in place distal ends376of the linkages124, by selectively receiving a portion of the pull pin assembly126therein to prevent movement of the distal ends376relative to the resistance setting slot128. Looking atFIGS.4A-B, the pull pin assembly126includes a ball354to provide purchase for pulling the pin356as indicated by the arrow357. The pin356is spring biased opposite the arrow357, toward the resistance setting slot128by the spring unit358(internal compression spring not shown). A position set pin360is firmly attached or integral with the pin356. The position set pin360includes a tapered or chamfered tip366, a cylinder locking portion364, and a shoulder362set back from the chamfered tip366, with the cylinder locking portion364between the two, and arranged axially on the pin356. The chamfered tip366acts as a lead-in to guide the set pin360into engagement with the set holes130,132,134,136, when aligned. To change the resistance level applied to the pedal58against the exerciser's effort, the pull pin assembly126with the distal ends376of the linkages124can be moved between set holes130,132,134,136, changing the length of the lever arm. In this example embodiment, it follows that the pull pin assembly126being locked into position at set hole130produces maximum resistance, and being locked into position at set hole136produces minimum resistance. More specifically, to change the resistance setting, the pedal58should be in its highest position (or within 1-3 inches of it), as shown inFIGS.4and5, to release the tension in the resistance cable108. In this position, the pedal arm148does not exert a significant amount of tension on the pedal side112of the resistance cable108, permitting the stop113and the bracket114(or other portion of the frame or part rigidly connected directly or indirectly to the frame) to bear the full load of the resistance. 
In this way, the pedal side112of the resistance cable108becomes somewhat slack so that the exerciser can easily slide the pull pin assembly126and linkages124up and down the resistance set slot128when pull pin assembly126is actuated (as indicated by arrow154). Looking again atFIG.4A, to activate the pull pin assembly126, the exerciser pulls on the ball354in the direction of arrow357to remove the cylinder locking portion364of the set pin360from the set hole130,132,134,136within which it is initially locked. The cylinder locking portion364is slightly smaller in size than the set holes130,132,134,136, but larger than the resistance set slot128, so that the cylinder locking portion364drops into one of the set holes130,132,134,136and is not permitted to move out. Once the cylinder locking portion364of the set pin360is removed from the set hole (hole130inFIG.4), the pin356is permitted to move within the resistance set slot128, as its diameter is less than the width of the resistance set slot128. If the exerciser wishes to move from one set hole to the neighboring set hole, she need only pull the pull pin assembly126to disengage, move the pull pin assembly126slightly out of alignment with the set hole130,132,134, or136, and release the pull pin assembly126, where the chamfered tip366rides on the resistance set slot128, allowing the pull pin assembly126to engage automatically when the cylinder locking portion364aligns with the neighboring set hole130,132,134, or136. The exerciser can also continually actuate the pull pin assembly126to slide it to any set hole130,132,134, or136. ComparingFIG.4toFIGS.5-6, it can be seen that the pull pin assembly126is moved from set hole130to set hole134, thus reducing the resistance applied to the pedal arm148, by increasing the lever arm. The resistance from the springs340,342(as shown inFIGS.17,18, and20) is applied to the arced slot120, where the position of the pivot joint125within the arced slot120, in fact, changes the lever arm. The pin372of the pivot joint125is held in position in the arced slot120by the rigid linkages124being held in position by the pull pin assembly126being locked in one of the set holes130,132,134, or136as described above. When an exerciser pushes down on the pedal58(as indicated by arrow152inFIG.6), the pivot joint125does not slide relative to the arced slot120, but instead, is held in position between the first end378and the second end380of the arced slot120, as the pedal58is pushed down to pull the resistance cable, as indicated by arrow153. In this example embodiment, the addition of the linkage124moves the pull pin assembly126from deep within the pedal mechanism toward the pedal58, allowing easy and safe access for the exerciser to quickly change the resistance during a routine. Of course, the linkage124and resistance set slot128are optional, as the tension relief provided by the stop113and bracket114does not require any specific resistance set means. In one alternate example, the linkage124and resistance set slot128are eliminated, with the pull pin assembly126positioned at the arced slot120, where the pivot joint125is located, where the arced slot120is modified to include the set holes130,132,134,136. 
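To make the lever-arm relationship concrete, the following is a minimal sketch and not an analysis taken from the specification: it assumes a simple torque balance about the pedal axle150, a linear spring characteristic for the resistance source, and a hypothetical mapping from each set hole to an effective moment arm of the cable attachment. Every numeric value and name in the listing is invented for illustration only; the specification states only that set hole130is the maximum-resistance position and set hole136the minimum.

```python
# Minimal sketch (not from the patent): approximate felt pedal force per set hole,
# assuming a torque balance about the pedal axle and a linear spring law.
# All numeric values and the set-hole-to-arm mapping are hypothetical.

SPRING_RATE = 400.0    # combined rate of the extension springs, N/m (assumed)
PRETENSION = 50.0      # resting cable tension from pre-stretch, N (assumed)
PEDAL_LENGTH = 0.30    # distance from the pedal axle to the pedal pad, m (assumed)

# Assumed effective moment arm (m) of the cable attachment about the pedal axle
# for each set hole; hole 130 is given the largest arm in this simplified model.
MOMENT_ARM = {130: 0.20, 132: 0.16, 134: 0.12, 136: 0.08}

def pedal_force(set_hole: int, pedal_angle_rad: float) -> float:
    """Approximate force (N) at the pedal pad needed at a given pedal rotation."""
    arm = MOMENT_ARM[set_hole]
    cable_payout = arm * pedal_angle_rad               # arc length pulled by the rotation
    tension = PRETENSION + SPRING_RATE * cable_payout  # linear spring characteristic
    return tension * arm / PEDAL_LENGTH                # torque balance about the axle

if __name__ == "__main__":
    for hole in sorted(MOMENT_ARM):
        print(f"set hole {hole}: ~{pedal_force(hole, 0.8):.0f} N at 0.8 rad of pedal travel")
```

Under these assumptions the felt force grows with the effective arm both through the extra cable payout and through the torque balance, which is one simplified way to read the statement that moving the pull pin assembly126between set holes130,132,134,136changes the resistance level.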
Although the resistance adjustment system/mechanism is described herein as a pedal resistance adjustment mechanism101, the resistance adjustment mechanism can be connected to a variety of exerciser purchases (e.g., a hand hold, foot hold, etc., and other connected linkages), where the exerciser can change resistance without disconnecting from the resistance source. Looking now atFIGS.7-10, an exemplary embodiment of the tilt adjustment mechanism100is illustrated, which permits the footbar44to tilt or rotate from the direction of the front end88to the direction of the back end90, rotating about the pivot assembly166. In the example embodiment, the footbar44can be held at one of three discrete angular positions relative to the frame assembly32, plus a stowed position lying near or at horizontal or, minimally, out of the way. As both sides are generally identical in concept and operation, only one side of the tilt adjustment mechanism100is described herein. The pivot assembly166, in this example, includes a shaft aligned with the axis of rotation174, and creating a hinge between the pivot support bracket168(attached firmly to the frame by fasteners204) and the sleeve172, using bushings, bearings, ball bearings, or other means of permitting smooth rotation under load. The footbar44generally has a horizontal top tube portion extending laterally across the frame32with two vertical side tubes on each side of the frame32extending downward. In this example embodiment, a collar176secures a rod164at the terminus of the vertical side tube of the footbar44. The rod164telescopically slides into the sleeve172, such that the rod164can axially slide within the sleeve172by pulling upward (as indicated by arrow196) or pushing downward (as indicated by arrow198) on the footbar44. Optionally, a bushing165lines the inner surface of the sleeve172to prevent chatter and looseness in the telescoping connection and to provide a pleasing feel. Referring toFIG.8, the rod164is inserted completely through the sleeve172, with the distal tip180extending into the interior183of the leg43of the frame assembly32. The distal tip180, in one example, is wedge-shaped (tapered on both sides) to permit easy location and insertion into complementary shaped locating notches186,188,190, as will be described further below. A tilt lock plate185is secured to the interior183of the leg43, positioned within the interior by threaded bosses194,195, which act as spacers to locate the tilt lock plate185and to receive the fasteners204, tightly securing the tilt lock plate185in the interior183. The locating notches186,188,190are formed on an arc-shaped edge181at the top of the tilt locking plate185. The locating notches186,188,190are generally formed radially from the center of rotation174. At one end of the arc-shaped edge181, a protrusion of the tilt lock plate185toward the center of rotation174forms a stop192, to limit the clockwise rotation of the footbar44, where the footbar44would be horizontal or nearly horizontal to the side rail42when the distal tip180is engaged against the stop192. A cover plate170is fastened to the leg43to at least partially enclose the interior183. To permit axial sliding of the rod164within the sleeve172over a limited displacement, a limiting slot182is formed through the rod164, which receives therethrough a pin184that is press fit or otherwise secured through the sleeve172at each end, effectively holding the rod164within the sleeve172. 
The travel of the rod164is limited by the length of the limiting slot182, which permits enough travel to lift the distal tip180from its respective locating notch186,188, or190, as seen inFIG.9. It can be seen that the distal tip180is initially located in locating notch188to hold the foot bar44in a vertical orientation. As the footbar44is lifted up, the distal tip180is lifted out of and clear of the locating notch188, and is ready for repositioning into another locating notch by rotating the footbar44clockwise or counterclockwise, as indicated by arrows198,200. Once the distal tip180is aligned with the desired locating notch (190in this example), the foot bar can be pushed down and toward the locating notch190(as indicated by arrow202) to insert the distal tip180into the locating notch190; thus, locking the angular position of the foot bar44. Turning now toFIGS.11-16, an example embodiment of the rope adjustment assembly206is shown in greater detail and isolated from much of the remaining exercise device30.FIGS.11and12illustrate rope adjustment assembly206mounted to the underside of carriage assembly34. The rope adjustment assembly206has an enclosure208supporting the various components on and within the enclosure208. A handle assembly209is positioned on the bottom face234of the enclosure208and connects with an adjustment wheel240positioned within the enclosure208through arced slot236. The purpose of the handle assembly209is to shorten or lengthen all the ropes214,216,218,220connected to the rope adjustment assembly206by permitting the turning of the adjustment wheel240. The enclosure208includes through holes to receive thumb screws246,247(basically, knurled knobs with a threaded stud), which thread into the underside of the carriage62(screwed into the substructure, such as a threaded insert attached to plywood, oriented strand board, medium density fiber board, etc.). The enclosure can hook to the underside of the carriage62at one side and be attached by the thumb screws246,247on the other, to hold the enclosure208and attached components to the carriage62, yet allow quick removal for inspection or repair. Inspection/access holes244or general openings for other purposes may be punched or cut through the bottom plate234. Looking at the front plate288of the enclosure208, there are four holes providing clearance for each of the four ropes214,216,218,220exiting from the enclosure208. Two further holes in the front plate288of the enclosure208provide clearance for the threaded shafts260,262(discussed further below) to protrude from the enclosure208, with a first adjustment knob210attached to the end of threaded shaft260and a second adjustment knob212attached to the end of threaded shaft262. Although the ankle strap rope mount230is also mounted on the bottom face234and is immediately next to the handle assembly209, the ankle strap rope mount230and any connected rope are not part of the handle assembly209. The ankle strap rope mount230includes an opening231to permit the looped end of a rope (not shown) to be hooked by the ankle strap rope mount230. The opposite end of the rope would be threaded through the foot strap mechanism346illustrated inFIG.2, and include an attachment on the distal end, such as an ankle strap, carabiner, etc. The handle assembly209pivots on a spring pivot assembly256mounted to the bottom face234of the enclosure208, and configured to selectively rotate about the axis232. 
The handle assembly209includes rotation bracket222shaped like a “T”, with a handle228extending from the stem of the “T” and a pin223extending from the bottom face of the stem toward the bottom plate234. Fasteners242,243insert through holes at each end of the arm of the “T” to fasten the rotation bracket222to the adjustment wheel240mounted on the opposite side of the bottom plate234, with the fasteners accessing the adjustment wheel240through arced slot236. The spring pivot assembly256permits the handle228to be pulled away from the bottom plate234by allowing the rotation bracket222to tilt relative to axis232against the force of the spring292(referring also toFIG.16). As the handle228is tilted and pulled away from the bottom plate234, the pin223is removed from one of the set holes224,225,226,227(set hole226in this example). Once the handle228is lifted sufficiently to remove the pin223from set hole226inFIG.11, the handle228can be rotated about axis232as indicated by arrows238(in a counter clockwise direction), which causes the adjustment wheel240to similarly rotate. The handle228may be continually lifted while being rotated or the pin223can slide across the bottom plate234until reaching the next set hole224,225,226, or227, where the pin223drops into the first set hole224,225,226, or227encountered. In this example, comparingFIG.11toFIG.12, the handle is move from set hole226to set hole224. The result of rotating the handle228will be discussed in greater detail below. Still referring toFIGS.11and12, brackets310,312are fastened to the underside of the carriage62on each back corner, and extend toward the back end90of the exercise device30. The brackets310,312each serve to hold respective strap anchors313, which are sandwiched between the brackets310,312and the underside of the carriage62. The brackets310,312extend toward the back end90and cantilever from the carriage62. The cantilevered portions of the brackets310,312each hold a handle306,308, which may be grasped by hand in certain exercises, or which may be used for other purposes, such as a pulley-like device for wrapping a rope about to change the direction of the rope. Referring now toFIGS.13,14, and15, the rope adjustment assembly206is shown separate from the carriage assembly34. There are two types of rope adjustment provided by the present rope adjustment assembly206, a coarse rope length adjustment and a fine rope length adjustment. Looking first at the coarse rope length adjustment provided by the adjustment wheel240(described partly above as being fastened to the rotation bracket222of the handle assembly209so that both rotate together), one or more of the ropes214,216,218,220(in this illustrated example all the ropes) are configured to wrap about or unwrap from, at least partially, the adjustment wheel240when the handle assembly209is rotated. Looking back atFIGS.11and12, the handle assembly209is shown being rotated counterclockwise (an exemplary direction, from the reader's point of view) to cause the adjustment wheel240to rotate about the same rotation angle (being illustrated as clockwise inFIGS.11and12) and wrap the ropes214,216,218,220about the rope adjustment wheel240to cause all the ropes214,216,218,220to shorten. In other words, the rope length available (e.g., the usable length or the free length) to the exerciser is reduced as the ropes are reeled about the rope adjustment wheel240. 
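As an illustrative approximation (the symbols here are assumptions for this sketch and do not appear in the specification): if a rope wraps onto the rope adjustment wheel240at an effective radius R and the handle assembly209is rotated through an angle \theta (in radians), the usable length of each wrapped rope changes by roughly the wrapped arc length,

\Delta L \approx R \, \theta .

A comparable estimate applies to the fine adjustment described below: assuming a rope bends roughly 180 degrees around a guide tube that advances n turns along a shaft of thread pitch p, the usable length changes by about \Delta L \approx 2 n p.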
Oppositely, when the handle assembly209is rotated clockwise (as viewed fromFIGS.11and12), the ropes214,216,218,220unwrap from the rope adjustment wheel240to lengthen the ropes214,216,218,220, which increases the rope length available to the exerciser. Of course, the direction of rotation (clockwise and counterclockwise) to wrap or unwrap the ropes214,216,218,220is a design choice and may be reversed. Further, although all four ropes214,216,218,220are shown as capable of wrapping about the adjustment wheel240, a lesser number or greater number of ropes may be configured to wrap about the adjustment wheel240. The usable length of all four ropes214,216,218,220are lengthened and shortened simultaneously, as the rotation of the rope adjustment wheel240changes all rope214,216,218,220lengths equally and at the same time. The ropes214,216,218,220may be attached to the rope adjustment wheel240in a variety of ways. In the illustrated example, the rope adjustment wheel240includes rope mount cutouts248,250, which are open ended grooves or other similar features which position the ropes214,216,218,220to wrap about outer diameter255of the rope adjustment wheel240. Rope clamps252,254securely hold the ropes214,216,218,220within the rope mount cutouts248,250, so that the ropes214,216,218,220cannot be pulled free from the rope mount cutouts248,250under normal usage. The ropes214,216,218,220are illustrated in the example ofFIGS.13-15as being two ropes which are folded within the rope mount cutouts248,250to create two ropes apparently extending from the rope adjustment wheel240, which permits the L-shaped or 90 degree rope clamps252,254to more easily hold the folded rope, as the ropes fold about a leg of the rope clamps252,254that extends down into the rope mount cutouts248,250. However, each rope214,216,218,220may be separate from the others in design alternatives. By the exerciser grasping the handle228and rotating or shifting the handle assembly209, the length of all of the ropes can be shortened or lengthened according to the needs of that exerciser. Referring still toFIGS.13-15, the rope adjustment wheel240rotates about the pivot center258, which includes a fastener (e.g., a bolt, threaded stud, etc.) that connects the pivot center258to the spring pivot assembly256. FIGS.13-15additionally illustrate the fine rope length adjustment feature, which is controlled by the manual rotation of the first adjustment knob210and the second adjustment knob212extending from the front plate288of the enclosure208. Fine rope length adjustment is provided by threaded shafts260,262with the adjustment knobs210,212, respectively, attached to the ends of the threaded shafts260,262. The opposite ends of the threaded shafts260,262are supported by shaft mounts268,270, which are plates welded to the enclosure208, with female threads for receiving the male threads of the threaded shafts260,262. The ends of the threaded shafts260,262nearest the adjustment knobs210,212can be simply supported by the clearance holes in the front plate288through which the threaded shafts260,262pass. On each threaded shaft260,262there are two spacers or sleeves slipped or threaded over the threaded shafts260,262. A spacer264,267is positioned over the threaded shafts260,262, respectively, nearest to the shaft mounts268,270. A spacer265,266is positioned over the threaded shafts260,262, respectively, nearest to the adjustment knobs210,212. 
At least one purpose of the spacers264,265,266,267is to limit the travel of the rope guide tubes272,274, through which the threaded shafts260,262pass perpendicular to the central axis of the rope guide tubes272,274, where the rope guide tubes272,274each include a threaded nut276,278for receiving the threaded shafts260,262threaded therethrough. As the exerciser turns the adjustment knobs210,212the rope guide tubes272,274are permitted to travel along the length of the threaded shafts260,262between the spacers264,265,266,267(where the rope guide tubes272,274move relative to the enclosure208), and are thus limited by the spacers264,265,266,267. During operation, at least two of the ropes214,216,218,220are bent about the rope guide tubes272,274, where, as the rope guide tubes272,274travel toward the shaft mounts268,270, the lengths of the ropes (in this example, ropes216and220) are shortened, each independent of the other. As the rope guide tubes272,274travel toward the front plate288, the lengths of the ropes216,220are lengthened, again, each independent of the other. In this way, when one rope becomes slightly longer or shorter than the other (for example, when the handles at the free ends of the ropes do not perfectly align due to the ropes stretching over time), the exerciser can finely adjust the length (from a small fraction of an inch to, perhaps, over several inches) of one or both ropes by turning the associated adjustment knob210or212, until the rope lengths match. FIG.16shows a cross-section of the present rope adjustment assembly206, for more clearly illustrating construction and operation of the spring pivot assembly256. The rotation bracket222is attached by welding to a pivot shaft300, which extends through a center hole of the rope adjustment wheel240, lined with a bushing304so that the rope adjustment wheel240can rotate about the pivot shaft300. A screw295(with washer) captures the rope adjustment wheel240to the pivot shaft300, yet still permits rotation of the rope adjustment wheel240relative to the pivot shaft300. A compression coil spring292is slid over the pivot shaft300above the rotation bracket222, with a screw294(with washer) capturing the spring292on the pivot shaft300between the screw294and the rotation bracket222. In this way, when the exerciser pulls up on the handle228, the spring292is compressed between the screw294(pressing against the washer) and the rotation bracket222to bias the rotation bracket222and the attached handle228back toward the bottom plate234of the enclosure208, causing the pin223to be similarly biased to locate within one of the location holes224,225,226,227. In this view, the actual pin223is hidden from view by a spacer overtop the pin, where the spacer keeps the rotation bracket222separated from the bottom plate234. A cotter pin290can be inserted overtop or through the threaded shafts260,262, acting as a limiter to prevent withdrawal of the threaded shafts260,262from the shaft mounts268,270. Turning now toFIGS.17-20, the jump board assembly315is shown transitioning from the stowed configuration inFIG.17to the deployed configuration inFIGS.19and20. InFIG.17, the jump board assembly315is folded within the frame assembly32of the exercise device30. Specifically, when in the stowed configuration, the jump board assembly315is folded between the frame rails40,42and lower than the frame rails40,42. 
The jump board assembly315is sufficiently lower than the frame rails40,42to provide clearance for the normal operation of the carriage assembly34as it rolls along the frame rails40,42, and for the normal operation of the rope adjustment assembly206, as well as the springs and other components that operate beneath the carriage assembly34. The jump board frame316is generally a U-shaped tubular steel structure that rotates about both distal ends at hinges323. A jump board322is rotatably mounted to the jump board frame316through the frame board318. The hinges323permit the jump board assembly315to transition from the jump board322being substantially parallel with the frame rails40,42and carriage62(or within 0-10 or 10-20 degrees of parallel) to the jump board322being substantially perpendicular to the frame rails40,42and carriage62(or within 0-10 or 10-20 degrees of perpendicular). The jump board322includes a frame board318attached firmly to the frame316, where the frame board318is made of a sheet of material such as a plywood, oriented strand board, medium density fiber board, etc. Attached to the frame board318(or, optionally, the frame316) are ring mounts49holding a resistance ring48, which is securely attached to the frame board318so that the jump board assembly315can be stowed or deployed with the jump board322, yet removed at any time for exercises with the resistance ring48. The frame board318further includes a pull pin314(which is used to rotate the jump board322, as discussed below) and a pivot330that rotatably connects the jump board322to the frame board318. The jump board322includes a handle320mounted to the back board324for lifting the jump board322and a rotation locking plate334. FIG.18shows the jump board assembly315during the process of deployment, where the footbar44is tilted down, as indicated by arrow315, and front platform46is tilted up, to provide clearance for the jump board frame316and jump board322. With the jump board assembly315tilted up, as indicated by arrow328, one of the bumpers326mounted to the frame cross member can be seen. An additional bumper (not visible) can be positioned on the opposite side of the frame cross member. These bumpers326are designed to prevent metal-to-metal contact with the jump board frame316and to quiet the operation of the jump board assembly315. Roller catches327are mounted on each side of the front platform46, and are configured to deflect outwardly against an inward spring bias when the jump board assembly315is tilted up vertically and the frame316rotates up and pushes the roller catches327outwardly. Once past the rollers of the roller catches327, the frame316is selectively held by the roller catches327, until sufficient force is applied to the frame316to overcome the spring bias in the roller catches327, so that the jump board assembly315can be once again stowed. The rotation of the jump board322, as indicated by arrows332inFIG.19, permits the jump board322portion of the jump board assembly315to rotate ninety degrees to the fully deployed configuration. Since the jump board322is rectangular, the width of the jump board322has a dimension sufficiently narrow to fit between the frame rails40,42. However, if the jump board322were simply tilted up with the longitudinal sides of the jump board322oriented vertically, as shown inFIG.18, the jump board322would be too narrow for many exercises (although it is still possible to exercise in this orientation—just undesirable). 
Thus, the ability of the jump board322to rotate so that the longitudinal sides are parallel to the floor (or other horizontal support surface), enables the compact storage of the jump board322when stowed and the full surface of the jump board322being available to the exerciser when deployed, as the exerciser needs the jump board322as oriented as inFIGS.19and20to provide a wide surface upon which to kick off of with both feet. The rotation lock mechanism333permits the locking of the orientation of the jump board322relative to the frame316. The frame board318is attached to the frame316, with the jump board322rotating on the frame board318about pivot330(a threaded shaft with a bushing or the like). A rotation locking plate334is attached to the back side of the jump board322. The rotation locking plate334supports the mating side of the pivot330, and includes a pull pin314positioned a distance apart from the pivot330, where the pull pin314selectively locks the orientation of the jump board322relative to the frame board318. The rotation locking plate334further includes an arced slot336that receives a guide pin338extending from the frame board318, for limiting the rotation of the jump board322to a predetermined angle, ninety degrees in this example. The pull pin314is mounted on the frame board318, where its pin inserts into one of two holes in the jump board322(one at zero and the other at ninety degrees, with more holes available in alternate embodiments). In use, the exerciser pulls on the pull pin314to retract its pin from the mating hole, rotates the jump board322ninety degrees, where the pin of the pull pin314will drop into the other hole. The handle320can be used to stow, deploy, and rotate the jump board322. Returning the jump board assembly315to the stowed configuration is a simple matter of reversing the above-described steps. FIGS.21-26illustrate an example embodiment of a platform catch assembly382, which selectively holds the front platform46in an upright (e.g., a substantially vertically oriented position, within 10 degrees or within 20 degrees from vertical) and in a flat position (e.g., a substantially horizontally oriented position, within 5 degrees or within 10 degrees from horizontal). When using the front platform46, the exerciser often stands on various areas of the top surface of the front platform46. To prevent accidental tilting of the front platform46while standing near the front edge416, the platform catch assembly382is configured to resist unintended tilting. Moreover, the platform catch assembly382prevents the front platform46from slamming shut when upright. The platform catch assembly382generally comprises a hinge388positioned at or near the structural front edge416(e.g., within 0.5″, or within 1″, or within 2″) of the front platform46to rotatably connect the front platform46to the support bracket400of the platform frame408, thus, allowing the front platform46to pivot about the hinge388and rotate relative to the platform frame408. The fabric covered cushioning may extend slightly beyond the base structure of the front platform408, depending on the density and structural qualities of the internal foam, etc., as it may not produce a torque about the hinge388substantial enough to tilt the front platform46when a weight is applied in this unsupported area. The front platform46is supported atop and fastened to a support plate389, which, in turn, supports the hinge388. 
The support plate389includes a tab acting as a catch plate390bent at a right angle (or other appropriate angle) to the front platform46and extending downward. Beneath the front platform46a roller bracket386supporting a roller384. The roller bracket386is hinged to the platform frame408by the pivot402(e.g., a hinge). A compression spring406is captured between the roller bracket386and the platform frame408by a bolt404inserted through the spring406and fastened between the roller bracket386and the platform frame408. This permits the roller384to be pushed down by the bottom edge391of the catch plate390, as the front platform46is tilted about hinge388, where the roller bracket386tilts about pivot402against the bias of the spring406. Looking at the operation of the platform catch assembly382, the front platform46in relatedFIGS.21and24, is shown in the horizontally oriented configuration, where the exerciser can use the front platform46in various exercises (i.e., the front platform46is in an active configuration). It can be seen that the roller384and roller bracket386are beneath the front platform46and do not provide any direct spring bias against the catch plate390. RelatedFIGS.22and25show the front platform46in the process of being tilted up about hinge388, as indicated by arrows392and410. Arrows394and412illustrate that the roller bracket386with the roller384are being pushed (tilted) downward by the bottom edge391of the catch plate390acting directly on the roller384. The roller384is free to roll on the roller bracket386, and is made of a tough polymer material, such as DELRIN or the like, to resist wear and provide quiet operation.FIG.25shows that, as the catch plate390pushes on the roller384, the spring406on the opposite side of the pivot402is compressed. Finally, looking at relatedFIGS.23and26, the front platform46is shown in the vertically oriented configuration, where the front platform46is in an inactive configuration, providing clearance for other exercises or access to the various components therebelow, such as fastening or unfastening resistance springs. Although, the configuration is indicated as being vertical or vertically oriented, the hinge388permits the front platform46to rotate slightly past ninety degrees (e.g., five to fifteen degrees greater than ninety degrees) so that the front platform46will remain upright, with the catch plate390resting against a portion of the platform frame408(or connected part) to limit the rotation of the front platform46. Once the catch plate390is pushed past the roller384, the roller bracket386and roller384are pushed back up (toward the hinge388) by the spring406. Once the front platform46has been tilted up, as indicated by arrows396and414, the roller bracket386is permitted to rotate, as indicated by arrow418, so that the roller384returns to its original position, where it does not exert a force on the catch plate390. InFIGS.27-29, the adjustable handle assembly418is shown in the process of being adjusted by turning the handle52and handle bar428about the longitudinal axis440of the handle bar428, to change the orientation of the handle52relative to the remainder of the exercise machine30. For example, the handle52may be oriented parallel or perpendicular to the side rail42, pointed to either lateral side or forward or back. Thus, in the illustrated example embodiment, the handle can be oriented and locked in one of four directions angularly spaced ninety degrees apart. 
Looking first atFIG.27, the handle52and handle bar428are connected or are constructed of a single bent bar or tube, with the handle52formed by the ninety degree bend in the bar. A foam cover or other cushioning can be slid over the bar of the handle52. A vertical portion of the handle bar428is telescopically inserted into the handle bar post76, and is permitted to rotate and slide axially within the handle bar post76, as both the handle bar post76and handle bar428have a circular cross-section. The end of the handle bar428is positioned within the handle bar post76, with an end piece436attached (or formed on) to the end of the handle bar428. The end piece436is generally larger in diameter than the handle bar428, which creates a shoulder437that protrudes above the outer surface of the handle bar428. A plurality of pin receivers438,438′,438″,438′″ are formed on the distal end of the end piece436. In this example embodiment, the pin receivers438,438′,438″,438′″ are comprised of two intersecting grooves formed on the distal end of the end piece436. Alternatively, there may be other structures that perform a similar function, such as a plurality of notches or the like formed in a radial pattern on the distal end of the end piece436. The pin receivers438,438′,438″,438′″ are configured to each selectively receive the pin422of the pull pin420. Because the pin receivers438,438′,438″,438′″ are formed by two grooves intersecting at ninety degrees, movement of the handle bar428from one pin receiver to the adjacent pin receiver moves the handle bar428angularly by ninety degrees. Still looking atFIG.27, the handle52and handle bar428are shown in a raised configuration (versus the lowered configuration shown inFIG.1, with the handle52in its lowest position, where the collar435is adjacent to or touching the bushing434capping the opening of the handle bar post76) and in a first position where the handle bar428is oriented to position the pin422within pin receiver438″. To change angular position, the handle bar428is lifted upwards, as indicated by arrow442, to lift the pin receiver438″ above the pin422. To prevent the withdrawal of the handle bar428from the handle bar post76, a stop432is positioned or formed on the inner diameter of the handle bar post76. In this example, the stop432is a sleeve that is fastened or spot welded to the inner diameter of the handle bar post76. The sleeve provides clearance so that the handle bar428can freely move up and down, yet provides a stop to prevent the handle bar428from being removed from the handle bar post76during adjustment. As the handle bar428is lifted, the shoulder437of the end piece436contacts the stop432. Since the diameter of the shoulder437is larger than the inner diameter of the stop432(e.g., the sleeve), the stop432does not permit the handle bar428to be lifted further. Although the stop432is shown as a sleeve, there are many operable configurations, such as a protrusion created by stamping a dimple on the handle bar post76which protrudes into the inner diameter or other known technique to restrict the inner diameter of the handle bar post76. Turning now toFIG.28, the handle52can be seen being turned from the right to the left, as indicated by arrow444. The exerciser simply turns the handle52until the desired angular orientation is reached, and the pin receiver aligned with the pin422receives the pin422and locks the angular position. 
In this example, referring also toFIG.29, the handle52is rotated ninety degrees to reposition the handle bar428from pin receiver438″ to pin receiver438. Once aligned with pin receiver438, the handle bar428drops down, as indicated by arrow446, to position the pin receiver438atop the pin422; thus, locking the handle52and handle bar428in a new angular position. The exerciser can move the handle bar428from the raised position to the lowered position (i.e., changing the height of the handles52) by pulling the pull pin420to retract the pin422, providing clearance for the end piece436to pass the pin422and drop to the bottom426of the handle bar post76, where one or more of the pin receivers438,438′,438″,438′″ engages the lower pin424to similarly lock the angular position of the handle bar428in ninety degree increments (see alsoFIG.1). The lower pin424generally is secured to the bottom426of the handle bar post76, spanning the inner diameter. In a manner very similar to the upper pin422, the angular position of the handle52can be changed by lifting the handle52and repositioning the handle bar428until the lower pin424is engaged within one or more pin receivers438,438′,438″,438′″. The shoulder437includes a chamfered upper edge for permitting the handle bar428to transition from the lowered position to the raised position without manually pulling on the pull pin420. As the handle bar428is pulled up, the chamfered upper edge of the shoulder437of the end piece436strikes the pin422of the pull pin420, where the chamfered edge (or other slanted or rounded edge) pushes against the pin422, pushing the pin422into the pull pin420assembly, permitting the end piece436to pass the pin422. As soon as the end piece436passes the pin422, the spring loaded pin422immediately extends back into the interior of the handle bar post76to block the downward movement of the handle bar428. In this way, the exerciser can quickly transition and lock the handle52from the lowered position to the raised position, without having to operate the pull pin420. Often it is difficult for exercise studio staff and delivery staff to bring fully assembled exercise machines into a studio, as the assembled machine is heavy, bulky, long, and generally difficult to manipulate through tight corners and through stairs, etc. Yet, a disassembled machine is equally difficult for staff to assemble in place, as there are numerous parts and tight tolerances.FIG.30(and also referencingFIG.4) illustrates a novel means to easily ship and carry the present exercise device30, and easily assemble it at the studio. As discussed above, the exercise machine30is divided into assemblies (or sub-assemblies), primarily comprising the front end assembly38, the back end assembly36, and the carriage assembly34(which can, optionally, include the rope length adjustment assembly96). The side rails40,42and other miscellaneous parts can be packaged together or in separate boxes, as packaging requirements dictate. The mating faces448,450,452,454,456,458,460(and one hidden face) of the separate assemblies create a point where two mating assemblies can be fastened together easily. For example, mating face450of the front end assembly38is brought into alignment with the mating face448of the side rail40. As seen inFIG.4, fasteners106(three nut and bolt pairs in this example) can be tightened to a specified torque to fasten the front end assembly38to the side rail40, to create joint104. All the mating surfaces are similarly fastened to create the fully assembled exercise device30.
51,245
11857820
DETAILED DESCRIPTION OF THE INVENTION In various embodiments described in enabling detail herein, the inventor provides a unique isometric force resistive exercise tool that enables working various groups of upper body muscles at graduating force resistance levels by using a single hand-operated adjustment interface. A goal of the invention is to provide a resistance tool that may be used to work the hands, wrists, fingers, arms, and shoulders without device modifications. It is a further goal of the present invention to provide a method and apparatus of resistive force adjustment of an isometric exercise device that enables smooth granular graduation on a scale from little to no force resistance level to a maximum achievable force resistance level. A further goal of the present invention is to provide an isometric exercise device for working the various muscle groups described above that contains durable components resistive to wear and weathering. The present invention is described using the following examples, which may describe more than one relevant embodiment of the present invention. FIG.1Ais a side-elevation view of a grip and twist isometric force resistance device100according to an embodiment of the present invention. Grip and twist isometric resistance device100is a force resistant assembly comprising four basic components assembled to form the device. Isometric resistance device100may be referred to hereinafter in this specification as a twister grip assembly100. Twister grip assembly100is adapted as an elongated annular form and includes a left grip handle form101. Handle form101has a tapered and substantially hollowed body having a materially contiguous annular grip knob108at one end and a concentric piston form (not visible) at the opposite end. Grip twist assembly100includes a right handle form102having a like tapered and elongated, substantially hollow body as handle form101. Right handle form102includes a materially contiguous ring housing103open at the free end thereof and adapted to receive the piston form of left grip handle form101in a slip fit and concentric relationship. Left handle form101and right handle form102may be fabricated of a durable lightweight aluminum and crafted into form by machining process. Left handle form101and right handle form102are held together in assembly by a threaded longitudinal axle shaft (not visible). Right handle form102is open at the end opposite ring housing103and is adapted to receive the stem portion105of a force resistance adjustment knob104in a slip fit and concentric relationship. Adjustment knob104has a hollowed interior (bore not visible) including interior threading at the end of stem105that may be threaded onto the end of the axle shaft holding left handle form101to right handle form102. Adjustment knob104and stem105are, in this embodiment, materially contiguous and, like the handle forms, may be fabricated from a durable lightweight aluminum crafted into form by machine process. Although not visible in this view, the axle shaft extends through a central opening in the piston form of the left handle form and is welded to an annular pressure plate that is somewhat larger in diameter than the diameter of the opening at center of the piston form. Grip twist assembly100may include friction-resistive materials disposed within ring housing103and within the bore of left handle form101ahead of the pressure plate (internal components not visible). 
Twist grip assembly100includes a surface knurling106in this embodiment to aid in a no-slip grip of the respective handle forms101and102. The tapered handle forms101and102, when assembled, present an opposing taper having the largest diameter at ring housing103and tapering down to the left handle grip knob108at the end of handle form101and to the stem (105) receiving end of handle form102. Adjustment handle104may interface with a pair of industrial springs placed over the axle shaft and contained in the hollow longitudinal bore within handle form102along with a polymer sleeve and a polymer washer serving as a spring compression stop. In full assembly, left handle form101and right handle form102may be rotated against friction force that is fully adjustable by threading on or threading off adjustment handle104relative to the axle shaft. Stem105of adjustment handle104may include three or more annular grooves referred to herein as gauge rings107. Gauge rings107may be equally spaced apart and the distance from ring to ring may represent a threading travel distance relative to adjustment handle104being advanced over the external threading of the axle shaft. In this embodiment, an operator may turn the adjustment handle104clockwise to increase back pressure of a piston form face and the face of the pressure plate against the friction-resistive materials fixedly disposed at the bottoms of respective bores in each handle form. Turning adjustment handle104counterclockwise reduces back pressure against the resistive materials, allowing decompression of the industrial springs inside the assembly. An operator may grip the respective handle forms and may rotate them in opposite directions against a previously set resistance level visible by the travel distance of adjustment handle stem105into the receiving end of handle form102. The opposing taper or conical profile of the assembled handle forms provides a comfortable grip with gloves or bare hands. In use, an operator may set a resistance force using the adjustment handle104and perform repetitive grip and twist motions against the resistive friction force created by the back pressure urged by spring compression against a stop. In this embodiment, a user may make unlimited rotations in a same direction, on either the right or left side of the device. This is a marked improvement over devices known in the art, as most are limited to a single rotation in any direction before having to rotate in the opposite direction. The operator may vary the held position of twist grip tool or assembly100, for example working it horizontal to the operator's stance or vertical to the operator's stance, encompassing the shoulder muscle group as well as the forearms, biceps, wrists, and hands. An operator may start at a previously set level of small force resistance and adjust the tool to a next level of force resistance between repetitions. Adjustment handle104enables micro-granular levels of force resistance from zero to maximum force resistance where the industrial springs are at full compression (designed amount), which proportionally increases the friction resistance against the resistive materials within the tool. In one embodiment, twist grip assembly may be manufactured for different levels of strength by selecting a gauge for the industrial springs and/or shortening the length that the springs might be compressed. FIG.1Bis a right-end view of grip and twist device100ofFIG.1A. Grip twist assembly100is an annular form in this embodiment. 
An annular form is a preferred embodiment for both manufacturing and for ergonomic operation of the device. However, this should not be construed as a limitation to the practice of the present invention. The outer shell of the grip and twist assembly100may be shaped in other geometric forms without departing from the spirit and scope of the present invention. Ring housing103has the largest diameter of the grip twist assembly at approximately three inches, followed by the left handle grip knob108having approximately a two and three-eighths-inch diameter, which is the same diameter in this example as the highest point of right handle form102. Handle stem105is the smallest diameter of the outwardly visible features of grip twist assembly100at approximately one and one-eighth inches in diameter. All of the visible forms of grip twist assembly100are held in concentric relationships including the internal components described in more detail below. FIG.2is a one-half-sectioned view of grip and twist device100ofFIG.1A. Grip twist assembly100, sectioned, depicts right handle form102receiving a piston form210of left handle form101within the internal space of ring housing103of handle form102. Ring housing103may be a modular part, in one embodiment, that is fixed to handle form101, or welded to handle form102. Ring housing103may be materially contiguous with handle form102in a preferred embodiment. A friction resistive material211is provided and is disposed at the bottom of ring housing103. Friction resistive material211may be in the form of a rough fibrous material like nylon rope material or a solid form of a material that has frictional resistive properties. A centrally disposed axle shaft201is provided to hold the handle forms together in an assembly. Axle shaft201extends from a threaded connection to adjustment handle104(connected at stem105) through a central bore opening provided through center of the solid material features of the handle forms and into a larger bore space200that bottoms out some distance behind solid piston form210that interfaces with ring housing103. A smaller amount of a friction-resistive material209may be fixedly disposed around the bottom of bore200in the form of a ring of friction resistive fibrous material or solid form. In a preferred embodiment, resistive material211and resistive material209are the same material. However, that should not be construed as a limitation of the present invention. Axle shaft201extends through a disc form pressure plate208and may be welded to a backside of pressure plate208to stabilize the plate. Pressure plate208may be a disc form with an internal threading that may be threaded over axle shaft201to a position on the threads and then welded thereto. Bore200may be capped at the end of left handle form101using a plastic cap that may be snapped into the diameter of the bore. Similarly, a plastic end cap may be provided to cap the opposite center-bored end of the grip twist assembly100at the end of adjustment handle104. In this embodiment, a catch pin212is provided and pressed through axle shaft201presenting orthogonally to the longitudinal axis of axle shaft201. Catch pin212may be welded into place and has a length longer than the diameter of axle shaft201extending beyond the shaft on opposite sides. A catch pin slot213is provided at the bottom of ring housing103by machine process to a depth into the center opening for the axle shaft and of a length to fully secure the length of catch pin212. 
Catch pin slot213may capture catch pin212in order to secure the catch pin therein on both sides of the shaft and therefore lock axle shaft201to right handle form102in correct assembly of grip twist tool100, preventing handle form102from rotating about axle shaft201. A large diameter industrial spring204is provided and placed over axle shaft201and is contained within a center bore placed into right handle form102and bottoming out some distance before ring housing103. The center bore in right handle form102may be the same diameter as bore200in the left handle form101. A polyvinyl chloride (PVC) or nylon sleeve202is provided as a bore space filler material or spacer enabling more material to be removed from handle form102to reduce material weight in line with handle form101and center bore200. Smaller diameter industrial spring205may be placed over axle shaft201against flat nylon washer203abutted against the forward rim of nylon sleeve202. Larger diameter industrial spring204may be placed over both the axle shaft201and the smaller diameter spring205abutting against the same nylon washer203. In one embodiment, smaller diameter industrial spring205is longer than larger diameter spring204and, during force resistance adjustment, may be the first spring compressed for a specific distance before both springs are compressed. The open face of adjustment handle stem105abuts one of a pair of steel washers207placed over axle shaft201and sandwiching a flat bearing disc206. Industrial springs204and205may abut the first steel washer207with the smaller spring205being compressed against the washer before the larger spring204contacts the washer. In this view, both larger spring204and smaller spring205are in a state of compression due to clockwise advancement of adjustment handle104. In general, use of grip twist assembly100involves adjusting the level of force resistance characterized herein as an adjustable level of a resistive state of the assembly relative to force required to grip and twist the left and right handle forms in opposite directions. Adjustment handle104may be turned clockwise to increase this level of force resistance, or counterclockwise to reduce the level of force resistance. Placing the industrial springs204and205under compression using the adjustment handle104to advance over axle shaft201causes piston form210of left handle form101to compress against friction-resistive material211. At the same time, pressure plate208compresses against friction-resistive material209requiring more twist force to twist the respective handle forms relative to one another. FIG.3Ais an elevation view of axle shaft201of grip twist assembly100ofFIG.2. Axle shaft201has a section thereof threaded externally with threads302extending a distance from the end of the shaft inward. Threads302match female threading provided in the stem105of adjustment handle104(seeFIG.2). Catch pin212extends through axle shaft201and extends in pin length past the diameter of the axle shaft on both sides of the shaft. In one embodiment, axle shaft201includes an external thread pattern303at the end opposite the adjustment handle. In this embodiment, pressure plate208may have a female matching thread pattern and may be threaded onto the end of axle shaft201before being welded thereto by applying a weld cap301via a welding process. Although friction-resistive material209is depicted on axle shaft201adjacent to and abutting pressure plate208, the depiction is logical only. 
In actual practice the resistive material209is disposed at the bottom of the center bore space200of the left handle form101. In one embodiment, friction-resistive material209may be placed in a relatively shallow counterbore placed at the bottom center of the bore space and fixed therein by gluing the material in place, for example. In disassembly of the grip twist assembly, axle shaft201is removed from the left handle form without friction-resistive material209separating from the left handle form. FIG.3Bis a left-end view of axle shaft201ofFIG.3A. Pressure plate208may be about one and one-eighth inches in diameter and fits into the central bore (200,FIG.2) with a concentric tolerance of about a tenth of an inch between the edge of the pressure plate and the inside diameter of the bore space. Resistive material209may take up all of the bore diameter and is not illustrated in this view. Weld cap301may simply be two opposite tack welds holding pressure plate208to axle shaft201at the advanced position on external threads303. An operator may remove axle shaft201from the left handle form by completely detaching the adjustment handle from the opposite end. It may then be pulled out of the handle form through the open end of the bore space. A plastic cap may be provided to hide the open bore. Pressure plate208may be fabricated from steel or aluminum alloy without departing from the spirit and scope of the invention. FIG.4is a block diagram depicting mechanics of setting force resistance for grip and twist assembly100ofFIG.1A. Block diagram400depicts logical representations of the components of grip twist assembly100. Starting with a level of no resistance, an operator may turn adjustment handle104clockwise in the direction of the arrow to increase the force resistance of the assembly. Gauge rings107define a general distance A that adjustment handle104may travel on the external threads of axle shaft201. The industrial spring set introduced and described further above (FIG.2, springs204,205) is represented logically herein as spring set401(broken boundary). Travel distance A is roughly equal to the compression distance applied against spring set401. Spring set401may be compressed against a bearing component402(analogous to washers207and bearing plate206ofFIG.2, nylon washer203, and nylon sleeve202placed in the bore of the right handle form (102, not depicted)). A hard stop (HS) represents the bottom of the center bore. Bearing component402enables adjustment handle104to be turned easily with the same force even as compression against spring set401is increased. Distance B may represent the shortened length of spring set401in a maximum state of compression. Any state of compression of spring set401is translated to axle shaft201and causes equal pressure (EP) of pressure plate208acting against resistive material209disposed at the bottom of the center bore in the left handle form (FIG.2, handle form101, bore space200). Likewise, piston form210of the left handle form is caused to exert a proportional amount of pressure (P) against friction resistive material211disposed at the bottom of the ring housing of the right handle form (FIG.2, handle form102, ring housing103) against a hard stop (HS) representing the bottom surface of the ring housing. 
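As a rough sketch of the mechanics represented by block diagram400(all symbols below are illustrative assumptions rather than reference characters from the drawings): advancing the adjustment handle n turns on a thread of pitch p shortens the spring set by approximately x = n p. Because the smaller diameter spring is described as longer than the larger diameter spring, the resulting normal force is piecewise,

N(x) \approx k_{inner} \, x for x < \delta, and N(x) \approx k_{inner} \, x + k_{outer} (x - \delta) for x \ge \delta,

where k_{inner} and k_{outer} are the assumed rates of the two springs and \delta is their assumed difference in free length. That normal force presses the piston form and the pressure plate against the friction-resistive materials, so the twist resistance felt by the operator is on the order of \tau \approx \mu N r, with \mu the friction coefficient of the chosen resistive material and r the mean radius of the rubbing contact. This is consistent with the statement below that the achievable resistance level depends on the selected spring gauges, the resistive materials, and the available travel distance.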
The amount of force resistance set for the grip twist assembly references the level of twist resistance created by adjustment handle104compressing spring set401any amount along adjustment handle travel distance A, translating to compression distance A in spring set401. The level of force resistance created by turning adjustment handle104clockwise may depend somewhat upon the selected gauges of the springs in spring set401and somewhat on the friction-resistive attribute of the selected friction-resistive material(s) chosen for the assembly. In one embodiment, adjustment handle104may be temporarily locked in place on the external thread pattern of axle shaft201with a handle turn-lock mechanism (not illustrated) to prevent an undesired change in the force-resistance level set by the adjustment handle while working the grip twist assembly. One with skill in the art will recognize that the outer handle forms of a grip twist assembly like assembly100may be designed differently, that the overall length of such an assembly may be different, and further, that the overall amount of force resistance an assembly is capable of may be derived in part from materials selection for the spring set, selection of the resistive materials used, and in part from the travel/compression distance afforded in the adjustment handle relative to the axle shaft thread pattern length that may be navigated. Therefore, the grip twist isometric force-resistant exercise tool of the present invention may be provided in different models or designs with differentiating levels of capability relative to force resistance. Design metrics may include changing length of handle forms, changing diameter and taper metrics of handle forms, changing surface metrics of handle forms with respect to operator grip metrics, and so on. FIG.5is a side-elevation view of a grip and twist isometric force-resistance assembly500according to another embodiment of the invention. Grip and twist assembly500includes a left handle form501and a right handle form502with a ring housing503. Adjustment handle504, handle stem505, and ring gauge grooves507are analogous to counterparts ofFIG.1A. Left handle grip knob508is roughly the same diameter as adjustment handle104. The general profile is a straight non-tapered profile, and grip material509, like polyurethane sleeves, may be utilized for grip metrics over a knurled grip surface. Grip material509may also cover grip knob508and adjustment handle504, for example neoprene "no-mark" rubber. FIG.6is a perspective view of a grip and twist isometric force-resistance assembly600according to a further variant embodiment. Grip and twist assembly600includes a left handle form601and a right handle form602, the handle forms spaced apart over the axle shaft by a spacer disc or disc set (not visible). In this variant embodiment there is no ring housing on the right handle form and no piston form on the left handle form to interface. In this embodiment, there may be tandem pressure plate interfaces for the left handle form and the right handle form, with the pressure plates and friction-resistive materials hidden entirely within the respective handle forms. Adjustment handle604including adjustment handle stem605may be analogous to handle104and handle stem105ofFIG.1A. In this view, ring gauge grooves are not depicted on handle stem605but may be assumed present in some embodiments. 
In this variant design, the overall length of the grip twist assembly600is significantly shorter than that of the other depicted designs, focusing the operator on a placement of the hands closer together when exercising with the tool. Assembly600may be a product of a straight handle design with no taper, the handle forms generally being larger diameter forms than with other device models. In this design different muscle groups may be worked as a result of the much shorter design length and perhaps larger diameter handle forms. The outer surface of adjustment handle604may be knurled for improving grip. In this embodiment, a different grip enhancing pattern may be leveraged in substantially parallel ridges606provided in the outside surfaces of the handle forms over a section of, or over all of, the form surfaces. In this embodiment, the ring housing may have the same outside diameter as the right handle form602and may not be discernible from a vantage point outside the handle form. The piston form of left handle form601may also be sized in diameter to fit inside the ring housing on the right handle form. Friction resistance material may be disposed at the back of the ring housing on the right handle form. The pressure plate and friction resistance material in the left handle form may be analogous to that described inFIG.2, where in this case left handle form601includes a center bore at a similar or at a larger diameter. FIG.7is a side-elevation view of a grip and twist isometric resistance assembly according to a further variant embodiment. Grip and twist assembly700includes a left handle form701and a right handle form702with a ring housing703. In this embodiment left handle form701has no grip knob. In this straight design, all of the annular components have the same uniform outside diameter with the exception of ring housing703having a larger diameter. The general profile of grip twist assembly700is a straight non-tapered profile. Left handle form701, right handle form702, and adjustment handle knob704all include a knurl pattern to aid in a slip resistant grip by the operator. Ring housing703is a contiguous extension of right handle form702and receives a piston form (not visible) contiguous to the left handle form701. Adjustment handle704, including handle stem705and ring gauge grooves707, is analogous to the counterpart elements described in reference toFIG.1A. In this view, design radial grooves708are provided around the outer diameter surface of ring housing703for aesthetic purposes. It will be apparent to one with skill in the art that the grip twist isometric workout tool of the present invention may be provided using some or all of the elements described herein. The arrangement of elements and functionality thereof relative to the invention is described in different embodiments, each of which is an implementation of the present invention. While the uses and methods are described in enabling detail herein, it is to be noted that many alterations could be made in such details of construction or design and arrangement of the elements without departing from the spirit and scope of the present invention. The present invention is limited only by the breadth of the claims below.
24,036
11857821
DETAILED DESCRIPTION Before the present disclosure is described in greater detail, it should be noted herein that like elements are denoted by the same reference numerals throughout the disclosure. Referring toFIGS.1to6, a resistance training machine according to the first embodiment of the present disclosure is shown to comprise a base1, an operating device3, a resistance device4, and a meter device5. The resistance training machine can be provided with or without a seat2according to the actual requirement. In this embodiment, the seat2is disposed on the base1, and includes a seat portion21and a backrest portion22connected as one body. However, in other embodiments, to match the different resistance training methods, the seat portion21and the backrest portion22may be separately adjustable, or the seat2may only include the seat portion21. The structure of the seat2is not limited to the disclosed embodiment. Still in other embodiments, if the resistance training machine is not provided with the seat2, training is provided in a standing position only. The operating device3includes a foot plate assembly31and a handle assembly32that are disposed on the base1and that are spaced apart from and opposite to each other. When a user (not shown) is seated on the seat2, he can use his feet to push the foot plate assembly31away from the seat2. The handle assembly32includes two handles321located on two opposite sides of the seat2. When the user is seated on the seat2, he can use his hands to push the handles321away from the seat2. It should be noted that, if the resistance training machine is not provided with the seat2, the foot plate assembly31may be pushed backward by the user in a standing position, and the handles321may be pushed upward and downward, forward and rearward, or toward and away from each other by the user. The operating device3may further include a leg rest assembly (not shown) for the lower legs of the user to push up or down. As long as the user can reciprocate the operation, any form of resistance training falls within the scope of this disclosure. With reference toFIGS.3to6, in combination withFIG.1, the resistance device4includes a support plate6fixed to the base1, a first mounting seat7and a second mounting seat8fixed to the support plate6and adjacent to each other, a first rotating assembly41, a second rotating assembly42, a drive unit43, a transmission unit44, a sensing unit45, a control unit46, and a heat dissipating fan47. The first rotating assembly41is rotatably mounted on the first mounting seat7for rotation about an axis (L), and includes an annular ring411, a first rotating member412, an end plate413, a connecting plate416, and a shaft418. The annular ring411surrounds the axis (L), and is made of a magnetic conductive material, such as iron. The annular ring411has an outer peripheral edge connected to an outer periphery of the end plate413. In this embodiment, the first rotating member412is a rotating ring that has an outer peripheral edge connected to the end plate413, that has an outer peripheral surface abutting against an inner peripheral surface of the annular ring411, and that is made of a conductive material, such as aluminum, copper, aluminum alloy or copper alloy. The end plate413and the annular ring411cooperatively define a receiving space410. The connecting plate416is connected to the end plate413at a side opposite to the annular ring411and the first rotating member or ring412. 
The shaft418is connected to and extends outwardly from the connecting plate416along the axis (L), and is rotatably inserted through a top portion of the first mounting seat7for connection with the drive unit43. The second rotating assembly42is mounted on the second mounting seat8, and includes a second rotating member421, a plurality of magnets422, an end plate424, and a shaft425. In this embodiment, the second rotating member421is a rotating ring that has an outer peripheral edge connected to an outer periphery of the end plate424, that is disposed in the receiving space410, and that is not connected to the end plate413. Further, the second rotating member or ring421is spaced apart from the first rotating ring412. The magnets422of this embodiment are disposed on an outer peripheral surface of the second rotating ring421, and are arranged thereon at intervals around the axis (L). N and S poles of the magnets422are alternately arranged around the outer peripheral surface of the second rotating ring421, and face an inner peripheral surface of the first rotating ring412. Each magnet422is spaced apart from the first rotating ring412by a radial gap of, for example, 2 mm. The magnets422are strong magnets, for example, neodymium (NdFeB) magnets. The number of the magnets422used in this embodiment is twelve, but is not limited thereto, and may be increased or decreased according to the actual requirements. The shaft425extends inwardly from the end plate424along the axis (L), and is rotatably inserted through a pair of aligned holes81in the second mounting seat8for rotation about the axis (L). In other embodiments, the position of the annular ring411, the first rotating ring412and the second rotating ring421may be interchanged. That is, the second rotating ring421is located on the outermost side, and is subsequently followed by the first rotating ring412and the annular ring411. At this time, the second rotating ring421surrounds an outer peripheral surface of the first rotating ring412with an inner peripheral surface thereof facing the outer peripheral surface of the first rotating ring412, the magnets422are disposed on the inner peripheral surface of the second rotating ring421, and the first rotating ring412abuts against an outer peripheral surface of the annular ring411. The drive unit43may be connected to the first or second rotating assembly41,42. In this embodiment, the drive unit43connected to the first rotating assembly41will be described herein. The drive unit43includes a motor431mounted on the support plate6, a driven wheel432fixed to the shaft418, and a belt433wrapped around a pulley of the motor431and the driven wheel432. The motor431serves to rotate the driven wheel432through the belt433. The shaft418rotates together with the driven wheel432, and drives the annular ring411, the first rotating ring412and the end plate413to rotate therewith. The drive unit43is configured to receive a control signal, and is configured to drive rotation of the first rotating assembly41about the axis (L) according to the received control signal. During rotation of the first rotating assembly41relative to the second rotating assembly42, eddy currents are generated in the first rotating ring412through the relative rotation of the conductive first rotating ring412and the magnets422. Since the magnetic field generated by the eddy currents will cause the magnetic field generated by the magnets422to change, a tangential component force opposite to a moving direction is generated. 
Through this, a mutual braking resistance between the first and second rotating assemblies41,42is created. It should be noted that the eddy currents are generated by the fact that the magnetic field lines of the magnets422are cut when the conductive first rotating ring412is rotated. In this embodiment, since the annular ring411is a magnetic conductor and is located on a side of the first rotating ring412opposite to the magnets422, it can guide the magnetic field lines of the magnets422to concentrate and pass through the first rotating ring412, and improve the effect of cutting the magnetic field lines of the magnets422when the first rotating ring412is rotated, thereby increasing the magnitude of the eddy currents and the resistance generated. Alternatively, the first rotating assembly41may not include the annular ring411, and can still achieve the effect of generating eddy currents and resistance. With reference toFIGS.2to5, the transmission unit44may be connected to the first or second rotating assembly41,42. In this embodiment, the transmission unit44connected to the second rotating assembly42will be described herein. In other embodiments, the positions of the first and second rotating assemblies41,42may be interchanged such that the first rotating assembly41is connected to the transmission unit44, while the second rotating assembly42is connected to the drive unit43. The transmission unit44includes a transmission wheel441sleeved fixedly on the shaft425, a transmission belt442wound on the transmission wheel441, and a restoring member443connected to the transmission wheel441. The transmission belt442has one end connected to the transmission wheel441, and the other end connected to the foot plate assembly31and the handle assembly32of the operating device3and can be pulled out of the transmission wheel441by the foot plate assembly31or the handle assembly32so as to drive the transmission wheel441to rotate, which in turn, drives the second rotating assembly42to rotate therewith. The restoring member443provides a restoring force to drive the transmission wheel441to wind back the transmission belt442. The restoring member443may be a volute spiral spring. When the user pushes the foot plate assembly31or the handle assembly32away from the seat2, the restoring member443stores a restoring force for moving the foot plate assembly31or the handle assembly32close to the seat2via the transmission belt442. The user needs only to stop applying force after pushing the foot plate assembly31or the handle assembly32to release the stored restoring force of the restoring member443, and the foot plate assembly31or the handle assembly32will automatically move close to the seat2through the restoring member443. Hence, the foot plate assembly31or the handle assembly32can be repeatedly moved away from and close to the seat2through the restoring member443. The sensing unit45includes a light interrupting disc451fixedly connected to the shaft425of the second rotating assembly42, and a photo interrupter fixed to a top portion of the second mounting seat8and corresponding to the light interrupting disc451. The light interrupting disc451rotates together with the shaft425when the shaft425is driven by the transmission unit44to rotate, so that the light interrupting disc451is coaxial with the transmission wheel441and rotates at the same speed with the same. 
The photo interrupter452is used for sensing the rotation of the light interrupting disc451to know the pulling length, the speed and the number of times of repeated pulling of the transmission belt442. In other embodiments, the light interrupting disc451may have an axis or speed different from that of the transmission wheel441, and may rotate relative to the transmission wheel441at a predetermined speed ratio through a set of connecting elements (not shown, for example, several gears). In such case, the sensed amount of rotation of the light interrupting disc451can be obtained according to the predetermined rotation speed ratio, and the pulling length of the transmission belt442and the speed and the number of times of repeated pulling of the transmission belt442can be calculated. The sensing unit45further includes a speed sensor453(seeFIG.4) fixed to the first mounting seat7and proximate to the drive unit43for sensing the rotational speed of the driven wheel432of the drive unit43. The control unit46is disposed on the support plate6, and is communicably connected to the drive unit43for sending a control signal thereto. Through this, the rotational speed of the drive unit43for driving rotation of the first rotating assembly41can be controlled and adjusted, thereby adjusting the magnitude of the eddy current resistance between the first and second rotating assemblies41,42. It should be noted herein that, because the rotational speed of the first rotating assembly41driven by the drive unit43is higher than the rotational speed of the second rotating assembly42driven by manpower, the magnitude of resistance is mainly determined by the rotational speed of the first rotating assembly41, and is mainly controlled by the control unit46. Through this, only by setting the operating program of the control unit46, the rotational speed of the first rotating assembly41can be adjusted according to the user's setting to change the magnitude of the resistance. The heat dissipating fan47is disposed on the support plate6in proximity to the first and second rotating assemblies41,42for dissipating heat generated by the same. When the first and second rotating assemblies41,42rotate at a relatively high speed relative to each other, they will generate heat, so that the magnetic force of the magnets422will be reduced, thereby affecting the generation of the eddy currents and the magnitude of resistance. Therefore, through the provision of the heat dissipating fan47, heat dissipation can be enhanced to help maintain a stable resistance. The meter device5is communicably connected to the photo interrupter452and the speed sensor453, and is used to display sensing information for the user's reference, for example, the information obtained by the photo interrupter452by sensing the amount of rotation of the light interrupting disc451, such as the pulling length, the speed and the repeated pulling times of the transmission belt442, or the rotational speed information sensed by the speed sensor453. The information displayed on the meter device5can be used as a reference for the user during resistance training, so that the convenience of use can be improved, and at the same time, a suitable training plan can be made through the above information, thereby improving the effectiveness of training. Additionally, in this embodiment, the control signal sent by the control unit46and the related information sensed by the sensing unit45are both transmitted and integrated through digital signals. 
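To make the sensing description concrete, the sketch below shows one way pulse counts from a photo interrupter watching a slotted disc could be converted into the pulled belt length, the peak pull speed, and the repetition count shown on the meter device. The pulses-per-revolution, transmission-wheel circumference, sample interval, and repetition threshold are assumed values for illustration only and are not taken from the disclosure.

```python
def belt_length_m(pulses, pulses_per_rev=36, wheel_circumference_m=0.25):
    """Belt travel implied by a pulse count from the light interrupting disc,
    assuming the disc turns 1:1 with the transmission wheel as described."""
    return (pulses / pulses_per_rev) * wheel_circumference_m


def summarize_pulls(pulse_samples, sample_interval_s=0.1, rep_threshold_m=0.05):
    """pulse_samples: signed pulse counts per sample (+ = belt pulled out,
    - = belt wound back by the restoring member).  Returns total pulled
    length (m), peak pull speed (m/s), and completed repetition count."""
    total_out, peak_speed, reps = 0.0, 0.0, 0
    pulling = False
    for pulses in pulse_samples:
        distance = belt_length_m(abs(pulses))
        peak_speed = max(peak_speed, distance / sample_interval_s)
        if pulses > 0:
            total_out += distance
            pulling = True
        elif pulses < 0 and pulling and distance > rep_threshold_m:
            reps += 1            # a return stroke closes one pull/return cycle
            pulling = False
    return total_out, peak_speed, reps


# Example: three pulls of the foot plate, each followed by a return stroke.
samples = [40, 60, -50, -50, 45, 55, -60, -40, 50, 50, -55, -45]
print(summarize_pulls(samples))
```

If the interrupting disc instead rotated at a fixed ratio to the transmission wheel through connecting gears, as the text allows, the same computation would apply after dividing the pulse count by that predetermined speed ratio.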
However, in other embodiments, the control unit46may be integrated with the meter device5, for example, through a control panel (not shown) for the user to monitor the information and adjust the resistance, or by communicably connecting the control unit46and the sensing unit45to a remote software program which may be, for example, a mobile phone APP, so that functions, such as monitoring of the information and adjusting of the resistance, may be performed through a mobile phone, thereby further improving the convenience of use of this embodiment. Referring toFIGS.7to9, the second embodiment of the resistance training machine of this disclosure is substantially identical to the first embodiment. Particularly, the resistance training machine includes the base1, the seat2, the operating device3, the resistance device4, and the meter device5. However, the end plate413(seeFIGS.3and5) of the first rotating assembly41of the resistance device4of the first embodiment is replaced with a magnetic conductive disc414in this embodiment, and the first rotating member412′ of this embodiment is a conductive disc made of a material, such as aluminum, copper, aluminum alloy or copper alloy. The first rotating member or conductive disc412′ has a first surface and a second surface opposite to each other along the axis (L). The magnetic conductive disc414has one side connected to the connecting plate416, and the other opposite side adhered to the first surface of the conductive disc412′. The magnetic conductive disc414can be made of iron. Further, in this embodiment, the second rotating member421′ of the second rotating assembly42is a disc-shaped magnet holder having a surface that faces the second surface of the conductive disc412′ and that is formed with a plurality of angularly spaced-apart circular grooves420surrounding the axis (L). The magnets422are respectively disposed in the circular grooves420. The N and S poles of the magnets422are alternately arranged around the surface of the second rotating ring or magnet holder421′, and face the conductive disc412′. Each magnet422is spaced apart from the conductive disc412′ by a radial gap of, for example, 2 mm. The drive unit43can similarly drive the first rotating assembly41to rotate about the axis (L) and generate eddy currents in the conductive disc412′ through the relative rotation of the conductive disc412′ and the magnets422. Hence, through the change of the magnetic field, a mutual braking resistance between the conductive disc412′ and the second rotating member42is created. In the second embodiment, because the magnetic conductive disc414is located on the first surface of the conductive disc412′ and is opposite to the magnets422, it can guide the magnetic field lines of the magnets422to concentrate and pass through the conductive disc412′, thereby improving the effect of cutting the magnetic field lines of the magnets422when the conductive disc412′ rotates, and thereby increasing the magnitude of the generated eddy currents and resistance. In other embodiments, the first rotating assembly41may not include the magnetic conductive disc414, and can still achieve the effect of generating eddy currents and resistance. Through an actual test of the second embodiment, when the speed of the motor431of the drive unit43is 357 rpm (revolution per minute), the resistance of the transmission belt442is 9.2 kgs.; and when the speed of the motor431is 2051 rpm, the resistance of the transmission belt442is 26.2 kgs. 
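Using only the two operating points reported above (357 rpm giving about 9.2 kg of belt resistance and 2051 rpm giving about 26.2 kg), a control program could roughly interpolate between them. The linear relationship assumed below, and the helper functions themselves, are illustrative only; the disclosure gives just these two test points and does not state how resistance scales in between.

```python
# Measured test points reported for the second embodiment.
RPM_LO, KG_LO = 357.0, 9.2
RPM_HI, KG_HI = 2051.0, 26.2
SLOPE = (KG_HI - KG_LO) / (RPM_HI - RPM_LO)   # kg of belt resistance per rpm


def resistance_for_rpm(rpm):
    """Linearly interpolate belt resistance (kg) between the two test points."""
    return KG_LO + SLOPE * (rpm - RPM_LO)


def rpm_for_resistance(target_kg):
    """Invert the interpolation: the motor speed the control unit would command
    for a requested resistance, under the same linearity assumption."""
    return RPM_LO + (target_kg - KG_LO) / SLOPE


print(round(resistance_for_rpm(1200), 1))   # ~17.7 kg of resistance at 1200 rpm
print(round(rpm_for_resistance(20.0)))      # ~1433 rpm commanded for 20 kg
```

A production control unit would likely use more calibration points or closed-loop feedback from the speed sensor, but the inversion above captures the basic idea of mapping a user-selected resistance to a motor-speed control signal.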
Thus, the resistance device4of this disclosure can indeed achieve the effect of providing resistance by driving the first rotating assembly41to rotate through the drive unit43. At the same time, the resistance can be adjusted by adjusting the rotational speed of the first rotating assembly41driven by the drive unit43. Through the aforesaid description, the advantages of this disclosure can be summarized as follows: 1. According to the received control signal, the drive unit43can drive rotation of the first rotating assembly41(or the second rotating assembly42), so that the conductive first rotating member412,412′ and the second rotating member421,421′ provided with the magnets422can rotate relative to each other, and eddy currents are generated in the first rotating member412,412′, which in turn, provides resistance to the transmission unit44. The resistance device4can control the rotational speed of the first rotating assembly41(or the second rotating assembly42) through the control signal so as to provide a continuous and stable resistance, and can adjust the magnitude of resistance during the training process according to the actual requirements, thereby improving the convenience of use of this disclosure and simultaneously reducing the sound generated during training. 2. With the provision of the resistance device4, and by connecting the transmission unit44to the operating device3, the transmission unit44can provide resistance to operation of the operating device3, so that the user can perform resistance training by operating the operating device3. At the same time, according to the training needs (for example, anaerobic muscle strength training for specific muscle groups, such as hands, legs, back or chest, or lower intensity resistance training used in the field of rehabilitation medicine) in coordination with different forms of the operating device3, a good training effect can be obtained. 3. With the provision of the sensing unit45and the meter device5for the user to monitor information, the convenience of use of this disclosure can be improved. At the same time, the above information can be used to further plan a suitable training program to improve the effectiveness of training. 4. With the provision of the control unit46, the rotational speed of the drive unit43for driving rotation of the first rotating assembly41(or the second rotating assembly42) can be controlled, so that the control unit46can adjust the rotational speed of the drive unit43according to the setting of the user so as to change the magnitude of resistance. Hence, the convenience of use of this disclosure can be further improved. 5. By integrating digital signals, the control unit46and the meter device5can be integrated into a control panel, or by connecting the signals of the control unit46and the sensing unit45to a remote software program (such as a mobile phone APP), the information can be monitored and the resistance can be adjusted at the same time, thereby further improving the convenience of use of this disclosure. 6. With the provision of the heat dissipating fan47, the heat dissipation effect of this disclosure can be achieved, thereby assisting the resistance device4to provide the user with continuous and stable resistance and to avoid the problem of decreasing the resistance when overheated and affecting the training effect. In the description above, for the purposes of explanation, numerous specific details have been set forth in order to provide a thorough understanding of the embodiment (s). 
It will be apparent, however, to one skilled in the art, that one or more other embodiments may be practiced without some of these specific details. It should also be appreciated that reference throughout this specification to “one embodiment,” “an embodiment,” an embodiment with an indication of an ordinal number and so forth means that a particular feature, structure, or characteristic may be included in the practice of the disclosure. It should be further appreciated that in the description, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of various inventive aspects, and that one or more features or specific details from one embodiment may be practiced together with one or more features or specific details from another embodiment, where appropriate, in the practice of the disclosure. While the disclosure has been described in connection with what are considered the exemplary embodiments, it is understood that this disclosure is not limited to the disclosed embodiments but is intended to cover various arrangements included within the spirit and scope of the broadest interpretation so as to encompass all such modifications and equivalent arrangements.
22,274
11857822
DESCRIPTION OF THE PREFERRED EMBODIMENTS Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings. FIG.1illustrates a reciprocating transmission structure of an exercise machine according to an embodiment of the present invention. The reciprocating transmission structure2is mounted to a frame1of the exercise machine. In the embodiment of the present invention, the exercise machine is an exercise bike. The frame1may be provided with a seat11, a grip12, a screen13, and so on. Referring toFIG.1,FIG.1A,FIG.2andFIG.3, the reciprocating transmission structure2of the exercise machine comprises a first reciprocating member21, a second reciprocating member22, a main rotating wheel23, and a first link24, a second link25, a resistance wheel26, a first rotating wheel27, a second rotating wheel28, and a third rotating wheel29. The screen13is configured to display the parameters, such as the user's exercise time, the resistance of the resistance wheel26, and so on. The screen13may be a touch screen, allowing the user to directly manipulate the screen13for adjusting the resistance of the resistance wheel26, etc. However, this part is not the improved feature of the present invention, so this part is not described in detail. The first reciprocating member21includes a first reciprocating pivot portion211and a first force-receiving portion212that are located away from each other. The first reciprocating pivot portion211is pivotally connected to the frame1. A first pivot portion213is provided between the first reciprocating pivot portion211and the first force-receiving portion212. The second reciprocating member22includes a second reciprocating pivot portion221and a second force-receiving portion222that are located away from each other. The second reciprocating pivot portion221is pivotally connected to the frame1. A second pivot portion223is provided between the second reciprocating pivot portion221and the second force-receiving portion222. The first force-receiving portion212includes a first operating member2121. The first operating member2121is pivotally connected to the first reciprocating member21. The second force-receiving portion222includes a second operating member2221. The second operating member2221is pivotally connected to the second reciprocating member22. The main rotating wheel23is pivotally connected to the frame1through a rotating shaft231. The main rotating wheel23includes a first face232and an opposing second face233. The first face232is provided with a first face pivot portion2321at a position away from the rotating shaft231. The second face233is provided with a second face pivot portion2331at a position away from the rotating shaft231. A first end of the rotating shaft231extends out of the first face232. The first end is secured to one end of a first arm234. The first arm234extends in the radial direction of the main rotating wheel23. The other end of the first arm234is formed with the first face pivot portion2321. A second end of the rotating shaft231extends out of the second face233. The second end is secured to one end of a second arm235. The second arm235extends in the radial direction of the main rotating wheel23. The other end of the second arm235is formed with the second face pivot portion2331. An included angle between the first arm234and the second arm235in the radial direction of the main rotating wheel23is between 0° and 180°. In the embodiment of the present invention, the included angle is 180 degrees. 
In actual implementation, it may be adjusted to 0 degrees or other angles to meet the needs of a different user (the user is not shown in the drawing). One end of the first link24is pivotally connected to the first pivot portion213, and the other end of the first link24is directly or indirectly pivoted to the first face pivot portion2321. One end of the second link25is pivotally connected to the second pivot portion223, and the other end of the second link25is directly or indirectly pivoted to the second face pivot portion2331. The resistance wheel26is pivotally connected to the frame1. The resistance wheel26is directly or indirectly driven by the main rotating wheel23. The first rotating wheel27and the second rotating wheel28are coaxially pivoted to the frame1. The main rotating wheel23drives the first rotating wheel27through a first belt271. The second rotating wheel28drives the third rotating wheel29through a second belt281. The third rotating wheel29and the resistance wheel26are coaxially pivoted to the frame1. The resistance wheel26includes a resistance member261. The resistance member261is a magnetic resistance member or a friction member. The resistance member261acts on the resistance wheel26to generate a resistance. Through the magnetic resistance member or the friction member, the adjustment for a desired resistance is easier. In actual implementation, the resistance wheel26may be a wind resistance wheel. Referring toFIG.1,FIG.4andFIG.5, when in use, the user rides on the seat11, grasps the grip12with both hands, and extends both feet into the first operating member2121and the second operating member2221. Since the first operating member2121and the second operating member2221are foot pedals with straps, the tightness of the straps can be adjusted according to the user's feet. This is convenient for the user to operate. If the user feels uncomfortable, the frame can be adjusted to change the height of the seat11so that the user can exercise comfortably. After all adjustments are completed, the user's both feet can apply force on the first operating member2121of the first reciprocating member21and the second operating member2221of the second reciprocating member22in reverse. The first reciprocating member21is located on the right side of the user, and the second reciprocating member22is located on the left side of the user. Assuming that in the original state (that is, when the user has not exerted any force as shown inFIG.2andFIG.3), the first operating member2121is lower than the second operating member2221. After the user exerts force, the second operating member2221is pressed down by the user's left foot, and the first operating member2121is driven up by the user's right foot. When the first operating member2121is driven up by the user, the first reciprocating member21will reciprocate upward with the first reciprocating pivot portion211as its axis. After the first reciprocating member21moves upward, the first pivot portion213is also driven upward. After the first link24pivotally connected to the first pivot portion213is driven by the first pivot portion213, the first link24drives the first face pivot portion2321to approach the first reciprocating member21, so that the main rotating wheel23is driven by the first arm234to rotate with the rotating shaft231as its axis. Because the main rotating wheel23drives the first rotating wheel27through the first belt271, after the main rotating wheel23rotates, the first rotating wheel27also rotates. 
When the second operating member2221is pressed down, the second reciprocating member22will reciprocate downward with the second reciprocating pivot portion221as its axis. After the second reciprocating member22moves downward, the second pivot portion223is also driven downward. After the second link25pivotally connected to the second pivot portion223is driven by the second pivot portion223, the second link25drives the second face pivot portion2331to move away from the second reciprocating member22. The main rotating wheel23is driven by the second arm235to increase the power to rotate with the rotating shaft231as its axis. Since the first rotating wheel27and the second rotating wheel28are coaxially pivoted to the frame1, when the first rotating wheel27rotates, the second rotating wheel28also rotates synchronously. Referring toFIG.5andFIG.6, because the second rotating wheel28drives the third rotating wheel29through the second belt281, after the second rotating wheel28rotates, the third rotating wheel29will also rotate. Because the third rotating wheel29and the resistance wheel26are coaxially pivoted to the frame1, when the third rotating wheel29rotates, the resistance wheel26also rotates synchronously, so that the user can enhance the intensity of exercise through the resistance provided by the resistance wheel26. Please refer toFIG.2andFIG.3again. The user exerts force once again. The first operating member2121is pressed down by the user's right foot, and the second operating member2221is driven up by the user's left foot. Therefore, the first reciprocating member21moves downward, and the second reciprocating member22moves upward. After the first reciprocating member21moves downward, the first link24is driven by the first pivot portion213, and the first link24drives the first face pivot portion2321to move toward the first reciprocating member21, so that the main rotating wheel23is driven by the first arm234to rotate with the rotating shaft231as its axis. After the second reciprocating member22moves upward, the second link25is driven by the second pivot portion223, and the second link25drives the second face pivot portion2331to approach the second reciprocating member22. The main rotating wheel23is driven by the second arm235to increase the power to rotate with the rotating shaft231as its axis. After the main rotating wheel23rotates, the first rotating wheel27, the second rotating wheel28and the third rotating wheel29drive the resistance wheel26to rotate, so that the user can enhance the intensity of exercise through the resistance provided by the resistance wheel26. Please refer toFIG.1,FIG.2andFIG.3again. No matter whether the resistance wheel26includes the resistance member261or the resistance wheel26is directly the wind resistance wheel, it can provide the user with sufficient resistance for the user to choose a desired resistance according to his/her needs. Besides, because the resistance wheel26is provided, after the user operates the first reciprocating member21and the second reciprocating member22for reciprocating movement, through the coordinated transmission of the components of the reciprocating transmission structure2of the exercise machine, the resistance wheel26is driven to rotate, thereby enhancing the intensity of exercise and satisfying the user who needs a stronger intensity of exercise. 
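Since the drive from the main rotating wheel23to the resistance wheel26passes through two belt stages (wheel23to wheel27, with wheel28coaxial to27, and wheel28to wheel29, with the resistance wheel26coaxial to29), the speed seen at the resistance wheel is the pedal-driven speed multiplied by the product of the two stage ratios. The wheel diameters in the sketch below are hypothetical, chosen only to show the arithmetic; no dimensions are given in the disclosure.

```python
def resistance_wheel_rpm(main_wheel_rpm,
                         d_main=300.0,    # mm, main rotating wheel 23 (assumed)
                         d_first=60.0,    # mm, first rotating wheel 27 (assumed)
                         d_second=200.0,  # mm, second rotating wheel 28 (assumed)
                         d_third=50.0):   # mm, third rotating wheel 29 (assumed)
    """Two belt stages: 23 -> 27 (28 coaxial with 27) and 28 -> 29 (resistance
    wheel 26 coaxial with 29).  Each stage multiplies speed by the
    driving/driven diameter ratio."""
    stage1 = d_main / d_first     # speed-up from wheel 23 to wheels 27/28
    stage2 = d_second / d_third   # speed-up from wheel 28 to wheels 29/26
    return main_wheel_rpm * stage1 * stage2


# With the assumed diameters, 60 rpm at the main rotating wheel spins the
# resistance wheel at 60 * 5 * 4 = 1200 rpm.
print(resistance_wheel_rpm(60.0))
```

The large overall speed-up is what lets a modest reciprocating pedal motion spin the resistance wheel fast enough for a friction, magnetic, or wind resistance element to provide a meaningful training load.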
Since the first reciprocating member21and the second reciprocating member22perform reciprocating movement and drive the main rotating wheel23to rotate through the reciprocating movement, the exercise machine is suitable for performing exercises related to reciprocating movement, such as reciprocating leg exercise or reciprocating arm exercise. In particular, the aforementioned reciprocating movement is transmitted to the main rotating wheel23. The reciprocating movement is converted into a rotary motion. Then, the main rotating wheel23transmits power to the resistance wheel26. The resistance of the resistance wheel26can be easily controlled by means of friction resistance, magnetic resistance, wind resistance, etc., making the operation easier. In the above embodiment, when the included angle between the first arm234and the second arm235is 0 degrees, the first reciprocating member21and the second reciprocating member22reciprocate synchronously. When the included angle between the first arm234and the second arm235is 180 degrees, the first reciprocating member21and the second reciprocating member22reciprocate alternately. When the included angle between the first arm234and the second arm235is greater than 0 degrees and less than 180 degrees, the first reciprocating member21and the second reciprocating member22reciprocate alternately and asymmetrically. The exercise machine of the above-mentioned embodiments is an exercise bike as an example, but the present invention may be applied to other types of exercise machines. Although particular embodiments of the present invention have been described in detail for purposes of illustration, various modifications and enhancements may be made without departing from the spirit and scope of the present invention. Accordingly, the present invention is not to be limited except as by the appended claims.
12,314
11857823
DESCRIPTION OF EXEMPLARY EMBODIMENTS AND BEST MODE The present invention is described more fully hereinafter with reference to the accompanying drawings, in which one or more exemplary embodiments of the invention are shown. Like numbers used herein refer to like elements throughout. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be operative, enabling, and complete. Accordingly, the particular arrangements disclosed are meant to be illustrative only and not limiting as to the scope of the invention, which is to be given the full breadth of the appended claims and any and all equivalents thereof. Moreover, many embodiments, such as adaptations, variations, modifications, and equivalent arrangements, will be implicitly disclosed by the embodiments described herein and fall within the scope of the present invention. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation. Unless otherwise expressly defined herein, such terms are intended to be given their broad ordinary and customary meaning not inconsistent with that applicable in the relevant industry and without restriction to any specific embodiment hereinafter described. As used herein, the article “a” is intended to include one or more items. Where only one item is intended, the term “one”, “single”, or similar language is used. When used herein to join a list of items, the term “or” denotes at least one of the items, but does not exclude a plurality of items of the list. For exemplary methods or processes of the invention, the sequence and/or arrangement of steps described herein are illustrative and not restrictive. Accordingly, it should be understood that, although steps of various processes or methods may be shown and described as being in a sequence or temporal arrangement, the steps of any such processes or methods are not limited to being carried out in any particular sequence or arrangement, absent an indication otherwise. Indeed, the steps in such processes or methods generally may be carried out in various different sequences and arrangements while still falling within the scope of the present invention. Additionally, any references to advantages, benefits, unexpected results, or operability of the present invention are not intended as an affirmation that the invention has been previously reduced to practice or that any testing has been performed. Likewise, unless stated otherwise, use of verbs in the past tense (present perfect or preterit) is not intended to indicate or imply that the invention has been previously reduced to practice or that any testing has been performed. Referring now specifically to the drawings, a flying disc training device according to one embodiment of the present disclosure is illustrated inFIG.1and shown generally at broad reference numeral10. The exemplary training device10incorporates a rigid training disc11for being gripped by a user, a nylon swivel strap12attached to the training disc11, and an exchangeable elastic resistance band14. The resistance band14includes a first nylon band coupler15and metal carabiner16which releasably attaches to the swivel strap12. A free end of the resistance band14comprises a second fabric band coupler18and carabiner19. The carabiner19is adapted for being releasably attached to a selected anchor strap21. 
As demonstrated inFIGS.7-10and discussed further below, the selected anchor strap21is designed for being temporarily secured to a fixed structure when exercising using the exemplary training device10. A separate flexible safety wrist tether22is releasably attached to the swivel strap12, and may have an adjustable wrist strap23to custom fit the particular user. Used properly in a prescribed routine, such as the exemplary "6 Week Workout" outlined in Table 1, the present training device10may function to improve release velocity, distance and accuracy when throwing a flying disc.
TABLE 1 (exemplary 6 Week Workout; Days 2 and 4 of each week are rest days)
Week 1, Days 1, 3 and 5: Speed Workout - 2 sets, Yellow Band, 10-12 reps.
Week 2, Days 1, 3 and 5: Speed Workout - 3 sets, Yellow Band, 10-12 reps; on Day 5, add 1 set, Green Band, 10-12 reps.
Week 3, Days 1, 3 and 5: Speed Workout - 3 sets, Yellow Band, plus 1 set, Green Band, 10-12 reps.
Week 4, Days 1, 3 and 5: Release Angle Workout - 2 sets, Yellow Band, 8-10 reps; plus Isometric Hold, Yellow & Green Band - 2 sets Position 1 and 2 sets Position #2, held for 5 seconds each set on Day 1 and for 7 seconds each set on Days 3 and 5.
Week 5, Days 1, 3 and 5: Elbow Pull Workout - 2 sets, Yellow Band, 8-10 reps; plus Isometric Hold, Yellow & Green Band - 2 sets Position 1 and 2 sets Position #2, held for 5 seconds after the muscle starts to shake.
Week 6, Days 1, 3 and 5: Speed Workout - 2 sets, Green Band, 8-10 reps; plus Isometric Hold, Yellow & Green Band - 2 sets Position 1 and 2 sets Position #2, held for 5 seconds after the muscle starts to shake.
Referring toFIGS.2-6, the exemplary disc11of training device10is fabricated of a molded homogenous plastic, and comprises a centerpoint31, top side32, bottom side33and a thick annular rim34. As best shown inFIGS.3and6, first and second ends of the nylon swivel strap12A,12B are folded at the centerpoint31of the disc11, and define respective fastener holes35,36aligned on top and bottom sides32,33of the disc11with a centerpoint disc hole38. An integrally-molded annular reinforcement collar39is located at the centerpoint disc hole38and projects from the bottom side33of the disc11. A cylindrical metal sleeve bearing41resides within the cylindrical opening defined by the reinforcement collar39at the centerpoint disc hole38. 
With the sleeve bearing41closely assembled inside the reinforcement collar39and the fastener holes35,36of swivel strap12properly aligned with hole38, complementary-threaded male and female fasteners44,45are inserted through the swivel strap12and sleeve bearing41on respective top and bottom sides32,33of the disc11. The fasteners44,45are tightened using a screwdriver or other conventional tool. As shown inFIG.6, the sleeve bearing41is slightly longer than a combined depth and thickness of the disc11and reinforcement collar39, thereby allowing the swivel strap12to freely pivot relative to the disc11. The exemplary swivel strap12forms an intermediate loop12C which extends beyond the annular rim34of the disc11and creates an attachment point for the elastic resistance band14. In one exemplary embodiment, the present training disc11is between 20-23 cm in diameter and weighs between 200 and 300 grams. The outer rim34of the training disc may have a beveled edge and a maximum thickness of between 2-4 cm, while the thin body portion of the disc inside the outer rim may have a thickness no greater than 1 cm. The exemplary disc11may be custom designed and molded for incorporating in the present training device10. In exemplary embodiments, the present training device10incorporates one or multiple elastic resistance bands14, e.g., 5 pound and/or 8 pound bands. SeeFIG.11. The resistance band14creates tension and stability in the torso throughout the disc golf throwing motion, while engaging multiple stabilizer muscles and enhancing coordination and balance. The exemplary resistance bands14are fabricated of double dipped, heavy-duty tubular latex, and may offer many different predetermined levels of resistance. The exemplary bands14are color-coded to indicate the different resistance levels. For example, green bands may indicate resistance of 5 to 8 pounds, red bands may indicate resistance of 8 to 12 pounds, blue bands may indicate resistance of 12 to 16 pounds, black bands may indicate resistance of 16 to 20 pounds, purple bands may indicate resistance of 20 to 30 pounds, navy bands may indicate resistance of 30 to 40 pounds, and brown bands may indicate resistance of 40 to 50 pounds (this color coding is summarized in the lookup sketch following this passage). Each resistance band is 122 cm (48 inches) long and lightweight. FIGS.7-10demonstrate exemplary uses of the present training device10. InFIGS.7and8, the detachable anchor strap21comprises a door anchor51designed for being wedged between a door and door frame for indoor training. The height of the door anchor51(and resistance band14) should be set just above the elbow, and the safety wrist strap23applied and adjusted to the particular user. In use, the training disc11is moved by the user inline with a notional plane (trajectory) of the opposing force created by the tensioned resistance band14, while also being inline with an imaginary target line of the disc golf throw.FIGS.9and10illustrate an alternative anchor strap61applicable for securing the training device10to a tree, utility pole, fence post, or other upright structure. The exemplary anchor strap61(or "tree strap") comprises elongated flexible nylon webbing designed to wrap around the tree and having a first looped end62(or D-ring) through which a second looped end63extends. The carabiner19of fabric coupler18releasably attaches to the looped end63to temporarily secure the resistance band14to the tree. 
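The color coding listed above maps naturally onto a small lookup table. The sketch below simply encodes those ranges; the helper name and the (min, max) pounds return convention are illustrative choices, not part of the disclosure.

```python
# Resistance ranges in pounds keyed by band color, as listed in the passage above.
BAND_RESISTANCE_LB = {
    "green":  (5, 8),
    "red":    (8, 12),
    "blue":   (12, 16),
    "black":  (16, 20),
    "purple": (20, 30),
    "navy":   (30, 40),
    "brown":  (40, 50),
}


def band_resistance(color):
    """Return the (min, max) resistance in pounds for a color-coded band."""
    try:
        return BAND_RESISTANCE_LB[color.lower()]
    except KeyError:
        raise ValueError(f"unknown band color: {color!r}") from None


print(band_resistance("Green"))   # (5, 8)
```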
As with indoor training, using a throwing motion, the user moves the training disc11inline with a notional plane of the opposing force created by the resistance band14—the notional plane being inline with an imaginary target line of the disc golf throw. Attaching the training device10to a tree or other fixed outdoor structure, allows the user to quickly, conveniently and fully warm-up before competitions. In other embodiments, the present training device10may incorporate multiple elastic resistance bands14,14′ such as shown inFIG.11. The interchangeable elastic resistance bands14,14′ combine to offer customized resistance levels of the exemplary training device10. For the purposes of describing and defining the present invention it is noted that the use of relative terms, such as “substantially”, “generally”, “approximately”, and the like, are utilized herein to represent an inherent degree of uncertainty that may be attributed to any quantitative comparison, value, measurement, or other representation. These terms are also utilized herein to represent the degree by which a quantitative representation may vary from a stated reference without resulting in a change in the basic function of the subject matter at issue. Exemplary embodiments of the present invention are described above. No element, act, or instruction used in this description should be construed as important, necessary, critical, or essential to the invention unless explicitly described as such. Although only a few of the exemplary embodiments have been described in detail herein, those skilled in the art will readily appreciate that many modifications are possible in these exemplary embodiments without materially departing from the novel teachings and advantages of this invention. Accordingly, all such modifications are intended to be included within the scope of this invention as defined in the appended claims. In the claims, any means-plus-function clauses are intended to cover the structures described herein as performing the recited function and not only structural equivalents, but also equivalent structures. Thus, although a nail and a screw may not be structural equivalents in that a nail employs a cylindrical surface to secure wooden parts together, whereas a screw employs a helical surface, in the environment of fastening wooden parts, a nail and a screw may be equivalent structures. Unless the exact language “means for” (performing a particular function or step) is recited in the claims, a construction under 35 U.S.C. § 112(f) [or 6th paragraph/pre-AIA] is not intended. Additionally, it is not intended that the scope of patent protection afforded the present invention be defined by reading into any claim a limitation found herein that does not explicitly appear in the claim itself.
13,037
11857824
DETAILED DESCRIPTION OF THE INVENTION Reference throughout this specification to "one embodiment," "an embodiment," or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases "in one embodiment," "in an embodiment," and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment. Furthermore, the described features, structures, or characteristics of the invention may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention. It is an object of the present invention to provide a strength training apparatus for outdoor use with a hitch receiver and/or ground receiver (or receptor). FIG.1is an isometric, environmental perspective view of a strength training apparatus for outdoor use with a hitch receiver in accordance with the present invention. The hitch assembly116inserts into the tow bracket104mounted to the bottom of the vehicle102. The hitch assembly116takes the place of the trailer hitch ball which would normally insert into the tow bracket104. The hitch assembly116comprises an L-shaped bracket106. The L-shaped bracket106comprises a cantilever114adapted to insert into the tow bracket104on an automobile102. The L-shaped bracket106further comprises a vertical post109affixed at a proximal end to the cantilever114. The distal end118of the vertical post109may be adapted to rest at or near a ground or subgrade surface in some embodiments, or to hang free in other embodiments. A plurality of anchor points110circumscribe the distal end118of the vertical post109. The vertical post109and cantilever114may be formed of angle iron, tubular components, or solid shafts. The vertical post109and cantilever114may be formed of steel, aluminum, titanium or alloy or polymeric materials. The anchor points110may comprise steel loops affixed to the distal end118of the vertical post109. Alternatively, the anchor points110may comprise a plurality of bores traversing the vertical post109. The L-shaped bracket106may also comprise one or more jack screws112adapted to form a friction fit with a cruciform300, further described below, which is adapted to insert into the open top end124of the L-shaped bracket106. The jack screws112may be positioned on forward, side, or rear surfaces of the cantilever. The apparatus100may comprise one or more annular, threaded skirts134which circumscribe a bore traversing the cantilever114. These annular threaded skirts134are adapted to receive a threaded bolt which may traverse the cantilever114and the hitch through the bore136. Alternatively, a depressible pin adapted to snap into aperture136on the hitch receiver or ground receptor and secure the cantilever may substitute for the annular, threaded skirts134. FIG.2is a top perspective view of a crossbeam302of a strength training apparatus for outdoor use with a hitch or ground receiver in accordance with the present invention. 
The apparatus100-500may comprise a cross member (or crossbeam)302. The cross member302may be formed from steel, alloy, metallic or polymeric components and may be tubular or solid. In various embodiments, the cross member302is square as shown from a top cross-sectioned view. A track202runs across a surface of the cross member302. The cross member302may be arcuate. The track202defines a plurality of bores204adapted to secure in place a shuttle210which travels on the track202. The shuttle210may comprise an outwardly-protruding hook212adapted to detachably affix to one or more straps504further described below. In place of straps504, rope, bands, elastomeric tubes, or chain may be used. In various embodiments, the track202is affixed to the cross member302, but in some embodiments, the track202is formed therewith. The cross member302may be formed as a single integrated piece with the cruciform. The cross member302(i.e., crossbeam) may comprise bores204traversing either the track202, the post206, or both. In some embodiments, the shuttle210is adapted to travel up and down the post108. In various embodiments, the track202runs from a ground surface to a predetermined upward height. The cross member302may comprise one or more jack screws208. The cross member302may insert into brackets904or studs. FIG.3is a top perspective view of a cruciform of a strength training apparatus for outdoor use with a hitch receiver or ground receiver in accordance with the present invention. The cruciform300comprises the upright108and a crossbeam302which affixes perpendicularly to the upright108. The crossbeam302may comprise any rigid beam adapted to be affixed, using brackets, bolts, screws or other means known to those of skill in the art, at, or near, a midpoint of the crossbeam302. Anchor points110may position toward the top of the upright108as shown. FIG.4is a side perspective view of various upright components of a strength training apparatus for outdoor use400in accordance with the present invention. In various embodiments, the upright108(i.e. "stipes") comprises a plurality of tubular components402,404, and804which detachably affix together. These components402,404may telescope one from the other in some embodiments. The upright components comprise a ground receiver804, which is a hollow, tubular, elongated component which inserts into a ground and/or subgrade surface. The ground receiver804is capped at one end with an open end on the opposing side. The ground receiver is positioned within a subgrade such that the open top end disposes superiorly to the capped bottom end. The elongated component404slidably inserts into the open top end of the ground receiver804and stops when the flange406comes into contact with the ground receiver804. In this configuration, the upright component402rises superiorly to the remaining components above the ground surface and subgrade such that additional components forming the cruciform are attachable therewith. FIGS.5-6illustrate environmental perspective views of a strength training apparatus for outdoor use with a hitch receiver500,600or ground receiver in accordance with the present invention. In various embodiments, the apparatus500comprises a plurality of elongated straps504formed from flexible, polymeric, inelastic or elastic material(s). These straps504are adapted to selectively affix to any of the anchor points110, the shuttle210, or the crossbeam302. A user502exercises using the apparatus500. Handles506may affix to distal ends of the straps504for use in performing various exercises. 
In various embodiments, the apparatus500includes a cross strap504cadapted to pull the strap504inwardly. As shown inFIG.6, the crossbeam302may also comprise a plurality of tracks202b-cupon which additional shuttles210b-ctravel. Like shuttle210a, shuttles210b-cmay travel on the crossbeam302itself rather than tracks202b-c. The shuttles210b-ccomprise hooks212for anchoring straps504. In some embodiments, the apparatus600comprises a plurality of crossbeams302. FIG.7is an environmental perspective view of a strength training apparatus for outdoor use with a hitch receiver700in accordance with the present invention. As shown, straps504may affix to anchor points110at a top/distal end of the upright108. FIG.8is an environmental perspective view of a strength training apparatus800for outdoor use in accordance with the present invention. The apparatus800comprises an upright108(or upright) joined to a track202. Anchor points110position at the top of the upright108and may also position on the lower half of the upright108, as well as at lateral (or distal ends) of the crossbeam(s)302. In the shown embodiment, the upright108inserts into a ground or subgrade802surface. The upright108may position within a tubular, rectangular, receiver within the ground or the upright108may be affixed to a stud402which inserts into the ground receptor804. The upright108and stud402may be formed as a single integrated piece. In various embodiments, the track202is oriented perpendicularly to the crossbeam(s)302and overlaps the same on the upright108. In various embodiments, a rectangular receiver804positions within the ground surface configured to receive one of the upright108, the stud402, and/or the cruciform. The receiver804may jut upwardly slightly from the ground surface802. FIG.9is a perspective view of a strength training apparatus for outdoor use900in accordance with the present invention. In various embodiments, the apparatus900comprises a plurality of crossbeams902which insert into a bracket904affixed to, or welded to, the upright108. The brackets may also be called studs. In various embodiments, the bracket904defines one or more lateral recesses906for receiving the crossbeams902. These lateral recesses may be defined by studs jutting laterally from the upright904. FIG.10is a perspective view of a strength training apparatus for outdoor use1000in accordance with the present invention. In various embodiments, tracks1002affix to a forward surface of the crossbeam(s)902. FIG.11is an environmental perspective view of a handle1100of a strength training apparatus for outdoor use in accordance with the present invention. In various embodiments, the handle1100comprises a dumbbell1102, or any weighted handles, having weighted upper and lower terminal ends1104as shown. The handle1100comprises, or consists of, two attachment points1106a-b, which are both affixed on the peripheral outer edge of a terminal end1104such that the attachment points1106a-bare in parallel with each other and a center shaft1108of the handle1100interconnecting the terminal ends1104. The attachment points1106comprise uninterrupted loops adapted to detachable affix to the straps504. FIG.12is a top perspective view of a strap1200of a strength training apparatus for outdoor use in accordance with the present invention. In various embodiments, the strap504is partitioned into two halves504a-band an intermediate strap1206which is positioned between the straps504aand504b. 
This intermediate strap1206is detachable using D-rings1204, carabiners, or other fasteners known to those of skill in the art. The intermediate strap1206is meant to be detachable and disposable. In some variations, a cross strap affixes to an anchor point110on the apparatus100-500at a proximal end. The distal end of the cross strap may slidably affix to the intermediate strap1206. The cross strap may travel up and down along the length of the intermediate strap1206such that the intermediate strap1206, rather than the strap504, is adapted in some embodiments to withstand the friction and wear caused by the cross strap504acting upon it. The intermediate strap1206may be formed from polymeric materials or organic materials such as leather. The intermediate strap1206may be formed from materials with low friction surfaces adapted to allow the cross strap504to travel freely on the intermediate strap1206or textured surfaces adapted to prevent travel (and wear) of the cross strap504against the intermediate strap1206. FIG.13Ais a top perspective view of a strap1300of a strength training apparatus for outdoor use in accordance with the present invention. The strap1300comprises an intermediate strap1206attached at both terminal ends to straps504a-b. Handles1302a-bdispose at the terminal ends of the straps504a-b. The straps504a-bmay comprise elastic tubing as shown. FIG.13Bis a top perspective view of a strap1350of a strength training apparatus for outdoor use in accordance with the present invention. Unlike strap1300, strap1350is a single, elongated, flexible strap or tube affixed at distal and proximal ends to handles1302. The strap1350in this embodiment comprises a flexible sleeve1352circumscribing a medial portion of the strap1350. This flexible sleeve1352may be slipped onto the strap1350, heat-pressed onto the strap1350, threaded onto the strap1350, or otherwise applied using means known to those of skill in the art. In some embodiments, the flexible sleeve1352is a polymeric coating and/or adhesive which is applied to the strap1350. In various embodiments, the flexible sleeve is formed from polymeric or organic materials, such as nylon or leather. FIG.14is an environmental perspective view of a strength training apparatus1400for outdoor use with a hitch or ground receiver in accordance with the present invention. In various embodiments, the intermediate strap1206positions within an anchor point110or shuttle210and is adapted to take the wear occasioned by the anchor point110on the intermediate strap1206rather than on the strap504. The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
13,604
11857825
DETAILED DESCRIPTION OF EMBODIMENTS The present invention is a weight training auxiliary device for adjusting a weight training load for muscle building and the method of use therefor, comprising: a plurality of weights that generate a load; a load transmission mechanism that transmits to training equipment a load using the weights; and a mechanism for attaching and detaching the weights, the weight training auxiliary device and the method of use therefor characterized in that part or all of the weights have an electromagnet or a permanent electromagnet built in, the mechanism for attaching and detaching the weights includes a conduction relay that controls the supply of power to the electromagnet or the permanent electromagnet, and the weight training load is adjusted by attaching or detaching a portion of the weights. Using the mechanism of the present invention, it is possible to adjust the weight training load by attaching and detaching a portion of the weights, and possible to easily do effective weight training. The present invention is also characterized in that a part or all of the weights has an electromagnet or a permanent electromagnet built in, the mechanism for attaching and detaching the weights includes a conduction relay that controls the supply of power for weights with the electromagnet or the permanent electromagnet built in, and the weight training load is adjusted by attaching or detaching a portion of the weights. The present invention functions in the same manner whether the electromagnet or the permanent electromagnet is adapted. However, when using the permanent electromagnet, when the supply of power is off, the weights are held, and it is possible to release the weights by supplying power to the weight of the necessary site only when necessary, so this also has the advantage of energy saving. The present invention is also characterized in that a weight training machine used connected with the weight training auxiliary device is one of a stack type machine, a weight machine, and free weight equipment. The method of the present invention can be applied to any of various types of weight training machine and weight training equipment requiring these different configurations. The present invention is also characterized in that the plurality of weights that generate the load are a mixture of weights with an electromagnet or a permanent electromagnet built in, and non-magnetic weights that do not include an electromagnet or a permanent electromagnet. This makes it possible to increase the number of weights that can be attached and detached in a single action, and enabling efficient use. It is also possible to use a material with cushioning properties in non-magnetic weights, which is effective in reducing impact and sound when they drop. The present invention is also characterized in that the plurality of weights that generate the load is a mixture of weights with an electromagnet or a permanent electromagnet built in, and non-magnetic weights that do not include an electromagnet or a permanent electromagnet, and the weights with an electromagnet or a permanent electromagnet built in and the weights that do not include an electromagnet or a permanent electromagnet are alternately stacked. This similarly makes it possible to increase the number of weights that can be attached and detached in a single action, and enabling efficient use. 
Furthermore, with a free weight machine or free weights, if several auxiliary devices are attached to the shaft, and the response level of each auxiliary device is changed, even more efficient use is possible. The present invention is also characterized in that the weight training auxiliary device includes a control device, and the control device receives control signals and controls the mechanism for attaching and detaching weights. The present invention is also characterized in that the control signals input to the control device are signals from a wired switch, a wireless switch, a voice sensor, a sound pressure/sound pitch sensor, an acceleration sensor, a gyro sensor, or a plurality of these. The present invention is also characterized in that the sensor is a voice sensor or a sound pressure/sound pitch sensor, and the mechanism for attaching and detaching the weights is controlled according to instructions by vocalization of a trainee or a trainer. This makes it possible to adjust the load according to the required instructions without interrupting training, making it possible to easily do effective weight training. The present invention is also characterized in that the sensor is an acceleration sensor, signals from the sensor are analyzed, and the mechanism for attaching and detaching weights is controlled according to a preset program. The present invention is further characterized in that the sensor is an acceleration sensor, the acceleration sensor is installed in weight side equipment of a stack type machine, or near the weights of a weight machine, or free weight equipment, the state of the trainee and/or the state of the weight side equipment or the state near the weights is detected, and the mechanism for attaching and detaching the weights is controlled according to a preset program. The present invention is also characterized in that a gyro sensor is also used in combination as a sensor. Control that utilizes this sensor information makes even more efficient load adjustment possible, so it is possible to do easy and effective weight training, and this is also effective in terms of ensuring safety. The present invention is also characterized in that in a stack type machine, a conduction relay that controls the supply of power to weights with a built-in electromagnet or permanent electromagnet of a weight training auxiliary device also serves as a weight stopper pin, and the weight can be set manually by setting the conduction relay and the weight stopper pin. In the case of this manual method, it is possible to use the same configuration as is. The present invention is also characterized in that in weight machines or free weight equipment, a conduction relay that controls the electromagnet or the permanent electromagnet and the power supply is arranged on a collar connected to a shaft of the weight training auxiliary device, and the weight can be set by attaching and detaching the weights attached by the electromagnet or the permanent electromagnet by setting the conduction relay. In general, adjusting the weights in real time is very difficult in weight machines or free weight equipment, but this is realized by the configuration of the present invention. It is also possible to efficiently adjust the load in weight machines or free weight equipment, making it possible to do easy, effective weight training, and this is also effective in terms of ensuring safety. 
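For illustration, the reversed on/off semantics of the conduction relay described above (an ordinary electromagnet holds a weight while energized, whereas a permanent electromagnet holds it while de-energized and releases it when energized) can be wrapped in a small control abstraction so that the rest of the control device does not need to know which magnet type is installed. The following Python sketch is illustrative only and uses assumed names (ConductionRelay, MagnetType, gpio_write, channel numbers); the patent does not specify any particular implementation.

```python
from enum import Enum

class MagnetType(Enum):
    ELECTROMAGNET = "electromagnet"        # energized -> holds the weight
    PERMANENT_ELECTROMAGNET = "permanent"  # energized -> releases the weight

class ConductionRelay:
    """Hypothetical driver for one conduction relay feeding one weight plate's magnet."""
    def __init__(self, channel: int, magnet_type: MagnetType, set_output):
        self.channel = channel
        self.magnet_type = magnet_type
        self._set_output = set_output      # callable(channel, on: bool) -> None

    def hold(self):
        # An ordinary electromagnet needs current to hold the plate;
        # a permanent electromagnet holds the plate with the power off.
        on = self.magnet_type is MagnetType.ELECTROMAGNET
        self._set_output(self.channel, on)

    def release(self):
        # Reversed semantics: cutting power drops a plate held by an electromagnet,
        # while energizing a permanent electromagnet cancels its field and drops the plate.
        on = self.magnet_type is MagnetType.PERMANENT_ELECTROMAGNET
        self._set_output(self.channel, on)

# Example: shed the two lowest magnetic plates on a stack machine when instructed.
def gpio_write(channel, on):               # placeholder standing in for real relay I/O
    print(f"relay {channel} -> {'ON' if on else 'OFF'}")

relays = [ConductionRelay(ch, MagnetType.PERMANENT_ELECTROMAGNET, gpio_write) for ch in range(4)]
for r in relays:
    r.hold()                               # normal training: all plates held
for r in relays[:2]:
    r.release()                            # load-adjust command: drop two plates
```

In practice the set_output callable would drive the actual relay hardware; here it is stubbed with a print statement purely for illustration.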
The present invention is a weight training auxiliary device comprising a mechanism for adjusting a weight training load for muscle building, in particular, avoiding danger by reducing the load by releasing a portion of the weights in an emergency, comprising a plurality of weights for generating a load, and a mechanism for releasing a portion of the weights in an emergency, characterized in that when the trainee is in a danger state due to an excessive load, danger is avoided by releasing a portion of the weights by an instruction from the trainee or a dispatch from a mechanism that detects the trainee state. Especially when doing in-depth training, it is necessary to do training at the load weight limit of the trainee, but in that case, when the limit is exceeded during training, there is the risk of dropping the training equipment and causing damage to the equipment, or if dropped on the trainee's body, this can lead to injury, or in the worst case even death. The present invention is effective in reducing these risks. The present invention is also characterized in that a mechanism for emergency release provided in a portion of the weights normally holds weights released in an emergency using an electromagnet or a permanent electromagnet, and a subject weight is released by disabling the magnetic force in an emergency. This makes reliable and immediate release possible. The present invention is also characterized in that a mechanism for emergency release provided in a portion of the weights normally holds the weights released in an emergency using a J-shaped holding tool, and the subject weight is released by operating the J-shaped holding tool to make the opening direction downward in an emergency. The present invention is also characterized in that the weights released in an emergency contain magnetic material. The present invention is also characterized in that the weights released in an emergency contain a soft magnetic material. The present invention is also characterized in that for the weights released in an emergency, iron, silicon steel, permalloy, or an amorphous magnetic alloy is used. This makes it possible to operate by particularly controlling magnetic force. The present invention is also characterized in that for the weights released in an emergency, an item for which a granular material containing iron sand is stored in a bag or a container is used. The present invention is also characterized in that the weights released in an emergency comprise a holding plate part for normally holding the weights to be released in an emergency using an electromagnet or a permanent electromagnet, and the holding part is held in contact with a prescribed position of the holding plate by a permanent magnet or an electromagnet provided on the training equipment side. The present invention is also characterized in that the holding plate is formed using a magnetic material. The present invention is also characterized in that the holding plate is formed using a soft magnetic material. The present invention is also characterized in that for the holding plate, iron, silicon steel, permalloy, or an amorphous magnetic alloy is used. The present invention is also characterized in that the holding plate holds the holding part in contact with a prescribed position of the holding plate by an electromagnet or a permanent electromagnet provided on the training equipment side, and only the part excluding the prescribed position is coated by resin. 
This makes it possible to accurately define the mounting position, which is effective for reliable operation. The present invention is also characterized in that in the holding plate, the thickness of the magnetic material is 6 mm or greater. As a result, holding with sufficient magnetic force is realized. The present invention is also characterized in that the weights released in an emergency are connected to the holding plate, and the main weight part is pillar-shaped. The present invention is also characterized in that the weights released in an emergency are connected to the holding plate, the main weight part is pillar-shaped, and a part or all is covered by a cushioning material, or a cushioning member can be attached and detached. This increases safety when dropped, and is also effective for equipment maintenance. The present invention is also characterized in that the mechanism for releasing a portion of the weights in an emergency comprises at least an attachment part that is attached to the shaft of the training machine or the training equipment, a holding part that holds the weights to be released in an emergency, and releases the weights in an emergency, and a control part that receives instructions from the trainee or signals from the mechanism that detects the state of the trainee, and controls the function of the holding part. The present invention is also characterized in that the attachment part attached to the shaft of the training machine or the training equipment has a hole through which the shaft of the training machine or the training equipment penetrates, the hole has an inner diameter that approximately fits the shaft of the training machine and the training equipment, and is rotatable so that the weights released in an emergency are always positioned below the shaft of the training machine or the training equipment. The present invention is also characterized in that the structure that is rotatable so that the weights released in an emergency are always positioned below the shaft of the training machine or the training equipment comprises a bearing. As a result, the weights are not biased during training, and can be reliably dropped during emergency dropping. The present invention is also characterized in that the mechanism that releases a portion of the weights in an emergency comprises a power supply. The present invention is also characterized in that the mechanism that avoids danger by releasing a portion of the weights by an instruction from the trainee or a dispatch from a mechanism that detects the trainee state when the trainee is in a danger state due to an excessive load is activated by detection of vocalization of the trainee, or by sensing changes in the breathing sounds including wheezing. As a result, it is possible to disconnect the load not only in a case when the trainee gives an active instruction to disconnect, but also in a state when vocalization is difficult, and this contributes to improved safety. The present invention is also characterized in that the mechanism that avoids danger by releasing a portion of the weights by an instruction from the trainee or a dispatch from a mechanism that detects the trainee state when the trainee is in a danger state due to an excessive load is activated by activation of a switch attached to or installed near the trainee's arms, legs, fingers, toes, or lips, or by sensing a change with the sensors. 
With this configuration, it is possible to give accurate instructions even in the case of a trainee with a disability, for example. The present invention is also characterized in that in the mechanism that is activated by detection of a vocalization by the trainee or sensing of changes in breathing sounds including wheezing, a microphone is installed near the neck, jaw or lips on the trainee side, and a speech recognition device is provided that judges emergencies from the sounds gathered by the microphone, and when the speech recognition device judges there to be an emergency, a signal is sent to the mechanism that releases a portion of the weights in an emergency. The present invention is also characterized in that the speech recognition device is a speech recognition device using a sound pressure/sound pitch sensor or a microcomputer. The present invention is also characterized in that communication between the speech recognition device and the mechanism that releases a portion of the weights in an emergency is performed wirelessly. As a result, the weight training auxiliary device is realized that particularly avoids danger by reducing the load by releasing a portion of the weights in an emergency, and thus, it is possible to do repetitive exercise safely to the strength limit point, and possible to obtain the effect of maximum strength enhancement and increased muscle mass. Embodiment 1 Following, a weight training auxiliary device of the present invention and the method of use therefor are explained using the drawings.FIG.1is a schematic diagram showing an example of the configuration of a typical load generating device for stack type training. A normal weight plate7is set using a weight stopper pin8, and transmission to a training machine or equipment is done via a load transmission mechanism5. In this way, training is normally performed only using a preset load, and when adjusting the load, the stopper pin had to be removed and inserted to adjust the number of weight plates. FIG.2is a schematic diagram showing an example of typical free weight equipment. Similarly in this case as well, in resistance training done in an unsupported state (barbells, dumbbells, etc.), training is performed using a preset load using weights. In this drawing, in addition to a main weight, provided are sub weights that can be attached and detached, but this attachment and detachment must similarly be done manually in advance. In contrast to this, with the weight training auxiliary device of the present invention and the method of use therefor, realized is a weight training auxiliary device and the method of use therefor that makes it possible to adjust the load of the weight training equipment easily using a simple configuration, and thus, it is possible to safely do repetitive exercise to the strength limit point, and possible to obtain the effect of maximum strength enhancement and increased muscle mass.FIG.3is a schematic diagram showing an example of the configuration of a stack type training machine of a load generating device for training of the present invention. Here, as shown in the left side of the drawing, while electricity is flowed to the electromagnet inside the main weight plate fixed by a stopper pin to the desired number of weight plates, the weight plates for dropping are in a state attached by magnetic force. 
The weight plates for dropping are in a state like floating in the air, and as shown in the right side of the drawing, if the power supply is turned off and the magnetic force is gone, they drop down. With free machines and free weight types, the weights for dropping attached to the electromagnet included with the collar auxiliary device drop down if the power supply is turned off and there is no magnetic force. Turning on and off of the power supply can be done with wires or wirelessly using a switch or sensor. Here, in this embodiment, the configuration uses an electromagnet, and operation is also explained using this case, but it is also possible to configure the present invention using a permanent electromagnet, and in this case, though the on and off operation of the power supply is reversed, the same action can be obtained. FIG.4is a schematic diagram showing an example of a configuration in which the load generating device for training of the present invention is used for free weight equipment. With free machines and free weight types, the weights for dropping that are attached to the electromagnet included with the collar auxiliary device drop down if the power supply is turned off and there is no magnetic force. Turning on and off of the power supply can be done with a wire or wirelessly using a switch or sensor. In this way, in free weight equipment as well, it is possible to exhibit the same function if using the mechanism of the present invention on sub weights. Here, in this embodiment, the configuration uses an electromagnet, and operation is also explained using this case, but the present invention can also be a configuration using a permanent electromagnet, and in this case, though the on and off operation of the power supply is reversed, the same action can be obtained. FIG.5is a schematic diagram showing an example of a configuration of a stack type training machine of the load generating device for training of the present invention, and is an example of a configuration combined with a voice sensor. Using the voice recognition sensor, whether wired or wirelessly, it is possible to drop the weights by turning on and off the current in response to vocalization by a trainee or a trainer, and possible to adjust the load in real time. FIG.6is a schematic diagram showing an example of a configuration in which the load generating device for training of the present invention is used for free weight equipment, and is an example of a configuration combined with a voice sensor. In this embodiment as well, it is possible to adjust the load in real time with free weight equipment. FIG.7is a schematic diagram showing an example of a configuration of the load generating device for training of the present invention, and is an example of a configuration combined with an acceleration sensor. In a stack type or weight machine type, by attaching an acceleration sensor, there is recognition when there is insufficient power for training movements where the speed of the weight rise is extremely slow or the range of motion is not reached, and by turning the power supply on or off, weight plates are dropped to resolve the insufficient power, making more in-depth training possible, and making it possible to ensure safety. FIG.8is a schematic diagram showing an example of a configuration in which the load generating device for training of the present invention is used for free weight equipment, and is an example of a configuration combined with an acceleration sensor and/or a gyro sensor. 
In the free weight type, it is possible to obtain the same operation as noted above by attaching the acceleration sensor and the gyro sensor. The abovementioned acceleration sensor can be roughly divided for measuring three phenomena of “weight,” “vibration and movement,” and “impact.” By successfully detecting each phenomenon, the output signals of the acceleration sensor can be useful for actual applications. First, the working of the acceleration sensor or the gyro sensor in an automatic method of a stack type machine and/or a weight machine auxiliary device is explained. With the abovementioned training method, when a weight is lifted, upward acceleration works, and a change in the acceleration occurs. That change is sensed, and when the acceleration of the weight is zero or in a range close to zero for a fixed period, there is judged to be insufficient power, and otherwise it is judged to be appropriate. A change in the acceleration also occurs regarding the operation of lowering the weight, but since power is eased for the downward operation, there is basically no need to assist with the weight, and it is not necessary to detect a change in acceleration. In light of that, the up and down direction is judged with the gyro sensor, and by ascertaining the maximum reciprocal range of motion of the weight using the acceleration sensor, a judgment is made of whether the current state is in the outward path or the return path, and when in the return path (during downward operation), even if the acceleration is zero or in a range near zero, this is not judged to be insufficient power. With this judgment, management of turning on and off the conduction is performed (specifically, load adjusting). Here, the maximum range of motion judgment for a stack type machine, weight machine, and/or free weight type automatic auxiliary device is described. With a stack type machine and/or a weight machine, initially, the maximum range (distance) that weights can operate on that machine is set, and by judging the position of the weight with the acceleration sensor, a judgment is made of whether the start point, turnaround point, and end point of the reciprocal movement range of motion are appropriate. In the free weight type automatic auxiliary device, the maximum range of each individual first is set first, and judgment and setting of the appropriate reciprocal movement range of motion are performed. Next, the working of the acceleration sensor of the automatic method in the free weight type auxiliary device is explained. The abovementioned principles can basically be appropriated for the automatic method of the free weight type as well, but there is no set track such as a rail as there is with the stack type machine or the weight machine, and the movement may become irregular, so a gyro sensor is used to detect and respond to various types of operations. This makes it possible to do appropriate load adjusting in real time with free weight type equipment as well. Here, the maximum range of motion judgment is as described above. Use of the gyro sensor is not limited to the free weight type auxiliary device, and can also be utilized to improve the accuracy of movement analysis and load adjustment for the stack type machines and/or weight machine auxiliary devices as well. 
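As a concrete illustration of the judgment just described, the sketch below flags "insufficient power" when the measured acceleration of the weight stays in a near-zero band for a fixed period while the lift is on its outward path, and ignores the return (lowering) path. The class and threshold names (StallDetector, ACCEL_DEADBAND, STALL_SAMPLES) and their values are assumptions for illustration only; the patent leaves these details to the preset program.

```python
# A minimal sketch of the "insufficient power" judgment described above.
# Thresholds, sample rate, and the release callback are illustrative assumptions,
# not values taken from the patent.

ACCEL_DEADBAND = 0.05   # m/s^2 treated as "zero or close to zero"
STALL_SAMPLES = 50      # e.g., 0.5 s at an assumed 100 Hz sample rate

class StallDetector:
    def __init__(self, release_weights):
        self.release_weights = release_weights   # callable that de-energizes relays
        self._stalled = 0

    def update(self, accel_up: float, lifting: bool) -> None:
        """accel_up: vertical acceleration of the weight; lifting: outward-path flag
        (derived from the gyro/acceleration history, i.e., direction of travel)."""
        if not lifting:
            # Return path (lowering): near-zero acceleration is normal, never assist.
            self._stalled = 0
            return
        if abs(accel_up) <= ACCEL_DEADBAND:
            self._stalled += 1
            if self._stalled >= STALL_SAMPLES:
                self.release_weights()           # shed plates to resolve the stall
                self._stalled = 0
        else:
            self._stalled = 0

# Hypothetical usage with the relay sketch shown earlier, called once per sensor sample:
# detector = StallDetector(lambda: [r.release() for r in relays[:2]])
# detector.update(accel_up=0.01, lifting=True)
```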
Embodiment 2 FIG.9is a drawing showing an example of an embodiment of the weight training auxiliary device of the present invention, particularly of an auxiliary device collar for free weights (training machine side attachment device) used in a configuration to avoid danger by reducing the load by releasing a portion of the weights in an emergency. With this embodiment, this device is installed together with the main weight on the barbell shaft for free weight training, and detachable weights are held and used in this device. With this embodiment, a mode was used in which the weights are held using magnetic force. The configuration is such that weights to be released in an emergency are normally held by a permanent magnet or an electromagnet, and subject weights are released by disabling the magnetic force in an emergency. Of course, it is also acceptable to use a configuration in which the weights to be released in an emergency are normally held by a J shaped holding tool, and to have the subject weights released by operating the J shaped holding tool so the opening direction is downward in an emergency. In an auxiliary device collar body for free weights9, a weight holding unit (magnetic force holding unit: electromagnet or permanent electromagnet)10is provided, and this is where the load weight is normally held by magnetic force. There is a hole11on the body through which the shaft of the training machine penetrates, and this is where the shaft of the barbell for free weight training penetrates. Here, the hole has an inner diameter approximately fitted with the shaft of the training machine, and by comprising a bearing, the configuration is such that the weight to be released in an emergency can rotate to always be in a lower position than the shaft of the training machine. A receiving module12is provided on the auxiliary device collar body for free weights9, and signals are received from the trainee side device making it possible to execute the necessary operation. Also comprised are a battery13, an activation switch14, and a booster15. FIG.10is a drawing showing an example of an embodiment of a weight for free weights (weight) according to the weight training auxiliary device of the present invention. The upper two images show examples of weights formed using an iron pole, and the lower image shows an embodiment of a case when granular material containing iron sand stored in a bag is used as the weight. Of the upper two images showing embodiments in which the weight is formed by an iron pole, the upper image shows the mode from the planar direction, and the lower image from the front direction. With this embodiment, a weight body16is formed using an iron pole, and a holding plate17is connected to this. An iron plate of 6 mm thickness is used for the holding plate17, and other than the holding unit is a resin coating part18. A cushioning material grip19is detachably installed on the weight body16. FIG.11is a drawing showing an example of an embodiment of trainee side equipment according to the weight training auxiliary device of the present invention. With this embodiment, equipment20worn on the trainee's body and a voice recognition transmitter18are separate bodies, and the connection between these is wireless. Here, of course it is also acceptable to connect these using a wire. With this embodiment, the equipment20worn on the body is made to be worn on the neck of the trainee, and the voice is detected using a microphone22. 
In a case when the trainee is in a danger state due to an excessive load, a signal is generated by a voice recognition signal transmitter21by detection of trainee vocalization or by sensing a change in breathing sounds including wheezing, this is transmitted to the auxiliary device collar body for free weights, and releasing of the load weights is executed. An audio signal transmitter23and a dirt prevention tape24are provided on the equipment20worn on the body. An audio signal receiver25is provided on the side of the voice recognition signal transmitter21, the audio signal from the microphone22of the equipment20worn on the body is received by the audio signal receiver25from the audio signal transmitter23, a judgment is made by a sound pressure/sound pitch sensor26, and a signal is transmitted to the auxiliary device collar body for free weights9. As a result, weight training that is safe with good efficiency is made possible by realizing the weight training auxiliary device that avoids danger by reducing the load by releasing a portion of the weights in an emergency. Here, the present invention is not limited to the modes of the embodiment, and for example it is also acceptable to have a configuration in which activation is done of a switch attached to or installed near the trainee's arms, legs, fingers, toes, or lips, or activation is done by sensing changes by the sensors, and also acceptable for devices to be linked using a wired connection. INDUSTRIAL APPLICABILITY The weight training auxiliary device and the method of use therefor of the present invention realizes a weight training auxiliary device and the method of use therefor in which it is possible to adjust the load of weight training equipment easily using a simple configuration, and thus, it is possible to do repetitive exercise safely to the strength limit point, and possible to obtain a maximum strength enhancement and increased muscle mass effect, which can be said to have great potential for industrial applicability. Also, the weight training auxiliary device of the present invention realizes the weight training auxiliary device that particularly avoids danger by reducing the load by releasing a portion of the weights in an emergency, and thus, it is possible to do repetitive exercise safely to the strength limit point, and possible to obtain a maximum strength enhancement and increased muscle mass effect, which can be said to have great potential for industrial applicability. EXPLANATION OF CODES 1Weight plate with built-in electromagnet or permanent electromagnet2Non-magnetic weight plate3Control device4Conduction relay5Load transmission mechanism6Conducting wire7Normal weight plate8Weight stopper pin9Auxiliary device collar for free weights10Weight holding unit (magnetic force holding unit: electromagnet or permanent electromagnet)11Hole through which shaft of training machine penetrates12Receiving and energizing module13Battery14Activation switch15Booster16Weight body (iron pole)17Holding plate18Resin coating part19Cushioning material grip20Equipment worn on trainee's body21Voice recognition signal transmitter22Microphone23Audio signal transmitter24Dirt prevention tape25Audio signal receiver26Sound pressure/sound pitch relay
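As a rough illustration of the Embodiment 2 signal path described above (microphone on the trainee side, a sound pressure/sound pitch judgment, a wireless transmission, and release of the held weight at the collar), the following Python sketch separates the trainee-side and collar-side roles. The keyword list, thresholds, and the send/drop_weight helpers are assumptions for illustration; the patent only requires that trainee vocalization or breathing sounds such as wheezing trigger the release.

```python
# Illustrative sketch of the emergency-release path: microphone -> sound
# pressure/sound pitch judgment -> wireless release command -> collar drops its weight.
# All names, keywords, and thresholds below are assumed for illustration only.

DISTRESS_WORDS = {"drop", "help"}
WHEEZE_PITCH_HZ = (400, 2000)   # assumed pitch band for wheezing or labored breathing
WHEEZE_PRESSURE_DB = 70         # assumed sound-pressure threshold

def trainee_side(recognized_text: str, pitch_hz: float, pressure_db: float, send):
    """Runs on the equipment worn by the trainee; 'send' transmits wirelessly."""
    spoke_distress = any(word in recognized_text.lower() for word in DISTRESS_WORDS)
    wheezing = (WHEEZE_PITCH_HZ[0] <= pitch_hz <= WHEEZE_PITCH_HZ[1]
                and pressure_db >= WHEEZE_PRESSURE_DB)
    if spoke_distress or wheezing:
        send({"cmd": "emergency_release"})

def collar_side(message, drop_weight):
    """Runs on the auxiliary device collar; 'drop_weight' disables the holding magnet."""
    if message.get("cmd") == "emergency_release":
        drop_weight()   # e.g., cut power to the electromagnet holding the plate

# Example wiring with placeholders:
collar_side({"cmd": "emergency_release"}, drop_weight=lambda: print("weight released"))
```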
30,834
11857826
DESCRIPTION OF THE PREFERRED EMBODIMENT The present invention relates to a versatile weight bar assembly, for performing a myriad of exercises, that includes a shorter main bar1and a longer extension bar2. The main bar is formed of an elongated tubular shaft3having a pair of opposing, internally threaded ends4and a central collar5. The extension bar2includes an externally threaded post6at a first end that is configured to couple with either threaded end4of the main bar. At an opposing end is a cap7that supports the bar on a floor or other surface when a user is performing certain exercises. The cap7covers an internally threaded end54for coupling additional attachments as described, infra. For example, as depicted inFIG.8, a second extension bar can be secured to the main bar to allow a user to perform various exercises with a wider grip. One or more weight discs8are securable to the main bar, on either side of the central collar. Each disc has a central aperture9that receives either the main bar or extension bar depending upon an exercise being performed. Retaining clamps10can be secured to either bar to retain the weighted disc thereon during a given exercise. Each clamp includes a cup-shaped housing11having a central aperture12for receiving either bar. Spring-biased gripping members extend into the central aperture for releasably engaging the outer circumference of either bar. Depressing a button13retracts the gripping members to allow the clamp to be attached or removed. The weight assembly further includes various attachments that allow a user to perform additional exercises. For example, a hammer attachment14includes a weighted cylinder15having a central, threaded bore16for coupling with the threaded post on the extension bar. The hammer attachment allows a user to swing the extension bar like a sledge hammer or ax against a heavy bag or similar padded surface. As depicted inFIG.9, the hammer attachment can be secured to the free end of each of two extension bars coupled with the main bar to allow a user to further vary a desired exercise regime. An attachable neck pad51allows the user to more comfortably perform an exercise when the bar is placed behind the neck. As such, an adapter52including two opposing threaded posts53couples the hammer attachment with the internally threaded end54of the extension bar. Now referring toFIG.10, a rope60having a plug61at its distal end may be spirally wrapped about the main bar. The plug includes a threaded post62that couples with the hammer-attachment bore16to form a wrist roller for strengthening the forearms. A foot attachment17includes a sole portion18with a transverse bore19for receiving the main bar to allow the user to perform various leg exercises. A releasable strap20extends over the top surface of the attachment for securing to a user's foot. Accordingly, the user can perform numerous exercises that involve suspending a weight using the leg and foot muscles. The main bar and weight disc can be used independently to perform a myriad of resistance exercises similar to conventional dumbbells. In addition, the threaded post on the extension bar can be secured to either end of the main bar to allow a user to perform various exercises that require an elongated implement having a weighted end. Finally, a first extension bar can be secured to the main bar and the second extension bar can be secured to the opposing end of the first extension bar. 
Either end of the elongated bar can then be positioned within the corner of a room to allow a user to perform multiple exercises. The above-described device is not limited to the exact details of construction and enumeration of parts provided herein. Furthermore, the size, shape and materials of construction of the various components can be varied without departing from the spirit of the present invention. Although there has been shown and described the preferred embodiment of the present invention, it will be readily apparent to those skilled in the art that modifications may be made thereto which do not exceed the scope of the appended claims. Therefore, the scope of the invention is only to be limited by the following claims.
4,184
11857827
DETAILED DESCRIPTION Examples of a plate-sensing base for a weight-selectable or adjustable free weight (e.g., an adjustable dumbbell or barbell) are described, which may be provided (e.g., to a user) as an exercise system together with the adjustable free weight (e.g., dumbbell or barbell). An adjustable dumbbell or barbell may include a handle assembly and a plurality of weight plates, selectively attachable to the handle, e.g., to opposite ends thereof. The plurality of weight plates and the handle assembly may be configured such that each of the plurality of weight plates can be selectively coupled to and decoupled from the handle assembly through the operation of a selection mechanism. The base is configured to support the adjustable free weight and/or the individual weight plates when not in use. For example, the base may include a support cradle (or simply cradle), which provides at least one recess in which the free weight is placed when not in use. The recess defines a set of plate wells that receive/accommodate a portion of each weight plate when the adjustable free weight is rested on the base. The plate wells are configured to support the individual weight plates generally vertically when in the base (i.e., when not in use). In some embodiments, the adjustable free weight may be an adjustable dumbbell, which may be implemented according to any of the examples in U.S. Pat. No. 7,261,678, entitled “Adjustable Dumbbell System,” and U.S. Pat. No. 10,518,123, entitled “Adjustable Dumbbell System,” the contents of which are incorporated by reference herein in their entirety for any purpose. In other embodiments, the adjustable free weight may be an adjustable barbell, which may be implemented according to any of the examples in U.S. Pub. App. No. 2020/0306578, entitled “Adjustable Barbell System,” the content of which is incorporated by reference herein in its entirety for any purpose. In some embodiments, the exercise system described herein includes at least one plate-sensing base and at least one adjustable free weight (e.g., an adjustable dumbbell or barbell). In some embodiments, the exercise system includes a pair of plate-sensing bases and adjustable free weights (e.g., a pair of adjustable dumbbells). In some embodiments, the exercise system includes a single plate sensing base and corresponding set of weights, together with multiple, differently shaped handle assemblies (e.g., a straight bar, a curl bar, etc.) for an adjustable barbell system. The plate-sensing base of the present disclosure includes a plate sensing assembly for detecting the presence or absence of individual weight plates in the base when the handle is removed, and thus determining the weight of the free weight when removed from the base. In some embodiments, the base is equipped with a communication interface and is configured to communicate to an external computing device, in some cases automatically upon removal of the handle from the base, the identified plates on the base and/or the determined weight of the free weight. In some embodiments, the plate-sensing mechanism is implemented using a combination of mechanical components (e.g., rigid members such as translating or pivoting levers/arms) which cooperate with one or more sensors of a sensor assembly. Each of the mechanical components has a portion extending into an individual plate well and is biased to extend into the plate well. 
The individual weights, when placed into their respective plate wells, interact with the respective mechanical component (e.g., actuate the rigid member against the biasing force), which movement in turn communicates to a processor, via the sensor(s) associated with the mechanical components, the presence or absence of weights in the plate wells. In some embodiments, the plate-sensing base is configured to communicatively couple to one or more external computing devices to communicate the determined weight of the dumbbell to the external computing device(s). Such a plate-sensing base may thus also be referred to as a connected (or smart) base and may be provided as part of a smart or connected adjustable free weight system. The external computing device may be any computing device of the user of the adjustable free weight, such as a personal mobile device (e.g., a tablet or a smartphone), a laptop, a smart TV or any other computing system that receives the weight selection(s) from the smart base for use in exercise tracking or fitness coaching.FIG.1is an illustration of an adjustable dumbbell10with a connected base20in an exemplary operational environment according to the present disclosure. The adjustable dumbbell10and the connected base20may be part of a free weight exercise system100in which the base20is configured to communicate with an external computing device30. While only a single dumbbell is shown, the free-weight exercise system100may include a set of free-weights, e.g., a pair of dumbbells as are shown as part of the fitness coaching system38. Also, while the free weight is illustrated as a dumbbell in the exemplary system100, in other examples the free weight system100may include a different type of free weight such as an adjustable barbell. The connected (or smart) base20is configured to support the free weight (e.g., dumbbell10) when not in use. The free weight (e.g., dumbbell10) includes a handle grip14operatively associated with a weight selection mechanism12forming a handle assembly15to which one or more of the weight plates16are selectively attachable, based on a selection made by the user via the weight selection mechanism12. The base20is configured to support any weight plates16of the free weight (e.g., dumbbell10) that are not attached to the handle assembly15when the handle assembly15is removed from the base, e.g., when picked up by the user for performing exercise. The connected base20is configured to communicate with one or more external computing device(s)30. By “external” when describing the one or more computing devices30it is implied that the components thereof (e.g., the processor(s), display(s), memory, communication link(s), etc.) are not part of (e.g., integrated into) the adjustable free weight (e.g., the dumbbell) and its base. The external computing device(s)30with which the base20communicatively couples may have various other separate and/or unrelated uses to that associated with the smart base20. The external computing device30may be any type of portable computing/communication device (e.g., a laptop32, a tablet, or a smart phone34, etc.). The external computing30device may, in some embodiments, be a smart/connected TV36or a smart/connected display39of a fitness/coaching system38or other fitness system such as a stationary exercise machine (e.g., an elliptical machine, a stationary bike, etc.) equipped with a display console. 
The external computing device(s)30may be any other suitable computing device(s) that includes at least one processor, display(s) and communication link(s) for receiving and displaying information based on signals from the base20, e.g., for enhancing the user's exercise experience. In some embodiments, the smart base20is configured to communicate directly with the external computing device(s)30, such as via a short range wireless communication protocol (e.g., Bluetooth). In some embodiments, the smart base20may, additionally or alternatively, be configured to communicate with the external computing device(s)30through a wireless network40. The base20may be configured to communicate with the external computing device(s)30via any suitable communication protocols, such as, but not limited to, Bluetooth, Bluetooth Low Energy (BLE), ZigBee, Wireless USB, Wi-Fi, or others. A smart base20according to the present disclosure may be configured to communicate with the one or more external devices via any suitable number of communication links (e.g., a first communication link22, a second communication link24, etc.). Also, the smart base20may be configured to establish multiple communication links to different devices (e.g., pairing with two or more of the user's personal devices, such as their smart phone and their smart TV). Moreover, the external computing devices30may include computing devices with distributed computing functions (e.g., having/accessing storage and/or services residing remotely, such as in the cloud). FIGS.2-4show components of an exercise system200, including a plate-sensing base201according to some embodiments of the present disclosure. The plate sensing base201ofFIGS.2-4may be used to implement the base20of the system100inFIG.1. InFIGS.2and4, the base201is shown together with the handle assembly (or simply handle)202of an adjustable free weight, which in the present example is an adjustable dumbbell208. In other embodiments, the free weight may be an adjustable barbell. The base201is shown alone, without the free weight, inFIG.3for illustrating the various features thereof. Referring toFIGS.2and4, the dumbbell208includes a selection mechanism204for selectively coupling a desired amount of weight to the handle202by way of selectively coupling different combinations of the plurality of weight plates (or simply weights)206to the handle202. The dumbbell208of the present example is configured to operate with ten weight plates206, which are grouped in two sets of five on opposite sides of the handle202. In other examples, the adjustable free weight (e.g., dumbbell or barbell) may be configured for selectively coupling a different number of weights to the handle (e.g., a number fewer or greater than 10). In the present example, the individual weights206are coupled to the handle202between separator discs205spaced axially along the length of the dumbbell208, on opposite sides of the handle202. In other embodiments, the weights, separator discs and/or other features of the adjustable free weight (e.g., the type or placement of the selection mechanism204) may be different. Referring now also toFIG.3, the base201includes a cradle210that supports the weight plates206and handle202when not in use. Any of the weight plates206that are unused (i.e., not attached to the handle) remain in, and are supported by, the cradle210when the handle202is removed from the base201. 
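As one way to picture the behavior described above, the sketch below derives the weight taken with the handle from which plates remain in the cradle and hands the result to a notification callback standing in for the communication link to the external computing device. The plate masses, handle mass, mirrored-pair assumption, and callback are illustrative assumptions only; the patent does not prescribe specific values or message formats.

```python
# A minimal sketch of how the determined weight might be derived and reported.
# Plate masses, the mirrored-pair assumption, and the notify callback are
# illustrative assumptions, not values taken from the disclosure.

PLATE_MASS_LB = [2.5, 5.0, 5.0, 5.0, 5.0]   # one side's five plates (assumed)
HANDLE_MASS_LB = 5.0                        # handle assembly alone (assumed)

def attached_weight(plate_in_well: list[bool], mirrored: bool = True) -> float:
    """plate_in_well[i] is True when the i-th plate of the sensed recess remains
    in its plate well (i.e., it was NOT taken with the handle)."""
    one_side = sum(m for m, present in zip(PLATE_MASS_LB, plate_in_well) if not present)
    sides = 2 if mirrored else 1            # mirrored: same plates on both ends
    return HANDLE_MASS_LB + sides * one_side

def on_handle_removed(plate_in_well, notify):
    # Report the selection to whatever device is listening on the communication link.
    notify({"event": "weight_selected", "pounds": attached_weight(plate_in_well)})

# Example: the two heaviest plates of the sensed recess were taken with the handle.
on_handle_removed([True, True, True, False, False], notify=print)
# -> {'event': 'weight_selected', 'pounds': 25.0}
```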
The cradle210includes positioning walls215that divide each of the two recesses214into a number of plate wells212corresponding to the number of weight plates206of the free weight. Each of the plate wells212is configured to accommodate an individual one of the plurality of weight plates206. The plate wells212are configured to support the unused weight plates in a generally upright (or vertical) position to enable easy and fast alignment of the weights206into their respective plate slots203of the handle202when the handle202is returned to the base (e.g., after an exercise). The cradle210may also serve as an enclosure or housing of the base201substantially enclosing or concealing various internal components of the base (e.g., mechanical and sensor components of the plate-sensing assembly and other electronics of the base). The base201is configured to detect the presence or absence of the individual weight plates206in the cradle for determining the weights remaining in the base and thus the weight attached to the handle. Referring for example toFIG.4, the base201includes a plate-sensing assembly220attached to the cradle210. In some embodiments, the plate-sensing assembly220is implemented using a combination of mechanical components (e.g., rigid members such as pivoting levers) and electronic sensors arranged to individually sense the presence or absence of a weight206in a given plate well212. For example, the plate sensing assembly220may include a plurality of rigid members222, each associated with a respective one of the plate wells212. Each of the rigid members222is movably coupled to the cradle210such that it can move between a first (elevated or released) position and a second (lowered or depressed) position. Each rigid member222may be biased toward the first position (e.g., by a spring217) and may include a plunger portion (or simply plunger)224which protrudes into the respective plate well212when the rigid member222is in the first position. The plunger224protrudes through an opening in the cradle210into the plate well212. The plunger224is positioned in the plate well (e.g., along a bottom surface of the plate well) so as to contact and be depressed by the respective weight plate206when the weight plate206is placed in the plate well212. As such, when a weight plate206is placed in its corresponding plate well212, the weight206acts against the biasing force of the spring217moving the rigid member222, via its plunger224, to its second (lowered or depressed) position and when the weight plate206is removed from its corresponding plate well212, the downward force on the spring is released, and the rigid member222and its plunger224move to their first position under the biasing force of the spring217. In some embodiments, the plunger224is at or below the bottom surface of the plate well212, thus not substantially protrude into the plate well212, when the plunger224and rigid member222are in the second position. In other embodiments, the plunger224may be above the base surface of the plate well212when in the second position. Irrespective of the particular configuration of the plate-sensing assembly, when a given rigid member222is in the first position, a larger amount of the its plunger224protrudes into the plate well212than when the rigid member is in the second position. Movement of the rigid member from the first to the second position displaces the plunger downward towards the foot209of the base. 
Any suitable biasing element, such as a spring217, may be used to bias the individual rigid members222toward the first position in the absence of a weight in the plate well. In some embodiments, a single plate-sensing assembly220is provided to sense the presence or absence of weights in one of the recesses214, which information is used to extrapolate the presence or absence of weights in the other recess, e.g., by assuming that weight plates206are symmetrically coupled to the handle202. Having a single sensor assembly can reduce the complexity of the system and the computational resources required to monitor the states of the sensors of the plate-sensing assembly. In other embodiments, an individual sensor assembly may be provided below each of the recesses214for independently sensing the presence or absence of weights206coupled to each side of the handle202. In some embodiments, the rigid members222are implemented by a set of pivoting levers.FIGS.5A and5Bshow an isometric view and a top view, respectively, of a plate-sensing assembly500according to the present disclosure, andFIGS.7A and7Bshow partial section views of the plate-sensing assembly500, illustrating its operation. The plate-sensing assembly500may be used to implement the plate-sensing assembly220of the base201inFIG.2. The plate-sensing assembly500includes a plurality of levers502which pivot between a first position (e.g., as shown inFIG.7B), which is also referred to as the elevated, raised or released position, and a second position (e.g., as shown inFIG.7A), which is also referred to as the lowered or depressed position. Each lever502has a first end509pivotally coupled to the support structure504and an opposite, second end506that pivots about the lever's pivot axis505. The second end506may thus also be referred to as the free end or pivoting end506of the lever502. The support structure504may be configured to pivotally support and couple each of the levers502to the underside of the cradle210such that the levers502are operatively positioned below the recess214with the plunger508of each lever502extending through an opening207(seeFIGS.7A and7B) in the cradle210and into the respective plate well212. To that end, the support structure504may include a corresponding plurality of pivot mounts523, each configured to pivotally receive the first end509of the respective lever502. Each lever502may be pivotally coupled to the support structure504, and thus to the cradle210, via any suitable pivot joint (e.g., a pin joint). For example, and referring also toFIG.6, each lever502may include a pin or axle512, oriented transversely to the elongate portion513, and thus to the length-wise dimension, of the lever502. The axle512of each lever502is pivotally received in a passage or eye514defined by the support structure504(e.g., by the corresponding pivot mount523). During use, each lever502is pivotable about its respective pivot axis505to move between the first and second positions. In some embodiments, each of the levers502may have a unique form factor (i.e. shape and size) for accommodating operative placement of the pivoting levers502underneath a contoured surface of the recess214. In some embodiments, a subset of the levers (e.g., levers502-1,502-2, and502-3) may have substantially the same form factor, reducing the part-count of unique components of the plate-sensing assembly500. In some such embodiments, the support structure504(e.g., the location of the mounts523) may be configured to substantially align some or all of the levers502horizontally (i.e.
so their axes505lie in substantially the same vertical plane extending out of the page ofFIG.5B), vertically (i.e. so their axes505lie in substantially the same horizontal plane parallel to the page ofFIG.5B), or both, which may facilitate operative placement of the levers relative to a contoured recess214while maintaining a compact form factor. Each of the levers502further includes a portion configured to protrude through the cradle, which is also referred to as the protruding portion, plunger portion or simply plunger508. The plunger508may be positioned near the lever's free (pivoting) end506. When operatively assembled, the plunger508of each lever502may extend into a respective plate well212(see e.g.,FIGS.2and3A, andFIG.7B) when the lever502is in the first (raised) position. In the present example, the plungers508extend generally perpendicularly to the elongate portion513, and thus to the lengthwise dimension, of the respective lever502. A biasing element, such as a coil spring or any other suitable type of compression spring517, biases each of the levers502toward the first position (e.g., as shown inFIG.7B). The compression spring517may be substantially aligned with, such that it is positioned substantially directly below, each plunger508in some embodiments. In some such embodiments, the spring517may be received in a spring housing519below the plunger508. The spring housing519may operatively couple the spring517to its respective lever. In other embodiments, the levers502may be differently biased, such as with a torsion spring operatively associated with each of the axles512. In use, each lever502is physically actuated, through contact with the respective weight206(e.g., via its plunger508), between the first (relatively higher) and second (relatively lower) positions. Each lever502further interacts, through its movement between the first and second positions, with a corresponding sensor532to communicate (e.g., to a processor) the position of the lever502and thus the presence or absence of a weight206in a given plate well212. Each of the levers502includes a sensor engagement portion507. In some embodiments, the sensor engagement portion507of each lever502is located at or near the lever's free (pivoting) end506. In the present example, the plate-sensing assembly500uses hall effect sensors to detect the position of each lever502, and thus the presence or absence of a weight in the base. In other examples, different types of sensors may be used, as will be described further below. In the example inFIGS.5A-7B, the sensor engagement portion507of each lever502is implemented by a magnet534which is carried, in a magnet seat511, at the free end506of the respective lever502. Each of the magnets534is thus fixed to, and moves with, the free end506of the respective lever502as the lever502pivots between the first and second position (e.g., as shown inFIGS.7B and7A, respectively). The magnet seat511may be implemented by a recess525located at the free end506of the lever502. The recess525is configured to accommodate a respective magnet534at least partially therein. For example, the recess525may have a shape corresponding to the shape of the magnet. In some examples, a substantially circular recess may be provided for a circular magnet. Any other suitable shape of the recess and magnet may be used. In some embodiments, the magnet534may be keyed to the seat511such that it only fits in the seat in one or a limited number of orientations.
Each of the magnets534may be press-fit, and optionally also glued, to its respective seat511. Each of the recesses525may have a top opening to accommodate passage of the magnet534and thus facilitate insertion of the magnet534into the seat511. In some embodiments, the sidewalls515that define the recess may be interrupted, providing a side opening521, which may expose a side of the magnet534oriented along (or facing) the length-wise direction of the lever502, to facilitate a more effective engagement with the hall effect sensor. The sidewalls515may encircle the magnet534only partially but sufficiently so as to capture the magnet therein, preventing removal of the magnet along the length-wise direction. In some embodiments, the axle512, the plunger508, and the magnet seat511of a lever502are integrally formed with the elongate portion513, whereby the respective lever502is implemented as an integral/unitary body503. A variety of different types of sensors may be used to implement the sensors532. In the example inFIGS.5A-7B, each of the sensors532is implemented by a hall effect sensor. Thus, the plate-sensing assembly500includes a plurality of hall effect sensors532, each positioned to interact with a corresponding lever502, e.g., via the respective magnet534which is fixed to, and thus moves with, the respective lever502. The movement of the magnet534rails (or shifts) the corresponding hall effect sensor532between its low and high states. In some embodiments, the hall effect sensors532are positioned such that each sensor532generates a high voltage when the lever502is in the first (or elevated/released) position and a low voltage when the lever502is in the second (or lowered/depressed) position. In other embodiments, a reverse alignment may be used such that the sensor(s)532are instead railed (or shifted) to a high voltage state when the lever502is in the lowered position. Each of the sensors532is connected to a circuit (e.g., on a printed circuit board (PCB)530) which generates one or more signals indicative of the states (e.g., high or low voltage) of each sensor532, also referred to as sensor state signal(s). The sensor state signal(s) are communicated to a processor that determines the combination of weight plates206remaining in the base upon removal of the handle202, and consequently the weights206attached to, and thus the total weight of, the free weight208. The processor may be mounted to the PCB530directly, such as on the side opposite the sensors, or connected thereto indirectly via any suitable combination of electric conductors (e.g., a flex PCB or ribbon cable). FIGS.8A and8Bshow a plate-sensing assembly800according to further examples of the present disclosure. The plate-sensing assembly800may be used to implement the plate-sensing assembly220of the base201ofFIG.2. The plate-sensing assembly800may include a number of components similar to those of the plate-sensing assembly500but instead of using hall effect sensors, the mechanical components interact with optical sensors. For example, similar to the plate-sensing assembly500, the plate-sensing assembly800includes a plurality of pivoting levers802, each of which is pivotally coupled to a support structure804. Similarly, each lever802includes an elongate body813, an axle812, a plunger808and a sensor engagement portion807.
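The polarity of the hall effect sensor output (high in the raised position, or the reverse) and the voltage levels are implementation choices. The sketch below, with an assumed threshold and a hypothetical read_voltage() source, shows one way the sensor state signals could be normalized into per-well plate-present flags before the processor computes the selected weight.

```python
# Sketch: normalizing hall effect sensor readings into plate-present flags.
# The threshold, polarity default and read_voltage() source are assumptions.
from typing import Callable, List

V_THRESHOLD = 1.65  # assumed mid-rail threshold for a 3.3 V sensor supply


def plate_present(voltage: float, high_means_raised: bool = True) -> bool:
    """A raised (released) lever means the plate well is empty; a depressed
    lever means a plate is resting on the plunger."""
    raised = (voltage > V_THRESHOLD) if high_means_raised else (voltage <= V_THRESHOLD)
    return not raised


def scan_wells(read_voltage: Callable[[int], float], n_wells: int) -> List[bool]:
    return [plate_present(read_voltage(i)) for i in range(n_wells)]


# Example with a fake reader: wells 0 and 3 are occupied (sensor pulled low).
fake = lambda i: 0.2 if i in (0, 3) else 3.1
print(scan_wells(fake, 5))  # [True, False, False, True, False]
```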
Each of the levers802is pivotally coupled at its one end to the support structure804, and is biased (e.g., by a respective spring817) toward a raised position in which the plunger808protrudes through the base (e.g., as shown in phantom line inFIG.8B). The individual sensors832in this example are optical sensors. For example, each sensor may be an optical interrupt sensor which includes first and second sensor portions832-1and832-2, respectively, spaced apart from, and arranged to face, one another. The first sensor portion832-1may be the optical transmitter832-1(e.g., a light source such as an LED) and the second sensor portion832-2may be the optical receiver (e.g., a light detector), or vice versa. The sensor engagement portion807is implemented by a flag811, which is operatively positioned on the lever to move between a first position when the plunger is in the first, elevated position (as shown in phantom line inFIG.8B) and a second position when the plunger is in the second, lowered position (shown in solid line inFIG.8B). In some embodiments, the optical interrupt sensor is positioned such that the flag811blocks or interrupts the line of sight between the optical transmitter and receiver when the lever802is in the raised position. In other embodiments, the optical sensor is positioned such that the interrupted state (e.g., a low or null value) of the sensor832is instead associated with the lowered position of the lever. Similar to the prior example, each of the plurality of sensors832may be connected to a circuit (e.g., provided on a PCB830) for communicating the sensor signals to a processor. In other embodiments, a different type of optical sensor may be used in place of the photo-interrupters of the plate-sensing assembly800. For example, each sensor832may be a photo sensor having the transmitter and receiver located on the same side of the flag811as opposed to opposite sides thereof. In such embodiments, the transmitter is arranged to transmit light towards the flag, when the flag is in the line of sight of the transmitter, and the receiver is arranged to detect light reflected (e.g., by the flag). When the lever is in a position in which the flag does not substantially block the line of sight of the light transmitter, a smaller amount of light, or no reflected light, is detected by the receiver, resulting in a different signal (e.g., a low voltage state) of the sensor. Also, it should be noted that while the mechanical components (e.g., levers502of the assembly500and levers802of the assembly800) are shown as pivotally coupled to the base201in the illustrated examples, in other embodiments, the mechanical components (e.g., rigid members) that move between the first and second positions to interact with the sensors may be differently movably coupled to the base. For example, each of the rigid members, which may be implemented by a lever or other suitable rigid structure, may instead be supported in a track defined by the support structure, and may be configured to translate up and down, rather than pivot, to raise and lower the plunger portion of the lever. The plate-sensing assembly may be implemented using various other combinations of mechanical and electrical components interacting to detect the presence or absence of the individual weights in the base. For example, as shown inFIGS.9A and9B, each of the mechanical components may be implemented by a rigid member902, which has a portion902protruding into the plate well.
The rigid member902is pivotally coupled to the cradle210and is made from an electrically conductive material to act as a switch. The switch is biased toward the closed position (as shown inFIG.9B), in which the switch closes the sensing circuit932. When the switch is depressed (i.e. when a weight plate206is present), the switch rotates out of position and breaks electrical contact, thereby interrupting the circuit932. When the weight plate206is removed (as shown inFIG.9B), the switch springs back into position (i.e., with plunger portion902extending into the plate well) with its free end making electrical contact and closing the sensing circuit932associated with that particular plate well. A similar sensing circuit is provided for each plate well associated with at least one of the two recesses of the base, such that the presence or absence of the weights can be individually detected. The states of the sensing circuits932are communicated (e.g., via PCB930) to a processor such that the weights remaining on the base, and consequently the weights attached to the handle, can be detected upon removal of the handle from the base. In some embodiments, the sensing circuit may alternatively or additionally include sensors in-line with resistors, such that each combination of docked plates provides a unique summation of total resistance to indicate the user's selected weight. Various other types of switches may be used in a similar fashion in other embodiments (e.g., switches with a linear, pivoting, or other action). In some embodiments, electronics of the base (e.g., the sensors and/or communication interface) may be powered by an on-board power source (e.g., one or more batteries, which may be rechargeable). In some embodiments, the one or more batteries may be replaceable by the end user, and a battery access panel251may be provided in a convenient location of the cradle such as on a side of the cradle accessible to the user even when the free weight is docked on the base. To conserve power, the base may be configured to operate in different modes, including at least one awake or active mode in which power is provided to the plate-sensing assembly, and a sleep or low power mode, during which the sensor assembly may not be powered. The base may be toggled between these modes in a variety of ways. For example, the base may include a switch for toggling the base from the sleep mode to an awake or active mode. The switch may be connected to a button253(seeFIG.3), which may be part of the base's user interface (U/I) configured to enable the user to activate the base and receive feedback about the operational state of the base, e.g., battery status, connection status, etc. In some embodiments, the base is additionally or alternatively configured to automatically switch to active mode upon removal of the handle from the base. In such embodiments, the switch may be additionally or alternatively connected to an activation member255which is engaged (e.g., depressed or released) by the handle when the handle is positioned in or removed from the base. Referring back toFIGS.5A,5Band also toFIG.10, the activation member255may be implemented by a plunger1002extending from a pivoting lever1004. The lever1004is pivotally coupled to the support structure504and thus to the cradle210of the base201. The activation member255(e.g., lever1004and plunger1002) is biased upward towards the handle by a spring1017. Placement of the handle on the base acts against the spring force, depressing the member255downward.
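The resistor variant is only outlined above; one possible realization (an assumption, not a disclosed design) uses binary-weighted resistors switched into a series chain, so that every combination of docked plates yields a unique total resistance that can be decoded back into the combination.

```python
# Sketch of the resistor-summation variant: if each plate well switches a
# binary-weighted resistor into a series chain when its plate is docked,
# every docked combination yields a unique total resistance.
BASE_OHMS = 1000.0  # assumed unit resistance


def total_resistance(docked: list[bool]) -> float:
    # well i contributes 2**i * BASE_OHMS when its plate is docked
    return sum((2 ** i) * BASE_OHMS for i, d in enumerate(docked) if d)


def decode(measured_ohms: float, n_wells: int) -> list[bool]:
    code = round(measured_ohms / BASE_OHMS)  # tolerate small measurement noise
    return [bool(code & (1 << i)) for i in range(n_wells)]


docked = [True, False, True, True, False]
r = total_resistance(docked)             # 1k + 4k + 8k = 13k ohms
assert decode(r + 40.0, 5) == docked     # survives a small measurement error
print(r)
```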
In some embodiments, the spring1017acts indirectly on the lever1004, for example through a second pivoting lever1005positioned between the spring1017and the lever1004. The position of the plunger1002is detected by a sensor, for example a hall effect sensor1036, an optical sensor or any other suitable sensor. For example, the activation member255may include a sensor engagement portion1007similar to that of the lever502. Depending on the type of sensor used, a magnet1006may be provided, fixed to a seat1008which extends from one of the levers1004or1005(if present) in an operative direction towards the hall effect sensor1036. The sensor1036may be connected to the same PCB530supporting the other sensors532of the plate-sensing assembly. Similar to the operation of the pivoting levers502, the movement of the magnet1006caused by the movement of the activation member255up and down rails (or shifts) the hall effect sensor1036between its high and low states to trigger the activation of the base (e.g., waking up the base when the handle is removed) and, responsively, the delivery of power to the plate-sensing components of the base. In the example inFIG.10B, which shows a section view of a similar cradle210′, the activation member255may be implemented by a rigid post1152, which is received in a pocket1153defined by its supporting structure (e.g., support structure504or804). The post1152is configured to move substantially vertically between its raised and lowered positions, as constrained by the pocket1153. The post1152includes a protruding portion or plunger1154that penetrates the cradle and is exposed on the user-facing side thereof. The post1152is biased towards the raised position (e.g., as shown inFIG.10B) by a spring1157. Upon placement of the handle in the cradle210′, the plunger1154is depressed by the handle, lowering the post1152, which may interact with a sensing component to communicate the position of the post to the switch, or may directly couple the position of the post to the wake-up switch of the base. The post configuration of the activation member may be used in a base according to any of the examples herein (e.g., in base201ofFIG.2). The activation member may be implemented differently in other embodiments herein. FIG.11shows a simplified block diagram of electronic components of a smart base1101and an external computing device1151according to the present disclosure. The electronic components of the base1101may be included in a base according to any examples herein (e.g., base201). Similarly, the electronic components of the external computing device1151may be present in the external computing device30ofFIG.1. As shown inFIG.11, the smart base1101according to the present disclosure includes at least a power source1126, one or more sensors1112, one or more I/O devices1118and at least one communication link1114. Optionally, the base1101may include a memory1124and at least one processor1122, e.g., for processing the sensor signals and/or controlling the base's user interface. In some embodiments, sensor data is processed on board the base (i.e. by a processor located in the cradle). In other embodiments, the sensor data is at least partially processed by a processor not housed in the cradle. For example, the final determination of the weight selection of the user may be made by a processor located remotely from the base (e.g., processor1152of the external computing device1151).
The external computing device1151includes one or more I/O devices1160, communication link(s)1156, and at least one processor1152, memory1154and a power source1158. The power source1126of the base1101and the power source1158of the computing device1151may be implemented by on-board power (e.g., a battery), which may be rechargeable in some embodiments. Any suitable battery technology may be used, e.g., Nickel-Cadmium (NiCd), Nickel-Metal Hydride (NiMH), lithium-ion (Li-ion), lithium-sulfur, graphene aluminum-ion, solid state, etc. Additionally or alternatively, the base1101and/or computing device1151may be configured to be powered by an external power source, via a wired connection or wireless connection, e.g., to the grid. The I/O device(s)1118of the base1101may include one or more input devices (e.g., the button253, a keyboard, a touchpad, etc.) and one or more output devices (e.g., one or more status indicators, which may be implemented by one or more discrete LEDs, an LED display, an ELD display, or a display of any other suitable type). The I/O device(s)1160of the external computing device1151may include at least one display1162(e.g., for displaying information relating to the exercise system), which may be implemented by any suitable display technology such as liquid crystal display (LCD), LED, organic LED, plasma display (PDP), quantum dot (QLED) display, etc. The I/O device(s)1160may further include various other input and output devices such as a microphone, a speaker, a keyboard, a touchpad, and/or a touchscreen. The communication links1114and1156of the base1101and computing device1151, respectively, may be implemented using any suitable wireless communication interface/technology, such as Bluetooth, Bluetooth Low-Energy (BLE), ZigBee, Near-Field Communication (NFC), Wi-Fi, or a cellular communication technology, such as GSM, LTE, or others. The processor1122, which may be interchangeably referred to as a controller, and the processor1152may be implemented by any suitable processor type including, but not limited to, a microprocessor, a microcontroller, a digital signal processor (DSP), a field programmable gate array (FPGA) where the FPGA has been programmed to form a processor, a graphical processing unit (GPU), an application specific integrated circuit (ASIC) where the ASIC has been designed to form a processor, or a combination thereof. For example, the processors1122and/or1152may include one or more cores, which may include one or more arithmetic logic units (ALUs), floating point logic units (FPLUs), digital signal processing units (DSPUs), or any suitable combinations thereof. The processors1122and/or1152may further include one or more registers communicatively coupled to the core(s), which are implemented by any suitable combination of logic gates and/or memory technologies. The processors1122and/or1152may include one or more levels of cache memory coupled to the core(s) for providing data and/or computer-readable instructions to the core(s) for execution. The cache memory may be implemented by any suitable cache memory type, for example, metal-oxide semiconductor (MOS) memory such as static random access memory (SRAM), dynamic random access memory (DRAM), and/or any other suitable memory technology. The on-board memory1124of the base and the memory1154of the external computing device1151may be implemented, in part, by the cache memory of the respective processor and may thus include volatile memory.
The memory1124and/or memory1154may also include non-volatile memory, in some embodiments, which may be implemented using any suitable non-volatile memory technology such as Read Only Memory (ROM) (e.g., masked ROM, Erasable Programmable ROM (EPROM), or others), Random Access Memory (RAM) (e.g., static RAM, battery backed up static RAM, Dynamic RAM (DRAM), or others), Electrically Erasable Programmable Read Only Memory (EEPROM), Flash memory, or others. The electronic components of the base and external computing device may be communicatively connected using any suitable circuit(s)1120and1164, respectively (e.g., a data bus). A base according to any of the examples herein (e.g., base201) may include a button for activating the sensing function of the base.FIG.12shows an example of a user interface (U/I)1200that may be used to implement the user interface of a base according to the present disclosure (e.g., base201). The U/I may include at least one button1210, and a status indicator1211, which may be implemented by one or more lights (e.g., first status light1212, second status light1214, etc., either or both of which may be an LED light or any other suitable light) or by another suitable feedback device, such as an audible indicator (e.g., a speaker). In some embodiments, only a single status light is used to provide various status information such as battery level, connection status, etc. In other embodiments, dedicated lights may be included to provide different types of status information; for example, the first status light1212may signal connection status, while the second status light1214may signal battery level. In some embodiments, the pressing of the button1210activates (or wakes up) the base201, such as by causing power to be provided to the sensor assembly, thereby activating the sensing function of the base201. In some embodiments, the button1210may additionally, optionally, be used for establishing a wireless connection between the base201and a wireless network or directly with an external computing device30(e.g., via Bluetooth pairing). In other embodiments, two separate buttons may be provided, one for activating the base201and one for establishing a wireless connection to the base201. In some embodiments, the base201may be configured to wake up automatically without the user pressing the button1210, such as in response to the removal of the handle from the base201. In other embodiments, the waking of the base201is performed via the user interface1200(e.g., by pressing button1210) and the removal of the handle202causes the automatic transmission of one or more signals (e.g., sensor state signal(s)) to the external computing device30, if the base201is communicatively connected to (e.g., paired with) an external device30. In some embodiments, additional functions may be invoked by the button1210, or additional buttons may be included to provide other functionality by the base201. At any given time, the base201may be in any one of a plurality of operational modes or states. The U/I1200may also exist in different states in which the U/I1200exhibits different behaviors, depending on the operational mode of the base201. Table1300inFIG.13shows different U/I states and the different behaviors of the U/I1200associated therewith. For example, when the base201is in a first operational mode, interchangeably referred to as low-power, sleep or standby mode, the U/I may be in a first state1302, in which the status indicator (e.g., status light1212) is Off.
In some cases, the same U/I state may be associated with two or more different modes of the base201. For example, the U/I state1302may also be associated with the operational mode of the base201in which the base201conserves power while in active, connected mode, for example when the battery of the base201is low (e.g., if battery charge is at 25% or less, if battery power for only 8 hrs of active use remains, or some other predetermined battery level). This mode may also be referred to as power conservation mode, and the U/I1200may exist in the same state as when the base201is in the sleep mode, in which the status light is Off, even though the base may be in active use (e.g., sensing and/or transmitting signals). If the base201is awake but not connected, the base201may be referred to as existing in an “awake but not connected” mode. In this mode, the U/I1200may exist in a second state1304, in which the status light is On (e.g., a continuous or solid light), and has a first color (e.g., white or other predetermined color). If the base201is in connecting mode (e.g., the base201is discoverable or in the process of pairing, such as when using Bluetooth connectivity), the U/I1200may exist in a third state1306, in which the status light1212is intermittently On and Off (i.e. blinking) in the same color as the “awake but not connected” mode. In some embodiments, the color of the blinking status light in the third state1306may be different from the color of the continuous/solid light of the second state1304. Once a wireless connection has been established with the base201(e.g., the base201is paired to an external computing device30), the U/I1200transitions to a fourth state1308, in which the status light is On (continuously) but has a different color than when the base is not connected and/or pairing (e.g., blue or other predetermined color different from the color of the second and/or third states1304,1306, respectively). Finally, if the power supply (e.g., battery) of the base201is low (e.g., below a threshold percentage of charge and/or below a predetermined amount of active use time), the status indicator may provide a warning of the low battery state such as by blinking a predetermined number of times (e.g., 3, 4, or 5 times, in some cases more), in a distinct color (i.e. different from the colors used for other, active operational states), for example a red or orange color, and may then, optionally, turn off (or time out) to conserve power, at which point the U/I1200may transition into the state1302. As previously mentioned, in some embodiments, the U/I1200may include separate indicators for status (e.g., first status light1212) and battery level (e.g., second status light1214). In some embodiments, the battery level indicator (e.g., second status light1214) may be configured to communicate the level of battery power as it is depleted. In other embodiments, the battery level indicator (e.g., second status light1214) may be configured to operate as a low battery indicator which activates only when the battery level falls below a predetermined level (e.g., a power level providing 10 hours (or less) of active use). In some such embodiments, the battery level indicator (e.g., second status light1214) may be tied to the operation of the status indicator (e.g., first status light1212) in that the battery level indicator (e.g., second status light1214) is only on when the status indicator (e.g., first status light1212) is on.
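The state/behavior pairs of Table1300 amount to a small lookup from operational mode to indicator behavior. The enum names and color strings in the sketch below are paraphrases of the behaviors described above, not identifiers from the disclosure.

```python
# Sketch of the U/I states described for Table 1300 (names are paraphrases).
from enum import Enum, auto


class UiState(Enum):
    SLEEP_OR_POWER_SAVE = auto()   # state 1302: indicator off
    AWAKE_NOT_CONNECTED = auto()   # state 1304: solid first color (e.g., white)
    CONNECTING_PAIRING = auto()    # state 1306: blinking
    CONNECTED = auto()             # state 1308: solid second color (e.g., blue)


INDICATOR_BEHAVIOR = {
    UiState.SLEEP_OR_POWER_SAVE: ("off", None),
    UiState.AWAKE_NOT_CONNECTED: ("solid", "white"),
    UiState.CONNECTING_PAIRING: ("blinking", "white"),
    UiState.CONNECTED: ("solid", "blue"),
}

LOW_BATTERY_WARNING = ("blink_n_times", "red_or_orange")  # then time out to off

print(INDICATOR_BEHAVIOR[UiState.CONNECTING_PAIRING])
```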
This ensures that the battery level indicator is only On and using power when the user is likely to be interacting with the base and can thus see the indicator, thereby preserving battery power. The battery level indicator (e.g., second status light1214) may be configured to follow the same time-out process as the status indicator, e.g., as described further below with reference toFIG.15. In some embodiments, the various status indications (e.g., low battery, connection status, etc.) associated with the base/adjustable dumbbell may be communicated to the user via the external computing device to which the base is connected, in addition to or instead of the indicator(s)1211. The base201is configured to transition to an active (or awake) state, in which power is provided to the sensing components, when the button1210is manipulated in a predefined manner (e.g., pressed once). In some embodiments, the base201additionally or alternatively transitions to the awake state automatically upon removal of the handle202from the base201.FIG.14shows a flow diagram of a process1400via which the base201, and consequently the U/I1200, transition from the low power (or sleep) mode to active, connected mode. The base201is initially in the low-power mode, as shown in block1402. The base201may exist in this state when the base201is not in active use (e.g., after the handle202has been on the base201for a set period of time). In the embodiment inFIG.14, the pressing of the button1210causes the base201to wake up, and the processor of the base determines if a wireless connection has been previously established with the base. For example, when using a Bluetooth connection, the processor determines if the base has previously been paired with a device, as shown in block1404. If the answer at block1404is Yes, an attempt to re-establish the previously set up connection is made. Continuing with the Bluetooth example, the base attempts to locate the previously paired device, as shown in block1406. If a device the base was previously paired with is present, the base automatically re-pairs with that device. While the base is attempting to re-establish connection, the U/I exists in the associated state (e.g., the third state1306ofFIG.13). When the wireless connection has been established (e.g., upon successfully re-pairing with the external device, as shown in block1410), the base transitions to an active, connected state, as shown in block1412, and the U/I shifts to the associated state (e.g., the fourth state1308ofFIG.13). If the answer at block1404is No (e.g., the base has not been previously paired), the base transitions to an “awake but not connected” mode, as shown in block1408, and its U/I shifts to the associated state (e.g., the second state1304ofFIG.13). Similarly, if the attempt to re-establish the previous connection is not successful at block1410(e.g., the device is not present or pairing is not successful for some other reason), the base may transition to the “awake but not connected” mode (block1408) and the U/I would transition to the associated state (e.g., state1304). As shown in block1408, the base and U/I may remain in such states for a predetermined period of time (e.g., before the base times out and returns to sleep mode) or until user input is received (e.g., the pressing of button1210). The flow diagram of process1500inFIG.15illustrates the conditions under which the base201and its U/I1200transition back to low power (or sleep) mode.
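The wake-up path ofFIG.14 reduces to a short decision routine. The sketch below paraphrases that flow; try_reconnect() is a hypothetical stand-in for the wireless stack, and the returned mode names mirror the states discussed above.

```python
# Sketch of the FIG. 14 wake-up flow. try_reconnect() stands in for the
# wireless stack; it is an assumption, not an API from the disclosure.
def wake_up(previously_paired: bool, try_reconnect) -> str:
    """Returns the resulting operational mode as a string."""
    if previously_paired:
        # The U/I shows the connecting/pairing state (state 1306) during this step.
        if try_reconnect():
            return "active_connected"        # state 1308
    # Not previously paired, or the old device was not found / pairing failed.
    return "awake_not_connected"             # state 1304, until timeout or user input


print(wake_up(previously_paired=True, try_reconnect=lambda: False))
# -> "awake_not_connected"
```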
This process1500may thus also be referred to as a “time-out” or “return to sleep” process. As can be seen in block1502, if the base remains inactive for a predetermined period of time, e.g., 15 seconds, 20 seconds, 25 seconds or more, or another suitable predetermined period of time, which period will also be referred to as the period of inactivity, the processor determines, as shown in block1504, whether the weights are docked in the cradle. The processor determines if the weights are in the cradle using the plate-sensing assembly. The term inactivity, as used herein, implies that during this period no user inputs are provided to the user interface and detected by the processor, nor are any sensor state changes detected and communicated to the processor from the plate-sensing assembly. The inactivity period may be user-configurable, e.g., via the user interface or via an external computing device30with which the base201is communicatively coupled (e.g., the user's smart phone, which may execute an exercise tracking and interface application communicating with the dumbbell/base). In other embodiments, the inactivity period is preprogrammed and configurable only by the manufacturer. If the base201remains inactive for the predetermined inactivity period, and upon determination that the weights are docked in the base, the base transitions to the low power (or sleep) mode, as shown in block1510. Consequently, the U/I transitions to the corresponding state, e.g., the first state1302ofFIG.13, in which the status indicator (e.g., status light1212) is Off. If no weights are detected as docked or present in the base at block1504, the processor determines, at block1506, if a wireless connection has been established (e.g., the base is successfully paired to an external computing device30). If the outcome of the determination at block1506is Yes, then the base201remains in active (also referred to as awake), connected mode as shown at block1508. Otherwise, the base201transitions to the low power (or sleep) mode, as shown at block1510, with the U/I shifting to the corresponding state (e.g., state1302). FIG.16shows a flow diagram of a pairing process1600that may be implemented by a smart base according to the present disclosure (e.g., base201). The base may start off in the sleep mode (block1602) and transition from the sleep mode to awake, e.g., responsive to the user pressing the button1210. In embodiments in which the same button is used to invoke different functions, the number of button presses and/or the duration of pressing the button may be used to differentiate between and invoke the different functions of the button. For example, the waking of the base may be effected by pressing the button1210a single time. A pairing function may be invoked by pressing and holding the button1210down for a predetermined period of time (e.g., at least 3 seconds). Other functions may be invoked by a different number or sequence of button presses. As previously described, when the base is in sleep mode, the U/I may exist in a state in which the status indicator(s) are off (e.g., state1302ofFIG.13). Upon waking of the base, the processor determines if the base had previously been connected (see block1604), and if the answer is Yes, the base proceeds to attempt to locate the external device associated with the previously established connection (see block1606), during which time the U/I exists in the “connecting/pairing” state (e.g., state1306ofFIG.13).
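The return-to-sleep conditions ofFIG.15 likewise reduce to a short predicate, sketched below with assumed inputs; the 20-second inactivity period is one of the example values given above.

```python
# Sketch of the FIG. 15 time-out decision (inputs are assumed booleans).
INACTIVITY_S = 20  # example inactivity period; the disclosure leaves this configurable


def next_mode(seconds_inactive: float, weights_docked: bool, connected: bool) -> str:
    if seconds_inactive < INACTIVITY_S:
        return "unchanged"
    if weights_docked:
        return "sleep"                    # block 1510
    return "active_connected" if connected else "sleep"


print(next_mode(25, weights_docked=False, connected=True))   # stays active
print(next_mode(25, weights_docked=True, connected=True))    # sleeps
```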
If the device is available, the connection with this device is re-established, and the U/I shifts to the appropriate state, e.g., state1308. If the base has not been previously connected (e.g., a determination of No at block1604), the base transitions to the “awake but not connected” mode (see block1610) and the pairing process1600may be invoked. As previously described, when the base is in the “awake but not connected” mode, the U/I may exist in the associated state, e.g., as shown in block1623, in which the status light is On but has a different color than when the base is connected (e.g., a solid white light vs. a solid blue light as shown in block1627). Similarly, if the outcome of block1606is unsuccessful, e.g., the base is unable to locate the device to which a connection was previously established, the base transitions to the “awake but not connected” mode as shown in block1610, and the pairing process1600may be initiated for pairing the base with another device. Thus, the pairing process1600may be used when pairing for the first time or to reset the connection to a new device/pairing. To initiate the pairing process1600from the “awake but not connected” state, the user may press the button1210. In some embodiments, pressing the button1210once while in “awake but not connected” mode invokes the pairing process. In other embodiments, to invoke the pairing process a different number and/or manner of button presses is used, e.g., pressing the button and holding it down for a set period of time (e.g., 3 seconds, 4 seconds, 5 seconds or more). In yet other embodiments, a dedicated button for invoking the pairing function may be provided. Operation of the button in the manner associated with the pairing function causes the base to become discoverable (see block1612). Pressing the button again while in pairing mode causes the base to exit pairing mode. When establishing a connection with a new device, the user may additionally provide input to the device to be connected to, e.g., to confirm that the connection should be accepted/established (e.g., as shown in block1614). Prior to confirming the connection at block1614, the user device displays the available connection (see block1611) to enable the user to select/confirm the pairing. If the connection is confirmed (see Yes arrow), the base is successfully paired with the device (block1608), the U/I of the base shifts to the corresponding state (see, e.g., state1308), and optionally a confirmation of the pairing is provided on the display of the user's device (see block1609). If the connection is rejected (see No arrow from block1614), the base returns to the awake but not connected state and, if the base remains in this state for the predetermined inactivity period (see block1618), the base returns to sleep mode. FIG.17shows a flow diagram of state transitions from connecting/pairing mode to low power (or sleep) mode. In some instances, the user may decide to exit the connecting/pairing mode before it times out. For example, if the base is in the connecting/pairing mode (block1702), the user may manipulate the button in a predetermined manner (e.g., press the button once while in pairing mode) to cancel pairing. The base201exits pairing mode, as shown in block1704, and transitions to the “awake but not connected” mode, with the U/I shifting to the associated state (e.g., state1304ofFIG.13). The base remains in this mode until it times out after a predetermined period of inactivity (e.g., after 20 seconds) unless another action is taken.
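Where a single button drives several functions, press duration and the current mode select the function. The classifier below is one plausible reading of the press-and-hold variant described above (single press wakes the base or cancels pairing, a roughly 3-second hold enters pairing); the threshold and mode names are assumptions.

```python
# Sketch: mapping button gestures to functions of the single-button U/I.
# The 3-second hold threshold is an example value from the description;
# the mode names mirror the states discussed above.
HOLD_FOR_PAIRING_S = 3.0


def handle_button(press_duration_s: float, mode: str) -> str:
    if mode == "sleep":
        return "wake"                    # a single press wakes the base
    if mode == "connecting_pairing":
        return "cancel_pairing"          # a press exits pairing mode
    if press_duration_s >= HOLD_FOR_PAIRING_S:
        return "enter_pairing"           # press-and-hold makes the base discoverable
    return "no_op"


print(handle_button(0.2, "sleep"))                # wake
print(handle_button(3.5, "awake_not_connected"))  # enter_pairing
```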
If no action is taken and the base remains inactive for the predetermined inactivity period, the base transitions to the sleep mode (see block1706) and the U/I shifts to the associated state (e.g., state1302). The foregoing discussion has been presented for purposes of illustration and description and is not intended to limit the disclosure to the form or forms disclosed herein. For example, various features of the disclosure are grouped together in one or more aspects, embodiments, or configurations for the purpose of streamlining the disclosure. However, various features of certain aspects, embodiments, or configurations of the disclosure may be combined in alternate aspects, embodiments, or configurations. Moreover, the following claims are hereby incorporated into this Detailed Description by this reference, with each claim standing on its own as a separate embodiment of the present disclosure. All directional references (e.g., proximal, distal, upper, lower, upward, downward, left, right, lateral, longitudinal, front, back, top, bottom, above, below, vertical, horizontal, radial, axial, clockwise, and counterclockwise) are only used for identification purposes to aid the reader's understanding of the present disclosure, and do not create limitations, particularly as to the position, orientation, or use. Connection references (e.g., attached, coupled, connected, and joined) are to be construed broadly and may include intermediate members between a collection of elements and relative movement between elements unless otherwise indicated. As such, connection references do not necessarily infer that two elements are directly connected and in fixed relation to each other. Identification references (e.g., primary, secondary, first, second, third, fourth, etc.) are not intended to connote importance or priority, but are used to distinguish one feature from another. The drawings are for purposes of illustration only and the dimensions, positions, order and relative sizes reflected in the drawings attached hereto may vary.
Like reference numbers and designations in the various drawings indicate like elements. DETAILED DESCRIPTION FIG.1depicts an example mobile weight training system100in accordance with implementations of the present disclosure. The system100includes a collapsible barbell102, a set of weight holders104, a system of filler bags106, and a case108that can double as a weight-bench. The filler bags106can be filled with various readily available materials such as, but not limited to, sand, dirt, water, or gravel, and attached to the weight holders104to create weights for use with the barbell. The weight holders104can be attached to the barbell102by wrapping the weight holders104around the ends of the barbell102or, in some examples, by hanging the weight holders104from the barbell using a handle on the weight holder104(e.g., as shown inFIGS.2D and9C). The system of filler bags106can include several filler bags106of different sizes, each size designed to hold an approximate weight of material (e.g., 5, 10, and 15 pounds). For example, such a mobile weight training system100can include enough weight holders104and filler bags106to produce 270 pounds of weight when filled (e.g., 6 weight holders, twelve 15 lb bags, six 10 lb bags, and six 5 lb bags), but with a travel weight (e.g., weight of the system with empty filler bags) not much greater than the weight of the barbell102alone. FIGS.2A-2Ddepict various views of an example mobile barbell102in accordance with implementations of the present disclosure.FIGS.2A-2Cshow various configurations of the barbell102, andFIG.2Dshows the barbell102with weight holders104attached. Referring first toFIGS.2A-2C, the barbell102can be broken down into three separate parts (202,204,206) for storage and travel. The parts include two end portions (female end portion202and male end portion204) and a middle portion206. Each end portion202,204has either a male coupling210a(male end portion204) or a female coupling210b(female end portion202). The middle section206has a male coupling210aat one end and a female coupling210bat the opposite end. The couplings210a,210bsecure the end portions202,204of the barbell to the middle portion206. Specifically, to assemble the barbell102, the male coupling210aon the male end portion204fastens to the female coupling210bof the middle portion206, and the female coupling210bof the female end portion202fastens to the male coupling210aof the middle portion206. As described in more detail below in reference toFIGS.5,6A, and6B, the couplings210a,210bcan include any of several different coupling mechanisms (e.g., threading, pins, or plunger buttons) to fasten the parts of the barbell102together. FIGS.3A-3C and4A-4Cdepict detail and perspective views, respectively, of each of the parts (portions202,204,206) of the barbell102.FIGS.3A and4Ashow detail and perspective views of the female end portion202. The female end portion202includes a collar212and a hollow cylindrical sleeve214attached coaxially to the female end portion202. Either the collar212, the sleeve214, or both can be attached to the female end portion202such that they are free to rotate around the female end portion202. For example, the collar212and sleeve214can be mounted on bearings (e.g., brass bushings) placed between the female end portion202, and the collar and sleeve214. In some examples, the collar212and the sleeve214can form a single assembly, for example, by affixing the collar212to the sleeve (e.g., by welding or press fitting the two together).
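The 270-pound figure follows directly from the example bag counts; the short check below simply reproduces that arithmetic and the resulting 45 lb of filler per weight holder.

```python
# Quick check of the example capacity: 12 x 15 lb + 6 x 10 lb + 6 x 5 lb bags.
bags = {15: 12, 10: 6, 5: 6}
total_lb = sum(weight * count for weight, count in bags.items())
print(total_lb)        # 270
print(total_lb / 6)    # 45 lb of filler per weight holder (6 holders)
```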
In some implementations, the sleeve214and collar212can be formed in one piece. For example, the sleeve214and collar212can be machined from one piece of material. In some examples, a seal218(e.g., a gasket or V-seal) can be placed between the female end portion202and the collar212, and/or between the female end portion202and the sleeve214to prevent debris from fouling the bearings and impeding the rotation of the collar212and/or the sleeve214. In some examples, the V-seal is mounted axially on the bar, with a lip in contact with the bushing inside the collar/sleeve assembly to prevent debris from fouling the bushing from the interior side of the collar/sleeve assembly. FIGS.3B and4Bshow detail and perspective views of the male end portion204. The male end portion204includes a collar212and a hollow cylindrical sleeve214attached coaxially to the male end portion204. Either the collar212, the sleeve214, or both can be attached to the male end portion204such that they are free to rotate around the male end portion204. For example, the collar212and sleeve214can be mounted on bearings placed between the male end portion204, and the collar and sleeve214. As noted above, in some examples, the collar212and the sleeve214can form a single assembly, for example, by affixing the collar212to the sleeve (e.g., by welding or press fitting the two together). In some implementations, the sleeve214and collar212can be formed in one piece. For example, the sleeve214and collar212can be machined from one piece of material. In some examples, a seal218(e.g., a gasket or V-seal) can be placed between the male end portion204and the collar212, and/or between the male end portion204and the sleeve214to prevent debris from fouling the bearings and impeding the rotation of the collar212and/or the sleeve214. In some examples, the V-seal is mounted axially on the bar, with a lip in contact with the bushing inside the collar/sleeve assembly to prevent debris from fouling the bushing from the interior side of the collar/sleeve assembly. As shown inFIG.2D, the weight holders104are attached to the sleeves214of the barbell102. Further, the sleeves214can have a diameter similar to industry standard Olympic barbell sleeves of, for example, 1 and 31/32 inches, such that the barbell102can be used with standard weights in addition to the weight holders104. In addition to serving as a stopper for weights placed on the sleeves214, the collars212can serve as a hanger for additional weight holders104(e.g., as shown inFIG.2D). For example, referring again toFIGS.3A-3B and4A-4B, the collar212has a channel216formed in the outer surface and running along the circumference of the collar212. The channel216can be sized to cradle a handle on the weight holders104, thereby preventing a weight holder104hung from the barbell102from sliding during lifts. In some implementations, as shown inFIG.2B, the barbell102, when fully assembled, has standard Olympic dimensions, for example, 2.2 m (7.2 ft) long and weighing 20 kg (44 lb); however, implementations may vary in weight and length, for example, to suit differing training routines. In some implementations, the barbell102can be slightly longer than an Olympic barbell; for example, to accommodate the longer weight holders104, the sleeves214can be extended an appropriate distance as compared to a standard Olympic barbell.
In addition, the two end portions202,204can be fastened together without the middle portion206by, for example, coupling the respective male210aand female210bcouplings of the male204and female202end portions together to form a shorter barbell102, for example, a curl bar (as shown inFIG.2C). Although a barbell102made up of three separate portions (202,204,206) is shown, in some examples, the barbell102can be made of more than three portions to, for example, make the barbell102even more compact for travel and storage. For example, as shown inFIGS.3C and4C, which show detail and perspective views of example middle portions, the middle portion206can be formed from two separate middle portions206a,206b. Each middle portion206a,206bhas both male210aand female210bcouplings. Further, the middle portions206a,206bcan be of different lengths, for example, to permit more adaptability in barbell sizes. In some implementations, the middle portions206a,206bcan be sized such that a women's Olympic bar (e.g., 2.01 m (6.6 ft) long and weighing 15 kg (33 lb)) can be formed using only one of the middle portions, and a men's Olympic bar can be formed using both of the middle portions. FIG.5depicts example threading configurations that can be used as coupling mechanisms for the male210aand female210bcouplings. Threading configurations502and504show threads extending from an end of a barbell portion (e.g., portion204,206) along only a portion of the male coupling210a. Further, threading configuration502shows a finer thread pitch than that of threading configuration504. Also, threading configurations502and504represent undercut threading configurations (e.g., a configuration in which the shank of the male coupling has a diameter equal to the pitch diameter of the threads). Although not shown, threading configurations502and504can be modified, in some examples, such that the threaded portion of the male coupling210ais at the distal end (e.g., the end away from the barbell portion204,206). In other words, the unthreaded portion of the male coupling210ais proximate to the barbell portion204,206and the threaded portion of the male coupling210ais at the distal end of the male coupling210a. Threading configuration506shows an example threading configuration in which the threads extend along the entire length of the male coupling210a. Further, threading configuration506represents a full-bodied threading configuration (e.g., a configuration in which the shank of the male coupling has a diameter equal to the major diameter of the threads). Although not shown, the female couplings210bare tapped with corresponding thread grooves. FIGS.6A and6Bdepict another example coupling mechanism for the male210aand female210bcouplings. Referring first toFIG.6A,FIG.6Ashows a cross-sectional view of a pin and hole type of coupling mechanism600. The male210aand female210bcouplings each have corresponding holes604and606, respectively, which can be aligned when the male coupling210ais inserted into the female coupling210b. The two couplings210a,210bare secured together by pins602inserted through the aligned holes604,606. In some implementations, the holes604in the male coupling210acan be tapped to accept a spring and plunger assembly650. The assembly650includes a threaded body652and a movable plunger654held under spring pressure by a spring (not shown) within the body652. The plunger654can be moved into the body652against the spring pressure.
When the assembly650is installed in the male coupling210a, the plunger extends past the outer circumference of the male coupling210aand can lock into a corresponding hole606in the female coupling210b. In some examples, the assembly650can include a thread locking element656(e.g., nylon, thread locking tape, or thread locking liquid). FIG.7depicts an example weight holder104in accordance with implementations of the present disclosure. A front side of the weight holder104is shown inFIG.7. The weight holder104is made of a strong but flexible material, for example, a fabric such as 1000 Denier Mil-Spec Cordura Nylon or other appropriate high strength fabric. The weight holder104has several sealable chambers702in which weights (e.g., filler bags106) can be inserted. The chambers702have an opening704at one end, and a closure mechanism706. The closure mechanism706can be, for example, a zipper or a flap with a fastening device such as, but not limited to, hook and loop fasteners, snaps, or metal or plastic clips. In some examples, each chamber702has a separate closure mechanism (e.g., a separate flap). In some examples, the weight holder104has a single closure mechanism706(e.g., a single flap) that encloses all of the chambers702. The weight holder104has one or more straps708and corresponding strap fastening devices710. The straps708are attached to an end of the weight holder104that is transverse to the orientation of the chambers702, and the strap fastening devices710are attached at an opposite end of the weight holder104, also transverse to the orientation of the chambers702. The straps708and strap fastening devices710are positioned on the weight holder104such that, when the straps708are secured to corresponding strap fastening devices710, the weight holder is wrapped into a hollow cylindrical shape (e.g., seeFIGS.8B and9C), thereby allowing weight holders104to be wrapped around the sleeve214of a barbell102. In some examples, the weight holder is made to lie flat when not wrapped into the cylinder shape, for example, making the weight holder more space-efficient during storage. In some examples, the strap fastening devices710can be hook or loop fasteners corresponding to respective loop or hook fasteners on the straps708. In some examples, the strap fastening devices710can be fastening devices such as, but not limited to, double D-ring loops, buckles, S-hook straps, ladder lock buckles, metal or plastic clips (e.g., corresponding clips on the straps708), or snaps. In some implementations, the weight holder104has three chambers702, and the chambers are oriented on the weight holder such that, when the weight holder104is rolled up and the straps708secured, the weight holder104has a triangular cross-section (e.g., as shown inFIGS.8B and9C). In some examples, the design (e.g., the cross section) of the weight holder104(when wrapped) makes the weight holder self-tightening around the barbell sleeves214. In some implementations, each chamber702of the weight holder104is sized to hold fifteen pounds of filler bags106(e.g., one 15 lb filler bag; one 10 lb and one 5 lb filler bag; or three 5 lb filler bags), with a total fillable weight of 45 lbs. In some examples, the weight holder104includes one or more handles712,714. The handles can be, for example, fabric handles712(e.g., nylon webbing) or molded plastic handles714.
FIGS.8A and8Bdepict example filler bags106and an example weight holder104in accordance with implementations of the present disclosure.FIG.8Ashows a back side of the weight holder104and filler bags106being inserted into the weight holder104.FIG.8Bshows an example weight holder104loaded with filler bags106and rolled up to be placed on an end of a barbell102. For example, in order to attach the straps708to the strap fastening devices710, the weight holder104is rolled into a cylindrical shape. More specifically,FIG.8Ashows filler bags106being inserted into the chambers702of a weight holder104. For example, 15 lb filler bags are shown as being inserted into chambers A and B, and a 10 lb and a 5 lb filler bag are shown as being inserted into chamber C, for a total weight of 45 lbs. When a weight holder104is filled with a desired weight of filler bags106, the weight holder104can be rolled into the configuration shown inFIG.8Bby attaching the straps708to corresponding strap fastening devices710. The filler bags106are made of a high strength flexible material such as, for example, 1000 Denier Mil-Spec Cordura Nylon. The filler bags106also have a closure mechanism802, for example, similar to the closure mechanism706of the weight holder104. The closure mechanism802can be, for example, a zipper or a flap with a fastening device such as, but not limited to, hook and loop fasteners, snaps, or metal or plastic clips. In some examples, the filler bags106can have a double closure mechanism802. For example, the filler bags can have two overlapping closure mechanisms802of the same (e.g., overlapping flaps with hook and loop fasteners) or different type (e.g., a zipper and a flap with hook and loop fasteners). In some examples, the filler bags106may have a watertight liner and watertight closure802such that the filler bags106can be filled with water. In some examples, the filler bags106can have a handle attached to an outer surface of the bag. FIGS.9A-9Cdepict various methods of attaching weight holders104to a barbell102in accordance with implementations of the present disclosure.FIG.9Ashows one weight holder104attached at each end of the barbell102. For example, the weight holders104are wrapped around the sleeves214of the barbell102. The straps708can be pulled snug and attached to the strap fastening devices710to securely fasten the weight holders104to the barbell102. FIG.9Bshows a barbell102with four weight holders104attached. In some implementations, the straps of the weight holders104are long enough that weight holders104can be wrapped around each other on a barbell sleeve214. The first two weight holders104are attached as described above in reference toFIG.9A. Each of the second two weight holders104is then wrapped around one of the first two weight holders104previously attached to the barbell sleeves214. FIG.9Cshows a barbell102with six weight holders104attached (e.g., a third set of two weight holders104). In this example, a weight holder104is hung on each collar212of the barbell. For example, the weight holders104can be hung on the collars212by placing one of the weight holder handles712in the channel216of the collar212. The collar channel216prevents weight holders104hung in this fashion from sliding during lifts. FIG.10depicts an example barbell scale1000. In some implementations, the scale1000may be attached to or integrated with a barbell102. For example, the scale1000can be attached to or integrated with the collar212on either the male portion204, the female portion202, or both.
In some implementations, the scale1000can be a mechanical scale, as shown inFIGS.11A and11B. For example, the scale1000can include a moveable element1102positioned in a notch1104of the collar212. Springs1106are positioned between an inner surface of the notch1104and the moveable element1102. When a weight holder104is hung on the collar212(e.g., as shown inFIG.9C), the weight of the weight holder compresses the springs1106, thereby translating the moveable element1102within the notch1104. A tab1108of the moveable element1102extends through a side surface of the collar212and can serve as a pointer to a calibrated set of weight markings1110on the side of the collar212. In some implementations, the scale1000is an electronic pressure sensing device that includes an electronic pressure sensor1002in electronic communication with an electronic display device1006, for example, through a detachable wire1004. The electronic display device1006includes one or more processors and a data store storing instructions for processing electrical signals from the electronic pressure sensing device1002and displaying a weight. In some examples, the display device can be a mobile computing device such as, for example, a tablet computer or a smartphone. In such implementations, an application executed by the mobile computing device can process the signals from the electronic pressure sensor and display a weight. In some examples, the electronic pressure sensor1002can be integrated with a mechanical scale, such as shown inFIGS.11A and11B. While a number of examples have been described for illustration purposes, the foregoing description is not intended to limit the scope of the invention, which is defined by the scope of the appended claims. There are and will be other examples and modifications within the scope of the following claims.
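By way of a non-limiting illustration of the signal processing described above, the following sketch shows one way an application on the display device1006might convert raw readings from the pressure sensor1002into a displayed weight, assuming a simple two-point linear calibration; the function names, numbers and the linear model are assumptions for illustration rather than features of the disclosure.

```python
# Illustrative only: a two-point linear calibration mapping raw sensor counts
# to pounds, as an app on the display device 1006 might implement it.

def make_converter(raw_zero: float, raw_reference: float, reference_weight_lb: float):
    """Return a function that converts a raw reading into pounds."""
    counts_per_lb = (raw_reference - raw_zero) / reference_weight_lb
    return lambda raw: (raw - raw_zero) / counts_per_lb

# Assumed example: 512 counts with nothing hung, 1536 counts with a 45 lb reference.
to_pounds = make_converter(raw_zero=512.0, raw_reference=1536.0, reference_weight_lb=45.0)
print(f"Displayed weight: {to_pounds(1024.0):.1f} lb")  # 22.5 lb
```

A mobile application as mentioned above could apply the same conversion to readings received over the detachable wire1004or another link.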
11857829
DETAILED DESCRIPTION OF THE INVENTION The following detailed description is of the best currently contemplated modes of carrying out exemplary embodiments of the invention. The description is not to be taken in a limiting sense but is made merely for the purpose of illustrating the general principles of the invention, since the scope of the invention is best defined by the appended claims. Referring now toFIGS.1through13, the following is an itemized reference number list for the Figures. Any assumed quantities and the naming convention used for the following references of the current embodiment of the invention are not limiting but are provided for the reader to understand the best currently contemplated modes of carrying out exemplary embodiments of the present invention. 20motor powered lifting rack system (or "system"),30platform,40side actuating bench,50central bench,55adjustable bench,60frame,62A-62D uprights,64A-64D crossmembers,66camera,68computer,70bar support,70A bar support transition,70B actuator,70C worm gear,70D worm screw,70E worm screw shaft,70F drive shaft,71bar support latch,72basket,73interface,74motor,80safety bar,80A safety bar transition,80B actuator,80C bevel and worm gear,80D worm screw,80E worm screw shaft,80F drive shaft,80G bevel gear,80H bevel gear shaft,80I bevel gear,80J actuator,80K safety bar transition,81safety bar latch,82safety bar notch,83A-B interfaces,84motor,86safety bar cavity,88safety bar recess,90scissor lifting actuator,100camera frame,110lifter,112outline,114wire model,116reference angle,120barbell and121weight plate. Additionally,80A1-2are distal and proximal ends of80A,80K1-2are distal and proximal ends of80K, and70A1-2are the distal and proximal ends of70A. Referring now toFIGS.1through12, the present invention may include a system20. The system20may include a frame60having four vertical uprights62A-62D that extend from a platform30. Four horizontal members64A-64D may interconnect the distal ends of the vertical uprights62A-62D, as illustrated inFIGS.1through5andFIG.12. The platform30may be dimensioned and adapted to secure the vertical uprights62A-62D along a supporting surface. The platform30may provide a safety bar recess88extending between each pair of longitudinal uprights (e.g.,62A and62C form one pair of longitudinal uprights). Each safety bar recess88is dimensioned to receive a safety bar80operatively associated with the respective pair of longitudinal vertical uprights in a nested condition. In the nested condition an uppermost portion of said safety bar80is approximately flush with an upper surface of the platform30. Each safety bar80may provide a safety bar notch82for receiving a portion of a barbell120. A latch81may close off an upper portion of the safety bar notch82, thereby preventing a received portion of the barbell120from being lifted out of the notch82. In the nested condition, the safety bar notch82may occupy a space below the upper surface of the platform30. Therefore, a barbell120being supported by the platform30and/or side actuating benches40may be engaged by the notch82as the safety bar80moves from the nested condition to an elevated condition between the platform30and/or side actuating benches40and the distal ends of the associated pair of longitudinal uprights62A-62C and62B-62D, respectively. The platform30may provide a central actuating bench50disposed between the two pairs of longitudinal uprights and disposed adjacent to a first pair of latitudinal uprights (e.g.,62A and62B are a pair of latitudinal uprights).
The central actuating bench50is movable between a retracted position (seeFIG.1) and an extended position (seeFIGS.2and11). In the retracted position, an upper surface of the central actuating bench50is flush with an upper surface of the platform30. In the extended position the central actuating bench50is adapted to accommodate a recumbent human user. The central actuating bench50may have a scissor lift actuator90or other actuation mechanism for moving the central actuating bench50between the retracted and extended positions, wherein the actuation mechanics are powered by the present invention as shown inFIG.11. The central actuating bench50may be adapted to be an adjustable bench55as shown inFIG.9. These adaptations are for seated incline, decline or flat bench presses or other similar movements. It is understood that the central actuating bench50may not be located between the uprights62A-D (as shown inFIGS.1-3and12), and may instead be mounted, for example, on the outside edge of the platform30between the uprights62A-D, elsewhere on the system20, or on a wall mounted device separate from and next to the system20, and may be lowered/raised/pivoted, etc. into position for bench presses or other similar movements so that, when stored away, it is not in the way of the users doing other movements on the system20. The platform30may provide a side actuating bench40disposed to the outside of each pair of longitudinal uprights. Each side actuating bench40is movable between a retracted position (seeFIG.1) and an extended position (seeFIG.3andFIG.8). In the retracted position, an upper surface of each side actuating bench40is flush with an upper surface of the platform30. In the extended position the side actuating bench40is adapted to accommodate one or more recumbent human users. Each side actuating bench40may have a scissor lift actuator90or other actuation mechanism for moving between the retracted and extended positions, wherein the actuation mechanics are powered by the system20. The side actuating benches40may be adapted to lock in a tilted position, as illustrated inFIG.8, providing an angle of incidence 'A' between the upper surface of the bench40and the platform30. The angle of incidence A may also be determined relative to a plane parallel with the platform30, wherein this parallel plane is associated with an initial, non-tilted orientation/position of the upper portion/surface of the bench, as illustrated inFIG.8. The angle of incidence A can range from zero degrees (parallel with the platform) to any angle afforded by the upper portion of the actuating bench (at some point it may contact the platform30). In some embodiments, the angle of incidence may be ninety degrees or more based on the topology of the platform and actuating bench. This tilting is for rolling the barbell120up and down the platform30to reposition it for other lifts as well as for deflecting a dropped barbell away from the user. Two actuators80B-J and70B may be disposed in each vertical upright62A-62D. The actuators80B-J and70B may be vertically oriented and in a parallel relationship relative to each other as they extend a substantial length of the respective vertical upright (between the distal end and the proximal end, adjacent the platform30). Each actuator80B-J and70B operatively associates with an actuator interface83A-B and73, respectively, along an outer surface of the respective vertical upright, as illustrated inFIGS.6and7.
For each pair of longitudinal uprights, the respective actuator interfaces83A-B and73face each other, as illustrated inFIG.4A. The actuator interfaces83A-B and73also extend a substantial length of the respective vertical upright. In some, but not all, embodiments the actuators80B-J and70B may be worm screw and gear jacks with a translation nut or other forms of linear actuators. In some embodiments, the actuator interfaces83A-B and73may be slots in the vertical upright that communicate with the respective worm screw and gear jacks with a translation nut linear actuator. The actuator interfaces83A-B and73may be dimensioned and adapted to receive and operatively associate with a transition80A-K and70A respectively. The safety bar transition80K may be U-shaped to be received in and slide along the safety bar actuator interface83A, and the safety bar transition80A may be Loop-shaped to be received in and slide along the safety bar actuator interface83B. The support transition70A may be Loop-shaped to be received in and slide along a support actuator interface73. The U-shape and Loop-shape complement each other and enable access to the respective actuators80B-J and70B that are spaced apart in a parallel orientation within the same vertical upright. Each transition80A-K and70A may be received in its respective actuator interface83A-B and73by way of the distal end of the respective vertical upright. Each transition80A/80K and70A has a distal end80A1/80K1,70A1and a proximal end80A2/80K2and70A2, respectively. The distal ends80A1/80K1and70A1may have an engagement element or the like dimensioned and adapted to operatively associate with the respective engagement element of the actuators80B-J and70B. In certain embodiments, wherein the actuators80B-J and70B are screw actuators, the distal ends80A1/80K1and70A1may provide a first gear arrangement that engages a second gear arrangement of the screw actuator so that rotation (clockwise or counterclockwise) of the non-travelling screw actuator causes the transition80A/80K or70A to travel linearly up or down the length of the screw actuator. The proximal ends of the transitions80A2/80K2and70A2may be removably or fixedly attached to the safety bar80and a bar support70, respectively. Referring toFIG.5, the horizontal members64A-64D may house a motor74/84(electric, pneumatic, or the like) with drive shafts70F/80F that couple with the worm screw shafts70E/80E, worm screws70D/80D, worm gears70C, worm/bevel gear80C, bevel gears80G, bevel shaft80H, bevel gears80I, and actuators80B-J and70B in each vertical upright62A-62D so that the actuators80B-J and70B rotate, which in turn selectively moves (i.e., causes travelling of) the respective transitions80A/80K or70A. The present invention contemplates the actuators80B and70B (in a shared vertical upright) being independently rotatable relative to each other. It being understood that other methods to apply a force to lift the bar support70and safety bars80are contemplated by the present invention, such as block and tackle pulley systems, hydraulics, counterweights, other jack screw systems, linear actuators or belt systems. It is understood that the motor74/84, drive shafts70F/80F, worm screw shafts70E/80E, worm screws70D/80D, worm gears70C, bevel shafts80H, bevel gears80G/80I, worm/bevel gears80C need not be housed in the horizontal members64A-64D; they may be housed in the platform30as shown inFIG.9, in the uprights62A-D or any combination of locations housed on or inside the system20.
Additionally, the motor74/84and drive shafts70F/80F could be separate from the system20or a motor74/84could couple and directly engage the actuators80B-J,70B to reduce the number of components for the system20. One embodiment of the present invention may have two motors74/84that independently actuate the bar supports70and safety bars80relative to each other. One motor74may cause the translation of the bar supports70by engaging the drive shafts70F, that rotate the worm screw shafts70E, that rotate the worm screws70D, that engage the worm gears70C, which rotates the actuator70B clockwise or counterclockwise, which in turn selectively moves (i.e., causes travelling of) the respective transition70A. One motor84may cause the translation of the safety bars80by engaging the drive shafts80F, that rotate the worm screw shafts80E, that rotate the worm screws80D, that engage the worm/bevel gears80C, which rotates the bevel gears80G and bevel shaft80H clockwise or counterclockwise, which rotates the bevel gears80I clockwise or counterclockwise, to engage the rotation of the actuators80B-J, which in turn selectively moves (i.e., causes travelling of) the respective transitions80A and80K as illustrated inFIG.4A-CandFIG.5. It is understood that one motor can power the actuators80B-J,70B or scissor lifting actuators90by use of a more complex gear box system (not shown) in the system20. Referring toFIG.4A, the present invention may embody a bar support70that connects to the proximal end of each bar support transition70A. The bar support70may include but is not limited to J-hooks. The bar support70defines a basket portion72for supporting a portion of the barbell120. The basket portion72has a depth. A basket latch71may close an upper portion of the basket portion72, thereby preventing a received portion of the barbell120from being lifted out of the basket portion72. It should be clear that the bar support70may not be J-hooks, but can include any structure (e.g., flat, spherical, cylindrical, etc.) that can engage various fitness equipment (e.g., dumbbells, free weights, resistance bands, etc.) or portions of the human user themselves. Thus, the bar support70can be "universal". Additionally, it should be clear that the safety bar80may not be rectangular bars, but can include any structure (e.g., flat, spherical, cylindrical, etc.) that can engage various fitness equipment (e.g., dumbbells, free weights, resistance bands, etc.) or portions of the human user themselves for "spotting" or safety purposes. Thus, the safety bar80can be "universal". The bar support70and the safety bar80vertically align (since they both connect to the same vertical uprights). The distal ends of each safety bar80may provide cavities86into which the depth of the basket portion72can nest. Note that the safety bar80need not be in the nested condition for this to happen. When this does happen in the nested condition, an upper portion of the basket portion72may be approximately flush with the upper surface of the platform30(as the basket portion72occupies space within the safety bar80) so that, like the safety bar notch82, the basket portion72may receive/engage a portion of a barbell120that is supported on the upper surface of the platform30and/or side actuating benches40and/or the safety bar80. The uprights62A-D may also serve as a stop for the barbell120should the barbell roll up or down the platform30and/or the side benches40and/or safety bar80. This may keep the barbell120from rolling off the system20.
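As a non-limiting numerical illustration of the worm-drive chains described above, the sketch below estimates how fast a transition (and thus a bar support70or safety bar80) would travel for an assumed motor speed, worm reduction and screw lead; none of these values are specified by the present disclosure.

```python
# Illustrative only: linear travel rate of a translating nut driven through an
# assumed worm reduction by one of the motors 74/84.

def travel_rate_in_per_min(motor_rpm: float, worm_ratio: float, screw_lead_in: float) -> float:
    """Inches of vertical travel per minute for the attached transition."""
    screw_rpm = motor_rpm / worm_ratio      # the worm stage reduces the motor speed
    return screw_rpm * screw_lead_in        # one screw lead of travel per screw revolution

# Assumed example: 1750 RPM motor, 30:1 worm reduction, 0.5 inch screw lead.
print(f"{travel_rate_in_per_min(1750, 30, 0.5):.1f} inches per minute")  # ~29.2
```

A self-locking worm pair, as relied on later in this description, generally requires a low lead angle, which is one reason such a travel rate tends to stay modest.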
The uprights62A-D, safety bars80and the side actuating benches40may encompass (along with cameras66, a computer68, and the like, which are disclosed in more detail below) a synergistic system to control the location of the barbell120on the system20. That system may keep the barbell120from rolling off the system20. The actuating side benches40and safety bars80may also assist the lifter with a "lift off" from the bar supports70or back to the bar supports70, should the lifter request it. The central bench50may be used as a surface to squat on like a box for box squats. For that use of the system20, the user would have the barbell120placed in the notch82that is raised by the safety bar80to the user's height to begin the squat and the central bench50actuated to the appropriate anthropometry of the user to squat to. The user would lift the weight off the notch82while facing the computer68to squat to the central bench50. During the squat the notch82would be lowered by the safety bars80so that they would not get in the way of the user to squat to the central bench50. Then when the user squats to the central bench50the user would stand back up while being spotted by the safety bars80and/or side actuating benches40until the barbell120is placed back in the notch82at the top of the squat. The frame60may support cameras66and electrically connected computers68to facilitate command and control of the selectively movable safety bars80, side actuating benches40, central bench50, adjustable bench55and bar supports70. The computer68may have a display and user interface for further enabling the command and control. For instance, the computers68may be configured, based on the pixels captured by the connected cameras66, to selectively move the bar support70to provide the required spacing for the barbell120relative to a person recumbently disposed on the central actuating bench50for bench presses or other similar movements. As a default, the latitudinally opposing bar supports70are kept in alignment. It is understood that the cameras66may not be mounted on the uprights62A-D, crossmembers64A-D or the frame60. The cameras66may be mounted on their own camera frame100as shown inFIG.9and/or mounted separate from the system20. Additionally, it is understood that the computer68may be mounted elsewhere on or inside the system20such as but not limited to on the camera frame100as shown inFIG.9and/or mounted separate from the system20. It is understood that there may be a combination of alternative configurations of the system20, such as but not limited to keeping the crossmembers64A-D but moving the linear actuator motors, shafts, screws and gears to be housed inside the platform30or in the uprights62A-D. Additionally, this includes changing the camera66placement locations, camera66angles that look up/down towards the lifter or platform30, where the cameras66are focused to look on the system20, camera mount100placement locations and the number of cameras66. It is understood that the side actuating benches40may be additionally modified to recess lower than the surface of the platform30to allow deficit movements such as a deficit deadlift and the like. It is understood that the safety bar notch latch81and the bar support latch71may be additionally modified to electronically open and close by way of the computer or other electronic systems. It is understood that all the linear actuating systems described in this application may be modified to be an all-manual system powered by the human user.
It is understood that when the barbell120is placed on the bar support70in the basket72or the safety bar notch82and secured by the latches71or81, respectively, the barbell120may be prevented from moving out of those locations and/or from rotating while secured, so that the barbell120can be used as a pull-up bar that is adjustable to the user's height by use of the actuating systems described in this application. Computer System Command and Control Applications Referring toFIG.13, the computer(s)68may assist the lifters in workout programming, exercise selection, and counting and verifying that repetitions of a movement were properly executed in real time via use of the cameras66. The computer(s)68may assist the lifter(s) in the loading/unloading of a barbell120via use of the cameras66. The computer(s)68may assist in the transition, use, spotting, teaching, coaching/technique correction of the following movements and variations with a loaded or unloaded barbell120in real time via use of the cameras66(including lifting/lowering and repositioning of a loaded or unloaded barbell120to or from the platform30or side actuating benches40or central bench50or adjustable bench55or bar supports70or safety bars80): Press; Bench Press; Squat; Deadlift; Clean; Jerk; Snatch; variations of those movements and the like. The computer(s)68may execute voice commands and/or independently set the safety bars80, the bar supports70, the central actuating bench50, adjustable bench55and side actuating benches40to different heights for better use and safety for each lifter based on their anthropometry. The computer68may use the cameras66to help provide weight verification by line of sight of weights121and/or the barbell120. Each weight121is of a different thickness, diameter and/or color, and the computer68knows which weight121weighs a certain amount. The computer68may use the cameras66to help assist the transition, use, spotting, teaching, coaching/technique movement pattern correction by using the body's reference angles116based on anthropometry as shown inFIG.10A-Cin real time. For example, the angle116between the lifter's back and the platform30can provide enough data to determine if the lifter is setting their back correctly before the start of a deadlift. The computer68may use the cameras66to "see" the lifter/barbell, which are linked up to the computer68that controls the system20, to better assist the lifter. The computer68may use cameras66to "see" if the bar latches/locks71/81are in use or not, to help prevent the system20from operating if they are improperly used and to prevent damage to the system20. Cameras66may be placed at the following locations relative to the frame60: one front middle; one rear middle; two on the sides in the middle; and one on each side, whereby 360-degree visual coverage of the lifter and barbell120is captured. Cameras66may be hung down from mounts on the ceiling of the frame60. The cameras66may be disposed approximately eight feet off the surface of the platform30when hanging from the ceiling mounts that are approximately nine feet from the surface of the platform30. The cameras66may be in fixed and/or moveable positions. The cameras66may be oriented to look downward and towards the center of the platform30.
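As a non-limiting illustration of the reference-angle check mentioned above for the deadlift setup, the sketch below estimates the angle between the lifter's back and the platform30from two tracked image points (hip and shoulder locations of the kind shown inFIG.10B); the coordinates, threshold and function names are assumptions, not values taken from this disclosure.

```python
# Illustrative only: estimate the back-versus-platform angle 116 from tracked
# hip and shoulder points, treating the platform 30 as horizontal in the image.
import math

def back_angle_deg(hip_xy, shoulder_xy):
    dx = shoulder_xy[0] - hip_xy[0]
    dy = shoulder_xy[1] - hip_xy[1]
    return math.degrees(math.atan2(abs(dy), abs(dx)))

# Assumed example coordinates in platform-aligned units.
angle = back_angle_deg(hip_xy=(100.0, 50.0), shoulder_xy=(160.0, 90.0))
print(f"Back angle: {angle:.1f} degrees")                   # about 33.7
print("Cue: set the back" if angle < 15.0 else "Setup OK")  # assumed 15 degree threshold
```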
The computer68may be configured to provide logistics support by knowing what load and position of the barbell120is on the system20as well as on other systems20in a network of systems20, wherein the computer68can communicate to lifter(s) where to go next for their current and future lift(s) and what weight to use to minimize the queue of the system20. For example, if a user had a plurality of systems20within a few feet of each other with different/similar loads on each barbell120on each system20, the computers68will calculate where each person should go and what to do based on their workouts and tell them that in real time to reduce the queue on the system20. The system20may "talk" to lifters via Bluetooth or other wireless communication technology via earphones, speakers, or the like on the frame60, other software applications or "smart" devices. One system20may "talk" to other systems20or other "smart" devices via WIFI, LAN or Bluetooth in the network of systems20. The system20may be capable of LAN, WIFI and/or ethernet wiring and/or being connected to the internet for live coaching by trainers, updates to the system20and/or transmitting data to other computers68, a central computer or data storage and processing systems. The system20may be plugged into a power outlet, use batteries or other power storage and retrieval systems, and have USB outlets, antennas or receivers. The computer68may be configured to keep track of the wear and tear of the system20for engineering updates and spread the wear and tear amongst systems20in a network of systems20. The computer68may be configured to provide weight verification on the barbell120so that the lifter is using the correct weight and to prevent misloading of the barbell120. The computer68may be configured to provide advance lifting support by reducing the perceived load on the barbell120by providing an opposite force on the barbell120. For example, a barbell120may weigh forty-five pounds but a lifter can only lift and lower thirty-five pounds on the bench press. So, an upward force of ten pounds can be applied, via the linear actuators80B-J for the safety bars80and/or actuators90for the side actuating benches40, to make the barbell120"weigh" thirty-five pounds. The computer68may be configured to allow for the use of more advanced lifting techniques such as eccentric overload training. For example, the lifter puts three hundred and fifteen pounds on the barbell120for bench press but can only bench three hundred pounds. The lifter can lower the three hundred and fifteen pounds, but when pressing the weight back up the system20can provide the fifteen or more pounds of force necessary to help the lifter rack the weight, via the linear actuators80B-J for the safety bars80and/or the side actuating bench actuators90for the side actuating benches40. The computer68may be configured to selectively move and lock the central actuating bench50, side actuating benches40, adjustable bench55, bar supports70and safety bars80for assisting the lifter in concentric, eccentric or isometric weight-lifting regimens. The computer(s)68may facilitate a tilt function of side actuating benches40that may be used for repositioning the barbell120along the platform30or side actuating benches40or safety bars80as shown inFIG.8. By tilting the side actuating benches40clockwise or counterclockwise the barbell120will roll in that direction with or without weight on the barbell120. This tilt function may be controlled by a computer68that knows the degree of tilt of the side actuating bench40.
The degree of tilt may be changed by using one of the actuators90to raise or lower one part of the side actuating bench40more than the other part of the same side actuating bench40, and thus a tilt is created. A computer68may know the position of the barbell120via use of the cameras66and may tilt the side actuating benches40to control the location of the barbell120via use of the actuators90. Each platform portion or associated bench40can be independently controlled to tilt, to better control the rolling of the barbell120into position. Similarly, the platform30and side actuating benches40can also be used to help "catch" or "absorb" a dropped barbell120to help dampen the sound and keep the barbell120from bouncing away or towards the lifter(s). The basket latch71and the notch latch81may be manually controlled by the lifter(s), and their securement may be visually verified by use of the cameras66and the computer68display. The computer68may verify the use of the basket latch71and notch latch81by the cameras66so that no damage to the system20will occur if they are improperly used. The computer(s)68may control all motors and actuators of the system20and cameras66. The computer(s)68may also process and relay data to other machines, computers, devices or a central computer in the network. The computer(s)68may collect data on every lifter on weights used, movements executed, time spent unloading/loading the barbell, resting and time spent on each lift including warm up and work sets in real time, as well as positioning of the equipment on the system20when the system20is in use. The computer(s)68may also collect data on how long each lifter is in queue and time spent entering, leaving, and getting prepared for the lift or any other data wanted by trainers, researchers or the users themselves. The safety bars80with the notch82may give the system20the capability to perform as a mono-lift. For example, a person wants to squat two hundred and twenty-five pounds with the mono-lift function. They would load the barbell120to two hundred and twenty-five pounds while the barbell120is positioned in the notch82. They would position themselves under the barbell120as needed for the squat and then start the squat by standing up and moving the barbell120off the notch82; the notch82may then be lowered by the safety bars80controlled by the camera66and computer68system. Then the user would squat without having to move their feet into a new position. Then at the bottom of the repetition of the squat the safety bar and notch82may be raised by the camera66and computer68system so that the lifter can rack the weight back into the notch82at the end of the repetition. The safety bars80and side actuating benches40may complement each other. They may provide more lifting forces and different ways to spot/assist a lifter. The safety bars80may provide a "track" when raised slightly higher than the platform30or side actuating benches40, thus keeping the barbell120from rolling off the system20. Because self-locking worm screw and gear linear actuators70B and80B-J may be used for the uprights62A-D and actuators90, each height of the bar support70, safety bar80, side actuating benches40, central bench50, adjustable bench55is simultaneously self-locking. This makes the system safer in the event of power loss, weight dropping on the components and extends the life of the motors74/84and actuators90powering the system20by putting less stress on the motors74/84and actuators90when loads are moved, lifted, lowered or dropped on the system20.
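As a non-limiting illustration of the tilt-based repositioning described above, the sketch below shows one simple proportional rule a computer68could use to choose a bench tilt from the camera-tracked barbell position; the gain, limits and function names are assumptions and the disclosure does not specify a particular control law.

```python
# Illustrative only: pick a tilt angle for a side actuating bench 40 so the
# barbell 120 rolls toward a target position reported by the cameras 66.

def tilt_command_deg(barbell_x_in: float, target_x_in: float,
                     gain_deg_per_in: float = 0.1, max_tilt_deg: float = 5.0) -> float:
    """Positive tilt rolls the bar toward increasing x; clamped to a safe range."""
    error_in = target_x_in - barbell_x_in
    return max(-max_tilt_deg, min(max_tilt_deg, gain_deg_per_in * error_in))

# Assumed example: the bar sits at 30 inches and should roll to 50 inches.
print(f"Tilt command: {tilt_command_deg(30.0, 50.0):.1f} degrees")  # 2.0
```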
The computer68and camera66system may use the lifter's anthropometry, by approximations of the user's body, to determine the correct reference angles and distances between joints and other parts of the human body for a lifter to configure themselves to lift the barbell120, other weights or devices. As illustrated inFIG.10A-C, the lifter110may have their image taken by the cameras66and simplified to an outline112and wire model114for analysis by the computer68to configure the user to lift the barbell120with proper technique or use other fitness tools. The points inFIG.10Bare numbered to portray a simple example of where some key locations/nodes of the human body (but not all locations) are for calculating reference angles and proper technique. Nodes1and2represent locations of the cervical spine C1 and C7 respectively and the rest of the numbers are odd numbered when viewed from the right to represent the right side of the user. The even numbers not shown represent the left side of the same location. Node3is the right shoulder joint and4would be the left shoulder joint. Node5is the right elbow and node6would be the left elbow, etc. Node7is the right wrist, node9is the center of the right hand, node11is the right hip joint, node13is the right knee, node15is the right ankle, node17is the right heel and node19is the right toes. The computer68and camera66system may approximate these locations to determine the distances between them and finally calculate the lifter's anthropometry and reference angles for the lifts to be executed with proper technique in real time. The present invention contemplates a database of wireframe model exercise routines against which the computer(s)68may compare captured wireframe models in order to make a determination of a proper or improper positioning of one or more of a user's body portions. The voice commands by the computer68to the user may be in the voice of the user, a generic "robotic" voice or other voices such as but not limited to their trainer's or a celebrity's. The cameras66and computer68system may record the movements of a trainer or a user performing a workout in real time, which it can then have users perform for their workout in real time for local or long-distance training on their own systems20. The system20may help the users of said workout routine with coaching of those movements in real time. The cameras66and computer68system may recognize other fitness tools such as dumbbells, exercise bikes, row machines, jump ropes or any other fitness tool and may train people how to use them the same way it would train people how to lift the barbell120. This includes bodyweight movements. The camera66and computer68system may "spot" the user via visual cues. For example, when the system20is configured for a user to bench press, the safety bars80and/or side actuating benches40are raised to a position slightly below the user on the central bench50such that if the barbell120is dropped, they may rise and contact the barbell120and not the user. For example, when the user is on the central bench50and takes the barbell120off the bar supports70, the computer68and camera66system may know that the first movement and position is the start and end of the movement. It may know the barbell120will touch the user's chest at the bottom of the movement before pressing the barbell120back to the initial position because the computer68might have a database of exercises and knows what to expect with that lift or other lifts or movements.
While the user is performing the bench press, if they drop the barbell120intentionally, due to injury or muscle failure, or cannot press the weight off their chest, or experience muscle failure during any other part of the movement, or for other reasons, the camera66and computer68may "see" that and react by raising the safety bars80and/or side actuating benches40to contact the barbell120and assist the user to rack the weight back into the bar supports70(see the illustrative sketch following the specifications below). This process may be very similar to how another human user would "spot" another lifter using visual cues or body language or voice commands. This includes the user using voice commands or body language such as saying "help" or shaking their head "no" to get the system20to assist the lifter. This spotting process is not limited to the bench press but applies to any movement with which the safety bars80and/or side actuating benches40are needed to "spot" the user. It is understood that when the present invention is training users it may consider the user's limitations such as but not limited to range of motion, previous or current injury(s), etc. for training purposes. It is understood that when the barbell120becomes wedged in between the bar supports70and safety bars80the system20may "see" that and prevent damage to the system20. It is understood that the priority of the system20may be the health of the user and not damage to the system20. Platform, Bar Support, and Safety Bar Specifications The following dimensions and specifications of the system20are given so the reader has a general sense of the relative size of the system20. Many aspects of the system20may change. The system20may be significantly larger or smaller than what is specified. Platform30base dimensions may be approximately eight feet wide and approximately nine feet long; the height of the base is determined by the space needed for the scissor lifts and motors/actuators, drive shafts, support trusses, etc. for the scissor lifting actuators90or other actuating devices, but overall, the ceiling (top surface of the crossmembers64A-D) may be approximately nine feet from the surface of the platform30. The central actuating bench50supporting surface may be approximately ten inches wide and may be approximately forty-eight inches in length. The actuating bench50may extend to approximately twenty inches above the platform30. The side actuating benches40may be raised approximately five feet from the platform30and their supporting surface may be approximately twenty-eight inches wide and may be approximately one hundred and four inches in length. The longitudinal spacing of the vertical uprights may be approximately one hundred inches. The latitudinal spacing of the vertical uprights may be approximately forty-eight and one-half inches. The vertical uprights62A-62D may be approximately nine feet in length. The safety bar80may be approximately two inches wide and ninety-six inches in length. The notches82may be approximately mid-length along the safety bar80. The motors74/84, actuators90, worm gears70C, worm screws70D/80D, bevel gears80G/80I, worm/bevel gear80C, drive shafts70F/80F, worm screw shafts70E/80E, bevel shafts80H and connecting mechanisms may also be housed in the platform30or reengineered to be in the uprights62A-D. It is understood that the motor74/84and drive shafts70F/80F may be separate from the system20. The wiring for the cameras66and computers68may be inside the horizontal members64A-64D as well as the vertical uprights62A-62D or inside the platform30.
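As referenced above in connection with the spotting function, the following non-limiting sketch shows one way a computer68could flag a dropped barbell from the camera-tracked bar height so that the safety bars80and/or side actuating benches40are raised; the threshold, frame interval and function names are assumptions rather than features of the disclosure.

```python
# Illustrative only: flag a drop when the tracked bar height falls faster than
# an assumed threshold between camera frames.

def is_drop(prev_height_in: float, curr_height_in: float,
            dt_s: float, threshold_in_per_s: float = 30.0) -> bool:
    velocity_in_per_s = (curr_height_in - prev_height_in) / dt_s  # negative when falling
    return velocity_in_per_s < -threshold_in_per_s

# Assumed example at 30 frames per second: the bar drops 1.5 inches in one frame.
if is_drop(prev_height_in=40.0, curr_height_in=38.5, dt_s=1 / 30):
    print("Raise safety bars 80 / side benches 40")  # this example triggers
```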
The system20may have a terminal where lifters can manually control aspects of the system20. The terminal may be located on the outward facing side of one vertical upright approximately five feet off the platform30. A method of using the present invention may include the following. The system20disclosed may be provided. A lifter would place the barbell120on the bar supports70, in the basket72without additional weight on the barbell120. The barbell120is secured with the basket latch71so the barbell120does not come off the bar supports70while adjusting the barbell120height or loading the barbell120with weight by way of operating the linear actuators70B via the computer68command and control functionality. To adjust the barbell120height the user would selectively operate the motor74accordingly. After adjusting the barbell120height and loading weight on the barbell120the basket latches71may be moved to an unlocked condition so the lifter can lift the weight. Also, the lifter-user may lift, by way of the actuated bar supports70, the barbell120that is supported by the platform30and/or side actuating bench40through utilizing the nested position of the safety bar and its cavities86, which is occupied by the basket portion72of the bar support70. As used in this application, the term “about” or “approximately” refers to a range of values within plus or minus 10% of the specified number. And the term “substantially” refers to up to 90% or more of an entirety. Recitation of ranges of values herein are not intended to be limiting, referring instead individually to any and all values falling within the range, unless otherwise indicated, and each separate value within such a range is incorporated into the specification as if it were individually recited herein. The words “about,” “approximately,” or the like, when accompanying a numerical value, are to be construed as indicating a deviation as would be appreciated by one of ordinary skill in the art to operate satisfactorily for an intended purpose. Ranges of values and/or numeric values are provided herein as examples only, and do not constitute a limitation on the scope of the described embodiments. The use of any and all examples, or exemplary language (“e.g.,” “such as,” or the like) provided herein, is intended merely to better illuminate the embodiments and does not pose a limitation on the scope of the embodiments or the claims. No language in the specification should be construed as indicating any unclaimed element as essential to the practice of the disclosed embodiments. In the following description, it is understood that terms such as “first,” “second,” “top,” “bottom,” “up,” “down,” and the like, are words of convenience and are not to be construed as limiting terms unless specifically stated to the contrary. It should be understood, of course, that the foregoing relates to exemplary embodiments of the invention and that modifications may be made without departing from the spirit and scope of the invention as set forth in the following claims.
11857830
DETAILED DESCRIPTION OF THE INVENTION In the figures an embodiment of a physical training apparatus is shown. The apparatus as a whole is indicated by reference numeral1. The training apparatus1is portable and can be attached to a support structure. InFIG.2it is shown by way of example that the apparatus is attached to a lamppost200. It must be understood that the training apparatus1in the shown embodiment includes a support3with which it can typically be attached by tensioning bands15to a pole type of structure, such as the lamppost200, but also trees etc. However, other supports adapted to attach the apparatus to another structure, e.g. door posts, walls and ceilings, are also conceivable and are considered to be comprised within the concept of the present invention. The apparatus1comprises a housing2, which can be coupled to the support3by a coupling, which will be described in more detail further below. The housing2can pivot with respect to the support3, about a pivot axis4which in the example shown inFIGS.1and2extends in a vertical direction. However, the apparatus1can also be used in a mode in which the pivot axis is not vertical. In one possible practical mode the apparatus may for example be mounted such that the pivot axis4extends substantially horizontally. In a practical embodiment the housing2comprises shells9formed from plastics material or aluminium. The shells9can be made by injection moulding. InFIG.4a view is shown in which one of the shells9is removed. The shells9are placed opposite each other and connected by screws10. The training apparatus1in general includes a flywheel mechanism, which will be explained in more detail further below, and a pull band7that is wound around a spindle8of the flywheel mechanism (cf.FIG.3). The pull band7has on its free end a handle5, which in the embodiment shown in the figures is connected to the pull band7by a carabiner53. During exercise the user can pull the band7via the handgrip5, and set the flywheel mechanism in motion. The housing2has a guiding passage6through which the band7extends towards the spindle (seeFIGS.1and3). Ideally the tensioned band7and the guiding passage towards the spindle8are aligned. However, since the housing2can pivot with respect to the support3around the pivot axis4, the housing2can adjust its own position and swivel to an orientation in which the housing2and the band7are aligned when the user pulls the band7a bit sideways. The pull band7is attached to the spindle8as is shown inFIG.10. The spindle8, which is shown in a cross section, is formed having a substantially cylindrical outer surface12, which constitutes the wind/unwind portion of the spindle on which the pull element7can be wound and unwound. The axial ends of the wind/unwind portion12of the spindle8are delimited by radial collars32integrally formed on the spindle. In a preferred embodiment the diameter of the cylindrical surface12is 15 mm, but this may vary in practice between 10 mm and 16 mm. The spindle8is preferably injection moulded in aluminium. However, also other materials and processes are conceivable to manufacture the spindle. At the wind/unwind portion a gap11extends generally diametrically through the spindle8. The gap11tapers from a wide end to a narrow end. At the end of the pull band7a loop13is formed. To assemble the pull band7and the spindle8the loop13is folded such that it assumes a relatively flat state in which it can be passed through the gap11from the narrow end to the wide end of the gap11.
When the loop13during the assembly extends beyond the outer surface at the wide end of the gap11, a key body14can be inserted in the loop13. When the band7is pulled again the key body14, which preferably has a tapering shape corresponding to the narrowing shape of the gap11, is pulled into the gap and secures the loop13in the gap of the spindle8. To release the pull band7from the spindle8, for example in the event that a user wishes to replace a worn pull band7with a new one, he can lever the loop13and the key body14out of the gap11through the wide end of the gap11with a screwdriver or another suitable tool, after which the key body14can be removed and the loop13can be pulled out from the narrow end of the gap11. The loop may be an integrally formed portion of the pull band7as is shown inFIG.10. It is also possible to provide a loop13′ at the end of the pull band7′ which is made of a separate band, which is then attached, e.g. stitched, to the pull band7′, as is shown inFIG.11. This has the advantage that another material can be used for the band of the loop13′ than for the pull band7′. One option is to make the loop13′ as shown inFIG.11from an elastic band, which provides the advantage that when the pull band7is completely unreeled, the loop13′ deforms elastically such that the shock at the end of the stroke of the pull band7′ gets absorbed by the elastic loop13′ and no peak load is transferred via the band7′ to the user holding the handle5. InFIGS.12a-12ca spindle adapter16is shown, which can be placed on the spindle8to increase the diameter of the surface on which the pull band7is wound. The spindle adapter16comprises a reel17which is arranged around the spindle8. The reel17includes two reel halves18that are placed around the spindle8and then interconnected such that the spindle adapter16can be retrofitted to an existing apparatus1without having to disassemble the spindle8from the housing2. In the embodiment shown inFIGS.12a-12cthe reel halves18are connected on one end by a hinge19. In the particular embodiment shown the reel17can be made from plastic by means of injection moulding, and the hinge19is formed integrally on the reel halves18as a living hinge or film hinge. Opposite the hinge19a recess20is formed in the reel surface21on which the pull band7is to be reeled. In the closed state of the reel17as is shown inFIG.12bthe recess20forms a passage for the pull band7, which is secured to the spindle8in the way shown inFIG.10or11. The spindle8includes a spindle end portion22. In the embodiment shown in the figures the spindle8has two spindle end portions22. A flywheel23can be mounted on each one of the spindle end portions22as can be seen inFIG.5. In general the apparatus thus comprises two of the flywheels23, each one mounted on one of the respective spindle end portions22. The flywheels23are fastened to the respective spindle end portions22by respective fastening knobs34as will be explained further below. The flywheel23is positioned against a stop24located on the spindle8, which stop is visible inFIG.6a. The spindle end portion22has, in a possible embodiment, a substantially polygonal cross section. InFIG.6ait is visible that it has a hexagonal cross section. The flywheel23has a central aperture30with a corresponding shape which fits over the spindle end portion22. Also other shapes suitable for coupling the spindle end portion22to the flywheel23in a form-fitting manner in the rotational direction are possible, such as for example the shape shown inFIGS.13A and13B.
Therein the first set25′ of projections26′ and the second set27′ of projections have a sort of lobed shape. The flywheel has a similar lobed shaped central opening, typical for torx shapes known from screw heads, which fits over the sets25′ and27′ of projections. A first set25of one or more radial projections26, in the example three projections (see for exampleFIG.6b), are formed on the spindle end portion22. The central aperture30of the flywheel23has one or more recesses in its outer contour corresponding with the pattern of the one or more radial projections26of the first set25such that the flywheel23can be moved beyond the one or more radial projections26and against the stop24, which situation is visible inFIG.5andFIG.9. A second set27of one or more radial projections28are formed on the spindle end portion22. The second set27of radial projections28adjoins the stop24as is visible inFIG.6a. When the flywheel23is arranged against the stop24, the projections28of the second set27are received in the recesses of the aperture30of the flywheel23as is visible inFIGS.5and9. The second set27of projections28provides a form fit and thus an interlocking between the flywheel23and the spindle end portion22in the rotational direction, because the projections28of the second set27are received in the recesses of the aperture30of the flywheel23. The spindle8is supported rotatably in the housing2by means of bearings31. The bearings31conveniently are roller bearings. The spindle8has a pair of radial collars32formed on it. The respective collars32constitute a stop for the respective bearings31on the spindle8. The collars32delimit between them the wind/unwind portion of the spindle8. On the opposite side the bearings31are locked in by a bearing support portion29of the housing2. The respective bearing support portions29of the housing2are in the embodiment shown in the figures an integral part of the shells9of the housing2. A fastening knob34is releasably coupled to the spindle end portion22to lock the flywheel23on the spindle end portion22. The fastening knob34is adapted to engage the flywheel23and force it against the stop24located on the spindle8. The fastening knob34has an engagement opening35centrally in the knob34. The engagement opening35is adapted to receive the spindle end portion22. In the specific embodiments shown in the figures the fastening knob34comprises an interior body41, including the engagement opening35in the centre, and an exterior cap39which encloses the interior body41. The exterior cap39covers, in a mounted state, the spindle end portion22of the spindle8. The exterior cap39is coupled to the interior body41by snap fingers43snapping behind the edge of coupling openings44in the interior body41. As can be seen inFIGS.7aand7bin one embodiment the engagement opening35of the fastening knob34is defined by a radially inwardly extending flange36having one or more recesses37such that the recesses37of the flange36can be aligned with the radial projections26of the spindle end portion22. When the spindle end portion22is inserted in the engagement opening35and when the projections26are moved beyond the flange36, the recesses37can be misaligned with the radial projections26of the first set25by rotation of the knob34relative to the spindle end portion22. Thereby a sort of a bayonet catch is formed. The flange36has a rear side having raised formations38. 
In use, when the spindle end portion22is moved through the receiving aperture35of the knob34, the first set25of projections26on the spindle end portion22moves beyond the height of the raised formations38, and when the recesses37are then misaligned with the projections26by rotating the knob34, the projections26come to lie behind the raised formations38. Thus, when the knob34is released by the user, the projections26remain behind the raised formations38and the latter form a retaining stop for the projections26. The fastening knob34includes a spring element which resiliently forces the flywheel23against the stop24. InFIG.7bit is visible that in one embodiment the spring element is formed by an inner portion of the interior body41having integrally formed resilient tongues40extending from an inside surface42of the body41. The interior body41may conveniently be made of a plastics material such as POM by means of injection moulding and the tongues40may be formed in one piece therewith. However, the tongues may also be separate pieces, e.g. of plastic or metal, which are assembled with the interior body41. The interior body41may also be a metal piece, for example, while the exterior cap39is made of plastic, or also metal. Another embodiment is shown inFIGS.6aand6d. In this embodiment the spring element is a multiwave spring45that is assembled with the interior body41. Multiwave springs45are known per se and have the advantage of providing a constant force independent of the compression of the spring. The spring element40,45forces the projections26of the first set25into engagement with the flange36. The bayonet catch is a convenient coupling type to provide a quick coupling by a consecutive translation and rotation of the fastening knob34on the spindle end portion22, and a quick release by a rotation and translation of the knob34relative to the spindle8. Variations in the thickness of the flywheel23are absorbed by the spring elements40or45. Another fastening structure for securing the knob34on the spindle8may be as shown inFIGS.6b-6e. In this embodiment the flange36′ has a pitch such that a sort of screw thread is formed. The projections26can be inserted through recesses37′, and then the knob34can be rotated, such that the knob34is tightened against the flywheel23by the projections26sliding along the flange36′ having a pitched surface38′. InFIG.6eit can best be seen that at an end of the pitched surface38′ a recess39′ is formed into which the projection26can snap. The snap connection fixes the knob with respect to the spindle such that the spindle can be rotated by rotating the knob to wind or unwind the pull band7on or off the spindle8. The snap action is however such that when the rotation of the flywheel23or the spindle8is blocked the snap connection between projection26and recess39′ can be released for removing the knob34from the spindle end portion22. The housing2has on either of the lateral sides a covering lid50, which is shown inFIG.1where it is in a closed state, and inFIG.3where it is in an open state. The covering lid50is, in the embodiment shown, coupled to the remainder of the housing2by means of a hinging structure. Preferably the cover50is made of a transparent plastic material. The covering lid50can be opened as is shown inFIG.3to exchange the flywheel23. It is noted that inFIG.3there is no flywheel23mounted yet on the spindle end portion22. As mentioned above, the housing2is releasably coupled to the support3, preferably by a quick release coupling. An example of such a quick release coupling is illustrated inFIG.8.
The quick release coupling shown inFIG.8comprises a sliding locking body48which is partly received in and guided in an accommodation space51formed in the housing2. A biasing spring49is provided to force the locking body out of the accommodation space51, such that a free end48A can extend into a locking space52in the support3and thereby interlock the support3and the housing2. The support3is provided with a release button46, in this embodiment on an upper side47of the support3. The release button46extends into the locking space52and abuts the free end48A of the locking body48. When the release button46is pushed in, it pushes the locking body48into the accommodation space51against the biasing force of the spring49. When the locking body48is entirely pushed out of the locking space52, the housing can be detached from the support3. Thus the housing2can be decoupled from the fixed world, so as to carry it to another location. Also for exchanging the flywheel23it might be convenient to remove the housing2from the fixed support3. The locking body48is preferably also functioning as a pivot pin defining the pivot axis4. On the underside of the support3a similar release mechanism may be arranged. In use the user can attach the support3to a pole200or other support by the tensioning bands15. Then the housing2can be coupled to the support3, either with or without the flywheels23mounted to the spindle8. If the flywheels23are not yet mounted, a suitable set of flywheels can be selected by the user for performing a certain exercise. The selection can for example be made between different flywheels23having different thicknesses and/or different weights. Via the quick release knobs34the flywheels23can be quickly mounted or replaced by the user before a new exercise is started. In case the pull band7is not yet fully wound on the spindle or spindle adapter, before use of the apparatus1, the knob34may be conveniently used to turn the spindle8to wind the band7on the spindle8or on the spindle adapter17. During exercise the user grips the handle5and pulls the pull band7out of the slot6of the housing2. The spindle8and the flywheels23are thereby set in rotation. When the pull band7is fully unwound or unreeled, the flywheel mechanism remains rotating due to the inertia of the flywheels23, thereby winding the pull band7on the wind/unwind portion of the spindle8or the spindle adapter17again. During this return stroke the user then experiences a pull force which he/she has to brake by using muscle force. At the end of the return stroke the rotation is zero for one instant and then is reversed in direction when the user pulls the pull band7with the handle5again. This cycle can be repeated as long as desired. Variations in the physical exercise can be made by varying the flywheels23. According to the invention, and the possible embodiments according to the invention shown in the figures, a compact physical training apparatus is provided, which still allows a great range in exercises in view of intensity, speed, forces etc.
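As a rough, hypothetical illustration of how the choice of flywheels changes the felt intensity during the return stroke described above, the sketch below estimates the average braking force from the rotational kinetic energy stored in the flywheels. The inertia, spindle radius, pull speed, and braking distance are invented round numbers rather than values from this description.

```python
# Hypothetical illustration only: average braking force on the return stroke,
# estimated from the rotational kinetic energy stored in the flywheels.
# None of the numbers below come from the description; they are round values
# chosen to show how a heavier flywheel set increases the felt pull force.

def average_return_force(flywheel_inertia_kgm2, spindle_radius_m,
                         pull_speed_ms, braking_distance_m):
    omega = pull_speed_ms / spindle_radius_m                   # spindle angular speed at the end of the pull
    kinetic_energy = 0.5 * flywheel_inertia_kgm2 * omega ** 2  # energy stored in the rotating flywheels
    return kinetic_energy / braking_distance_m                 # energy dissipated over the braking distance

# Doubling the combined flywheel inertia roughly doubles the average force the
# user has to brake, which is why swapping flywheels varies the exercise.
for inertia in (0.01, 0.02):
    print(inertia, "kg*m^2 ->", round(average_return_force(inertia, 0.02, 1.5, 0.6), 1), "N")
```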
17,049
11857831
DETAILED DESCRIPTION Referring toFIG.1, an improved strength-training apparatus10may include an elongate shoulder bar12with a central support pad14from which two handles16project. In use, a lifter preferably places the support pad over the shoulders behind the lifter's neck, with the lifter's hands grasping the handles16firmly. In a preferred embodiment, the handles16may include a knurled or roughened surface17to prevent the lifter's grip from slipping when using the bar12. Those of ordinary skill in the art will recognize that some embodiments of the bar12may not include the support pad14and/or the handles16. The strength training apparatus10includes, at each end, a respective weight sleeve18upon which a desired amount of weight may be loaded upon the strength-training apparatus10. The weight sleeve18is preferably configured to be inserted into one or more Olympic-sized weights, which typically have central apertures of approximately two inches in diameter. Referring also toFIG.2, the weight sleeve18is laterally offset from the shoulder bar12by an adapter20having an angular adjustment interface24allowing rotation of the weight sleeve18about the longitudinal centerline of an inserted shoulder bar12, and a radial adjustment interface22which allows the weight sleeve18to be adjusted in a radial direction relative to the axis of rotation of the angular adjustment interface24. The combination of the radial adjustment interface22and the angular adjustment interface24allows a lifter to position the weights on the weight sleeve18in any one of a plurality of locations relative to the lifter's shoulders. In this manner, lifters can adjust the weights to a position that relieves stress while performing squats, or to simulate different types of squats (front, back, safety, etc.) with a single bar supported on the lifter's shoulders behind the neck. In some embodiments, adjustment of either or both of the radial adjustment interface22and the angular adjustment interface24may allow continuous adjustment to any position desired throughout a range of adjustment. In other embodiments, the radial adjustment interface22and/or the angular adjustment interface24may allow incremental adjustment to one of a plurality of fixed positions within a range of adjustment. For example, as shown inFIG.2the radial adjuster22may be an elongate member26that defines a plurality of recesses28extending in an axial direction away from the axis of rotation of the angular adjustment interface24, each of the plurality of recesses28capable of releasably and securely retaining a distal end of a weight sleeve18. Preferably, each of the recesses28of the radial adjustment interface is sized to securely retain a weight sleeve configured to be inserted into an Olympic-sized weight. In some embodiments, such as the one shown inFIG.2, the plurality of the recesses28form a contiguous slot. In such embodiments, the radial adjustment interface22may be configured to hold the weight sleeve18at a selective one of a plurality of axial positions approximately 1.5 inches from each other. ThoughFIG.2shows a radial adjuster with four such positions, other embodiments may include more or less such incremental positions. Similarly, the angular adjustment interface24may in some embodiments have a plurality of fixed angular positions about which the adapter20may rotate. 
In a preferred embodiment, for example, the angular adjustment interface24includes an aperture formed by a periphery defining a plurality of notches, each notch configured to engage an edge of a polygonal-shaped distal end30of the shoulder bar12, which inFIGS.1and2is shown as a hexagonal protrusion. Preferably, in this embodiment, the aperture includes sufficient notches to allow adjustment of the hexagonal end of the shoulder bar to at least six locations. As shown inFIG.2, there are twelve notches, allowing adjustment to twelve independent angular orientations, though one of ordinary skill in the art will recognize that any desired number of orientations may be achieved. Preferably, the angular adjustment interface24includes an end cap having a threaded connection that may be matingly received in a bore within the weight bar12to secure the angular adjustment interface24in the desired position. As can be seen inFIG.3, the combination of the radial adjustment interface22and the angular adjustment interface24allows a lifter to use the adapter20to position weights in any of a multitude of positions around the lifter's body, extending 360 degrees around the weight bar12and many at different radial distances from the weight bar12, thereby allowing a lifter to position weights at an optimal location for spinal safety, while achieving a number of different types of squats, e.g. a front squat, a back squat, a safety squat, etc. Those of ordinary skill in the art will appreciate, however, that different adapters20may limit the angular or radial orientation of the adapter20relative to the weight bar12to a desired range less than 360 degrees. It will be appreciated that the invention is not restricted to the particular embodiment that has been described, and that variations may be made therein without departing from the scope of the invention as defined in the appended claims, as interpreted in accordance with principles of prevailing law, including the doctrine of equivalents or any other principle that enlarges the enforceable scope of a claim beyond its literal scope. Unless the context indicates otherwise, a reference in a claim to the number of instances of an element, be it a reference to one instance or more than one instance, requires at least the stated number of instances of the element but is not intended to exclude from the scope of the claim a structure or method having more instances of that element than stated. The word “comprise” or a derivative thereof, when used in a claim, is used in a nonexclusive sense that is not intended to exclude the presence of other elements or steps in a claimed structure or method.
6,032
11857832
DETAILED DESCRIPTION Various example embodiments (a.k.a., exemplary embodiments) will now be described more fully with reference to the accompanying drawings in which some example embodiments are illustrated. In the figures, the thicknesses of lines, layers and/or regions may be exaggerated for clarity. Accordingly, while example embodiments are capable of various modifications and alternative forms, embodiments thereof are shown by way of example in the figures and will herein be described in detail. It should be understood, however, that there is no intent to limit example embodiments to the particular forms disclosed, but on the contrary, example embodiments are to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure. Like numbers refer to like/similar elements throughout the detailed description. It is understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., “between” versus “directly between,” “adjacent” versus “directly adjacent,” etc.). The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art. However, should the present disclosure give a specific meaning to a term deviating from a meaning commonly understood by one of ordinary skill, this meaning is to be taken into account in the specific context this definition is given herein. 
LIST OF COMPONENTS Body Stretching System1Limb Stretching Unit100Main Body110Strap Aperture111Gripping Structure120Grip Body121First Arm121aSecond Arm121bHandle122Ratchet Actuator123Release Buttons124Limb Strap125Connection Straps126Strap Connectors127Strap Fasteners128Ratchet Assembly130Ratchet Coil131Ratchet Strap132Ring Connector133Ring Fastener134Limb Attachment140Limb Ring141Rack Assembly200Rack Body210Rings220Base230Pulley300Pulley Ring310Limb Stretching Unit400Main Body410Strap Aperture411Gripping Structure420Grip Body421Handle422Ratchet Actuator423Release Buttons424Limb Strap425Strap Connector426Ratchet Assembly430Ratchet Coil431Ratchet Strap432Ring Connector433Ring Fastener434 FIG.1illustrates a side perspective view of a body stretching system1, according to an exemplary embodiment of the present general inventive concept. FIG.2illustrates an isometric top view of a limb stretching unit100, according to an exemplary embodiment of the present general inventive concept. The body stretching system1may be constructed from at least one of metal, plastic, wood, and rubber, etc., but is not limited thereto. The body stretching system1may include a limb stretching unit100, a rack assembly200, and a pulley300, but is not limited thereto. The limb stretching unit100may include a main body110, a gripping structure120, a ratchet assembly130, and at least one limb attachment140, but is not limited thereto. Referring toFIG.2, the main body110is illustrated to have a rectangular prism shape. However, the main body110may be rectangular, circular, conical, triangular, pentagonal, hexagonal, heptagonal, octagonal, or any other shape known to one of ordinary skill in the art, but is not limited thereto. The main body110may include a strap aperture111, but is not limited thereto. The strap aperture111may facilitate movement of a strap in or out of the main body110. The gripping structure120may include a grip body121, a handle122, a ratchet actuator123, a plurality of release buttons124, a limb strap125, a plurality of connection straps126, a plurality of strap connectors127, and a plurality of strap fasteners128, but is not limited thereto. The grip body121may be disposed on at least a portion of the main body110. More specifically, the grip body121may be perpendicularly disposed away from an edge of the main body110with respect to a direction. Moreover, the grip body121may have a U-shape, such that a first arm121aand a second arm121bof the grip body121extend away from the main body110. The handle122may be disposed on at least a portion of the grip body121between the first arm121aand the second arm121b. The handle122may facilitate gripping thereof. The ratchet actuator123may be movably (i.e. slidably) disposed on at least a portion of the grip body121, within grooves of the grip body121, between the first arm121aand the second arm121b, and distanced from the handle122. The ratchet actuator123may move from an original position (e.g., away from the handle122) to at least partially toward the handle122in response to an application of force (e.g., squeezing, pushing, pulling) thereto. Conversely, the ratchet actuator123may move from the handle122toward the original position based on a spring bias (e.g., a spring) that resets the ratchet actuator123to the original position. Each of the plurality of release buttons124may be disposed on at least a portion of the first arm121a, the second arm121b, the handle122, and/or the ratchet actuator123. 
In other words, at least one of the plurality of release buttons124may be disposed on at least a portion of the first arm121aand/or the second arm121b. Also, each of the plurality of release buttons124may operate by being toggled and/or held down by the user for as long as needed. The limb strap125may be removably connected to at least a portion of the grip body121and/or the main body110. The limb strap125may be removably connected via a fastener (e.g., a hook and loop fastener, an adhesive) to a limb of a user, such as a wrist, an arm, an ankle, and/or a leg. As such, the limb strap125may secure the limb of the user therein, while connected to the limb. The plurality of connection straps126may be removably connected to at least a portion of the grip body121and/or the main body110. More specifically, each of the plurality of connection straps126may connect at a first end to at least a portion of the limb strap125to connect the limb strap125to the grip body121. Each of the plurality of strap connectors127may receive and/or connect to a second end of at least one of the plurality of connection straps126. In other words, each of the plurality of connection straps126may removably connect at the second end to at least one of the plurality of strap connectors127, such that the plurality of strap connectors127may prevent the plurality of connection straps126from falling off the grip body121and/or the main body110. Referring again toFIG.2, each of the plurality of strap fasteners128is illustrated to be a carabiner. However, each of the plurality of strap fasteners128may be a clamp, a clasp, an adhesive (e.g., tape, glue), a magnet, and/or any combination thereof, but is not limited thereto. The plurality of strap fasteners128may removably connect the first end of each of the plurality of connection straps126to the limb strap125. The ratchet assembly130may include a ratchet coil131, a ratchet strap132, a ring connector133, and a ring fastener134, but is not limited thereto. The ratchet coil131may include a wheel with angled teeth to connect to a cog connected to the ratchet actuator123. Additionally, the ratchet coil131may be movably (i.e. rotatably) disposed within at least a portion of the main body110. Thus, the ratchet coil131may rotate in a first direction (i.e., clockwise) or a second direction (i.e., counterclockwise) in response to the ratchet actuator123being moved (e.g., squeezed). However, the ratchet coil131may be prevented from moving in the second direction or the first direction due to the cog. In other words, the ratchet coil131may operate as a ratchet that rotates in only one direction until released. Alternatively, the ratchet coil131may include a sensor, a motor, and a power source (e.g., a battery) to detect optimal stretching of the user, such that the ratchet coil131may automatically increase a tension level without actuation by the user. Furthermore, the ratchet coil131may release in response to depressing at least one of the plurality of release buttons124. More specifically, the ratchet coil131may be spring biased to recoil. The ratchet strap132may be disposed at a first end on at least a portion of the ratchet coil131and extend through the strap aperture111. Thus, the ratchet strap132may retract within the main body110in response to the ratchet actuator123being moved. However, the ratchet strap132may loosen and be extended in response to depressing at least one of the plurality of release buttons124. 
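The one-way behaviour described above can be summarized with a small state model. The sketch below is not from this disclosure; the tooth travel, strap capacity, and method names are hypothetical, and it only mirrors the described logic: each squeeze of the actuator winds strap onto the coil, the cog blocks pay-out, and a release button lets the spring-biased coil recoil.

```python
# Minimal sketch (hypothetical values and names): a one-way ratchet coil that
# takes up strap on each actuator squeeze, cannot pay strap back out while the
# cog is engaged, and recoils fully when a release button is pressed.

class RatchetCoil:
    def __init__(self, tooth_travel_mm=5.0, max_takeup_mm=400.0):
        self.tooth_travel_mm = tooth_travel_mm
        self.max_takeup_mm = max_takeup_mm
        self.taken_up_mm = 0.0                  # strap length currently wound onto the coil

    def squeeze_actuator(self):
        # The cog advances the coil by one tooth and blocks reverse rotation.
        self.taken_up_mm = min(self.taken_up_mm + self.tooth_travel_mm, self.max_takeup_mm)

    def press_release_button(self):
        # Releasing disengages the cog; the spring-biased coil recoils.
        self.taken_up_mm = 0.0

coil = RatchetCoil()
for _ in range(3):
    coil.squeeze_actuator()
print(coil.taken_up_mm)   # 15.0 mm of strap taken up after three squeezes
coil.press_release_button()
print(coil.taken_up_mm)   # 0.0 -- the strap can extend again
```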
The ring connector133may be disposed on at least a portion of a second end of the ratchet strap132. Referring again toFIG.2, the ring fastener134is illustrated to be a carabiner. However, the ring fastener134may be a clamp, a clasp, an adhesive (e.g., tape, glue), a magnet, and/or any combination thereof, but is not limited thereto. The ring fastener134may removably connect to an attachment. The limb attachment140may be constructed as a strap, a belt, and a door jamb knob, but is not limited thereto. The limb attachment140may include a limb ring141, but is not limited thereto. The limb attachment140may be removably connected to at least a portion of the ring fastener134. The limb attachment140may be removably connected via a fastener (e.g., a hook and loop fastener, an adhesive) to the limb of the user. As such, the limb attachment140may secure the limb of the user therein, while connected to the limb. The limb ring141may be disposed on at least a portion of the limb attachment140. The limb ring141may connect the limb attachment140to the ring fastener134. The rack assembly200may include a rack body210, a plurality of rings220, and a base230, but is not limited thereto. The rack body210may be highly durable, similar to a squat rack, a weight rack, and/or any other type of rack used for exercising. The rack body210may be connected to at least a portion of a surface, such as a wall. Alternatively, the rack body210may be disposed on at least a portion of a ground surface, such that the rack body210stands on the ground surface. The rack body210may support other exercises thereon, such as a pull-up. Each of the plurality of rings220may be another attachment point and/or inflection point, such as hooks. Each of the plurality of rings220may be disposed on at least a portion of the rack body210and distanced from each other. The base230may be disposed on at least a portion of the rack body210to support the rack body210on the ground surface. The pulley300may include a pulley ring310, but is not limited thereto. The pulley300may be removably connected to the rack body210using the pulley ring310. Moreover, the pulley300may be removably connected to the limb stretching unit100, such as the ratchet strap132. In other words, the ratchet strap132may be threaded through the pulley300. During use, the limb strap125may be connected to a first limb of the user and the limb attachment140may be connected to a second limb of the user based on which limbs and parts of the user's body the user intends to stretch. Subsequently, the user may squeeze the ratchet actuator123to retract and/or tighten the ratchet strap132, such that the ratchet strap132increases the tension level on the first limb, the second limb, and/or the body of the user. More specifically, the ratchet coil131may stretch the first limb away from and/or towards the second limb, the rack assembly200, and/or another stationary object by increasing the tension level between the first limb and/or the second limb in response to squeezing the ratchet actuator123. Alternatively, the ratchet strap132may increase the tension level in response to depressing a tension button disposed on the main body110. The ratchet actuator123and/or the handle122may be released to maintain stretching without use of hands. At least one of the plurality of release buttons124may be depressed to loosen and/or decrease the tension level, such that the user may detach the limb strap125and/or the limb attachment140. 
Therefore, the body stretching system1may facilitate stretching by the user without assistance from another person. Additionally, the body stretching system1may increase flexibility of the user, such as for diving and/or gymnastics. Also, the body stretching system1may improve recovery for the user without requiring significant time and/or cost. FIG.3illustrates an isometric top view of a limb stretching unit400, according to another exemplary embodiment of the present general inventive concept. The limb stretching unit400may include a main body410, a gripping structure420, and a ratchet assembly430, but is not limited thereto. Referring toFIG.3, the main body410is illustrated to have a rectangular prism shape. However, the main body410may be rectangular, circular, conical, triangular, pentagonal, hexagonal, heptagonal, octagonal, or any other shape known to one of ordinary skill in the art, but is not limited thereto. The main body410may include a strap aperture411, but is not limited thereto. The strap aperture411may facilitate movement of a strap in or out of the main body410. The gripping structure420may include a grip body421, a handle422, a ratchet actuator423, a plurality of release buttons424, a limb strap425, and at least one strap connector426, but is not limited thereto. The grip body421may be movably (i.e., pivotally, rotatably) disposed on at least a portion of the main body410via a hinge. More specifically, the grip body421may be perpendicularly disposed away from an edge of the main body410with respect to a direction. The grip body421may pivot in a first rotational direction or a second rotational direction opposite with respect to the first rotational direction. For example, the grip body421may pivot at least forty-five degrees. As such, the ratchet actuator423may be gripped from different positions. Moreover, the grip body421may have a U-shape. The handle422may be disposed on at least a portion of the grip body421and/or the main body410. The handle422may facilitate gripping thereof. The ratchet actuator423may be movably (i.e. slidably) disposed on at least a portion of the grip body421, within grooves of the grip body421, and distanced from the handle422. The ratchet actuator423may move from an original position (e.g., away from the handle422) to at least partially toward the handle422in response to an application of force (e.g., squeezing, pushing, pulling) thereto. Conversely, the ratchet actuator423may move from the handle422toward the original position based on a spring bias (e.g., a spring) that resets the ratchet actuator423to the original position. Each of the plurality of release buttons424may be disposed on at least a portion of the grip body421, the handle422, and/or the ratchet actuator423. Also, each of the plurality of release buttons424may operate by being toggled and/or held down by the user for as long as needed. The limb strap425may be removably connected to at least a portion of the grip body421and/or the main body410. The limb strap425may be removably connected via a fastener (e.g., a hook and loop fastener, an adhesive) to a limb of a user, such as a wrist, an arm, an ankle, and/or a leg. As such, the limb strap425may secure the limb of the user therein, while connected to the limb. The at least one strap connector426may be inserted into and/or connect to an aperture within the main body410. 
In other words, the at least one strap connector426may removably connect to the main body410, such that the at least one strap connector426may prevent the limb strap425from falling off the grip body421and/or the main body410. The ratchet assembly430may include a ratchet coil431, a ratchet strap432, a ring connector433, and a ring fastener434, but is not limited thereto. The ratchet coil431may include a wheel with angled teeth to connect to a cog connected to the ratchet actuator423. Additionally, the ratchet coil431may be movably (i.e. rotatably) disposed within at least a portion of the main body410. Thus, the ratchet coil431may rotate in a first direction (i.e., clockwise) or a second direction (i.e., counterclockwise) in response to the ratchet actuator423being moved (e.g., squeezed). However, the ratchet coil431may be prevented from moving in the second direction or the first direction due to the cog. In other words, the ratchet coil431may operate as a ratchet that rotates in only one direction until released. Alternatively, the ratchet coil431may include a sensor, a motor, and a power source (e.g., a battery) to detect optimal stretching of the user, such that the ratchet coil431may automatically increase a tension level without actuation by the user. Furthermore, the ratchet coil431may release in response to depressing at least one of the plurality of release buttons424. More specifically, the ratchet coil431may be spring biased to recoil. The ratchet strap432may be disposed at a first end on at least a portion of the ratchet coil431and extend through the strap aperture411. Thus, the ratchet strap432may retract within the main body410in response to the ratchet actuator423being moved. However, the ratchet strap432may loosen and be extended in response to depressing at least one of the plurality of release buttons424. The ring connector433may be disposed on at least a portion of a second end of the ratchet strap432. Referring again toFIG.3, the ring fastener434is illustrated to be a carabiner. However, the ring fastener434may be a clamp, a clasp, an adhesive (e.g., tape, glue), a magnet, and/or any combination thereof, but is not limited thereto. The ring fastener434may removably connect to an attachment. It is important to note that the limb stretching unit400may replace and/or be used instead of the limb stretching unit100based on a preference of the user. The present general inventive concept may include a body stretching system1, including a limb stretching unit100, including a main body110, a gripping structure120disposed on at least a portion of the main body110to removably connect to a first limb of a user, and a ratchet assembly130disposed within at least a portion of the main body110and connected to the gripping structure120to removably connect to a second limb of the user and stretch the first limb away from the second limb by increasing a tension level between the first limb and the second limb in response to moving the gripping structure120, and a rack assembly200removably connected to the limb stretching unit to facilitate stretching. 
The gripping structure120may include a grip body121, a handle122disposed on at least a portion of the grip body121to facilitate gripping thereof, a ratchet actuator123disposed on at least a portion of the grip body121to stretch the first limb away from the second limb in response to moving the ratchet actuator123toward the handle122, and a plurality of release buttons124disposed on at least a portion of the grip body121to release the ratchet assembly130in response to being depressed. The ratchet actuator123may be spring biased to return to its original position. The gripping structure120may further include a limb strap125removably connected to at least a portion of the grip body121to removably connect to the first limb of the user, such that the limb of the user remains connected to the grip body121after the handle122is released. The ratchet assembly130may include a ratchet coil131disposed within at least a portion of the main body110to rotate in a first direction in response to moving the gripping structure120, and rotate in a second direction only in response to being released, and a ratchet strap132disposed on at least a portion of the ratchet coil131to connect to the second limb of the user. The ratchet coil131may include a sensor and a motor to detect optimal stretching of the user, such that the ratchet coil automatically increases the tension level without actuation by the user. Although a few embodiments of the present general inventive concept have been shown and described, it will be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the general inventive concept, the scope of which is defined in the appended claims and their equivalents.
21,528
11857833
DESCRIPTION OF EMBODIMENTS The present disclosure will be explained hereinafter through embodiments according to the present disclosure. However, the below-shown embodiments are not intended to limit the scope of the present disclosure specified in the claims. Further, not all of the components/structures described in the embodiments are necessarily indispensable for solving the problem. For clarifying the explanation, the following description and the drawings are partially omitted and simplified as appropriate. The same reference numerals (or symbols) are assigned to the same elements throughout the drawings and redundant explanations thereof are omitted as appropriate. First Embodiment An exercise apparatus according to an embodiment is a foot-pedaling exercise apparatus by which a user performs a foot-pedaling exercise. An exercise apparatus100according to this embodiment will be described with reference toFIGS.1and2.FIGS.1and2are side views of the exercise apparatus100. Note that, for clarifying the explanation, the following description is given while using an XYZ 3D (three-dimensional) orthogonal coordinate system. Specifically, the +X direction is the forward direction; the −X direction is the rearward direction; the +Y direction is the upward direction; the −Y direction is the downward direction; the +Z direction is the leftward direction; and the −Z direction is the rightward direction. The front-rear direction, the left-right direction, and the up-down direction are directions based on the direction of a user U. The exercise apparatus100is one in which the movable ranges of ankle joints can be adjusted. In the following description, the rotational direction of an ankle joint about the Z-axis is referred to as a plantar/dorsi-flexion direction and the angle thereof is referred to as a plantar/dorsi-flexion angle. More specifically, a direction in which the toe of a foot FT points downward is referred to as a plantar-flexion direction, and a direction in which the toe points upward is referred to as a dorsiflexion direction. As shown inFIG.1, the exercise apparatus100includes a main-body part20, links30, a crank40, and tilt tables50. A chair10is provided behind the exercise apparatus100. A user U performs a foot-pedaling exercise while sitting on the chair10. Therefore, the chair10serves as a sitting part on which the user U sits. Note that the chair10may be provided integrally with the exercise apparatus100(i.e., provided as a part of the exercise apparatus100), or may be provided as separate equipment. For example, the chair10may be a chair present in an institution where the user U is present, the user's house, or the like. That is, the user U or his/her assistant may place such a chair10behind the exercise apparatus100. Note that, in the exercise apparatus100, the components attached to the main-body part20are symmetrical in the left-right direction. InFIG.2, in order to distinguish the components on the left side of the main-body part20from those on the right side thereof, the components on the left side are indicated by a suffix “L” and those on the right side are indicated by a suffix “R”. For example, inFIG.2, the left tilt table50is referred to as a tilt table50L, and the right tilt table50is referred to as a tilt table50R. Similarly, the left link30and the left pedal31are referred to as a link30L and a pedal31L, respectively, and the right link30and the right pedal31are referred to as a link30R and a pedal31R, respectively. 
Similarly, the left foot FT is referred to as a left foot FTL, and the right foot FT is referred to as a right foot FTR. Note that, in the following description, when the left and right components are not distinguished from each other, the suffixes L and R are omitted. The main-body part20rotatably holds the crank40. For example, a rotation shaft21is provided in the main-body part20. The crank40is connected to the rotation shaft21. The crank40rotates about the rotation shaft21. The main-body part20may include a resistive load member that gives a load to the rotational movement of the crank40. Further, the main-body part20may include a gear or the like that changes the amount of the load. The main-body part20may be fixed to a floor surface. Each of the links30includes a pedal31and a sliding wheel35. The crank40is connected to the front end of each of the links30, and the sliding wheel35is connected to the rear end of the link30. The crank40and the links30are rotatably connected to each other. For example, each of the links30is attached to the crank40with a bearing or the like interposed therebetween. The pedal31is attached to the middle of the link30. The pedal31is a step (a footrest) on which the user U puts his/her foot FT. The user U, who sits on the chair, puts the feet FT on the pedals31. The sliding wheel35is attached to the link30through a rotation shaft (an axle). That is, the link30rotatably holds the sliding wheel35. The sliding wheel35serves as a moving member that moves on an inclined surface51of the tilt table50(In this specification, the meaning of the term “sliding” includes movements in which the sliding wheel35moves on the surface while rotating thereon). The user U puts his/her feet FT on the pedals31and performs a foot-pedaling exercise. That is, the user U moves his/her knee joints and the hip joints so that the user U presses the pedals with his/her feet FT. In this way, the crank40rotates about the rotation shaft21. Further, the angle between each of the links30and the crank40changes according to the rotation of the crank40. That is, the relative angle of each of the links30with respect to the crank40changes according to the rotation angle of the crank40(which is also referred to as a crank angle). Further, the sliding wheel35moves in the front-rear direction while remaining in contact with the inclined surface51. In this way, the crank40and each of the links30are rotated in such a manner that the pedal31moves along an elliptical trajectory according to the foot-pedaling motion. Note that the pedal31, the sliding wheel35, the link30, the crank40, and the tilt table50are provided for each of the left and right feet FT of the user U. That is, the pedal31, the sliding wheel35, the link30, the crank40, and the tilt table50are provided on each of the left and right sides of the main-body part20. The pedal31R, the sliding wheel35R, the link30R, the tilt table50R, and the like provided on the right side of the main-body part20correspond to the right foot FTR of the user U. The pedal31L, the link30L, and the tilt table50L provided on the left side of the main-body part20correspond to the left foot FTL of the user U. The cranks40are attached to the rotation shaft21of the main-body part20in such a manner that the phases thereof for the left and right feet FT are opposite to each other. That is, the rotation angle of the crank40for the left foot and that of the crank40for the right foot are shifted from each other by 180°. 
The user U performs a foot-pedaling exercise by stretching and bending the left and the right legs in an alternating manner. The sliding wheel35is attached to the lower end of the link30. The sliding wheel35has a wheel that slides on the inclined surface of the tilt table50. The tilt table50has the inclined surface which is inclined so that the tilt table50becomes higher toward the rear thereof. The sliding wheel35performs a reciprocating movement in the X-direction (the front-rear direction) according to the rotational movement of the link30. As shown inFIG.1, while the user U performs a foot-pedaling motion by stretching the right leg and bending the left leg, the sliding wheel35on the right side moves forward and the sliding wheel35on the left side moves rearward. As shown inFIG.2, while the user U performs a foot-pedaling motion by stretching the left leg and bending the right leg, the sliding wheel35on the left side moves forward and the sliding wheel35on the right side moves rearward. The height of the sliding wheel35changes along the inclined surface of the tilt table50. The inclined surface of the tilt table50becomes higher toward the rear thereof. That is, the tilt table50becomes an upslope for the sliding wheel35that is moving rearward. Therefore, while the sliding wheel35is moving rearward, the position of the sliding wheel35is gradually raised. On the other hand, while the sliding wheel35is moving forward, the position of the sliding wheel35is gradually lowered. The angle of the link30is determined according to the height of the sliding wheel35. Note that the angle of the pedal31disposed in the link30is restricted according to the height of the sliding wheel35. That is, when the sliding wheel35is raised, the pedal31rotates in the plantar-flexion direction. When the sliding wheel35is lowered, the pedal31rotates in the dorsiflexion direction. Therefore, it is possible to adjust the movable range of the plantar/dorsi-flexion angle of the ankle joint according to the inclination angle of the tilt table50. It is possible to adjust the movable range of the plantar/dorsi-flexion angle of the ankle joint according to the rotation angle of the crank40. This feature will be described hereinafter with reference toFIGS.3and4.FIGS.3and4are side views schematically showing the configuration of the exercise apparatus100.FIG.3shows a configuration of the exercise apparatus100in which the tilt table50is provided, andFIG.4shows a configuration thereof in which no tilt table50is provided. InFIG.3, the height of the sliding wheel35changes along the inclined surface51of the tilt table50. The angle of the link30changes according to the height of the sliding wheel35. Since the foot FT is put on the pedal31disposed in the link30, the joint angle of the foot FT changes according to the angle of the link30. As the sliding wheel35moves rearward, the sliding wheel35is raised and the ankle joint rotates in the plantar-flexion direction. Further, as the sliding wheel35moves forward, the sliding wheel35is lowered and the ankle joint rotates in the dorsiflexion direction. According to this embodiment, it is possible to adjust the movable range of the ankle joint in the plantar/dorsi-flexion direction according to the inclination angle of the tilt table50. That is, each user U can perform a foot-pedaling exercise at an ankle-joint angle(s) suitable for that user U. In contrast, inFIG.4, since no tilt table50is provided, the height of the sliding wheel35is constant. 
That is, even when the sliding wheel35moves rearward, the height of the sliding wheel35does not change. Therefore, in the configuration shown inFIG.4, it is difficult to adjust the movable range of the ankle joint in the plantar/dorsi-flexion direction on a user-by-user basis. In this embodiment, since the tilt table50, on which the sliding wheel35moves, is provided, the movable range in the plantar/dorsi-flexion direction can be easily adjusted. That is, it is possible to set an optimum movable range according to the user U. Specifically, by making the tilt table50movable in the front-rear direction, it is possible to change the relation between the position of the sliding wheel35in the X direction and the height of the sliding wheel35. In this way, it is possible to easily change and adjust the movable range. For example, it is possible to adjust the ankle-joint angle in the plantar-flexion direction by moving the tilt table50forward. Further, the ankle-joint angle is adjusted in the dorsiflexion direction by moving the tilt table50rearward. For example, in the case of an elderly user, the tilt table50may be set so that the movable range of the ankle joint is reduced. It is possible to reproduce the plantar/dorsi-flexion movement of an ankle similar to the motion thereof during actual walking, and therefore to reproduce a motion similar to that performed in actual walking in rehabilitation. The ankle is dorsiflexed when the knee is extended (i.e., the leg is stretched) in the swing-leg state, and the ankle is plantar-flexed in the second half of the stance-leg state. Further, when the swing leg is switched, the ankle is immediately dorsiflexed. By using the tilt table, it is possible to reproduce the motion of the ankle performed during actual walking by the exercise apparatus100. Further, it is possible to determine which of the plantar-flexion region or the dorsiflexion region of the ankle is mainly moved. For example, assume an example case where the user U is a patient who feels pain when his/her ankle is dorsiflexed and feels no pain when the ankle is plantar-flexed. Although it is difficult for this user U to perform a dorsiflexion motion, he/she can easily perform a plantar-flexion motion. Therefore, the user U can move the ankle joint within a range in which he/she feels no pain. Accordingly, the user U can perform rehabilitation without anxiety. The movable range of the ankle-joint angle when the tilt table50is moved forward or rearward will be described hereinafter in detail with reference toFIG.5.FIG.5is a side view schematically showing the main part of the exercise apparatus100. InFIG.5, the rotation shaft21about which the crank40rotates relative to the main-body part20is referred to as a rotation shaft A, and the rotation shaft at the connecting part between the crank40and the link30is referred to as a rotation shaft B. Further, the axle of the sliding wheel35is referred to as a rotation shaft C. Further, the distance from the rotation axis A to the front end of the tilt table50in the X direction is represented by L. It is assumed that a horizontal floor surface52is provided in front of the inclined surface51. It is assumed that when the distance L is smaller than Lmin, the sliding wheel35moves on the inclined surface51in the whole range of crank angles. When the distance L is larger than Lmax, the sliding wheel35moves on the horizontal floor surface52in the whole range of crank angles. 
That is, when the distance L is larger than Lmax, the configuration of the exercise apparatus100is the same as the configuration in which no tilt table50is provided (i.e., the configuration shown inFIG.4), and the height of the sliding wheel35is constant at all times. When the distance L is neither smaller than Lmin nor larger than Lmax, the sliding wheel35moves on the inclined surface51in a part of the range of crank angles and moves on the horizontal floor in the remaining part of the range of crank angles. Note that Lmin and Lmax are determined according to the lengths of the crank40and the link30. In an XY-plane view, the distance L becomes Lmin when all the rotation shafts A, B and C are located on one straight line; the length AC is minimized (AC=BC−AB) (which is indicated by the positions of B′ and C′ inFIG.5); and the sliding wheel35is in contact with the front end of the tilt table50. In the XY-plane view, the distance L becomes Lmax when all the rotation shafts A, B and C are located on one straight line; the length AC is maximized (AC=BC+AB); and the sliding wheel35is in contact with the front end of the tilt table50. Here,FIGS.6to11show results of simulations that are performed under the condition that the inclination angle of the tilt table50is set to 24.5°.FIGS.6and7show results in cases where the sliding wheel35moves on the horizontal floor surface, i.e., where the distance L is larger than Lmax.FIGS.8and9show results in cases where the sliding wheel35moves on the tilt table50in a part of the range of crank angles, i.e., where the distance L is neither smaller than Lmin nor larger than Lmax.FIGS.10and11show results in cases where the sliding wheel35moves on the tilt table50at all times, i.e., where the distance L is smaller than Lmin. Each ofFIGS.6,8, and10is a graph showing changes in the hip-joint angle, the knee-joint angle, the ankle-joint angle, and the angle of the pedal31. In each ofFIGS.6,8and10, the horizontal axis indicates the crank angle. Each ofFIGS.7,9and11shows a trajectory of a representative point of the step (the pedal31) on the XY-plane. Note that results of simulations that were performed under the condition that Lmax=425.5 mm and Lmin=259.8 mm are shown. InFIGS.6and7, results of simulations in which L=450 mm are shown. InFIGS.8and9, results of simulations in which L=350 mm are shown. InFIGS.10and11, results of simulations in which L=250 mm are shown. As shown inFIGS.6to11, it is possible to change the movable range of the ankle joint by moving the tilt table50forward or backward. In other words, it is possible to change the position of the tilt table50in the front-rear direction according to the state of the ankle joint of the user U. The user U can effectively perform a foot-pedaling exercise. For example, in the case of a rehabilitation patient or an elderly person, the movable range of the ankle joint may be smaller than that of healthy people. For such users, the position of the tilt table50in the front-rear direction is determined so that the movable range is reduced. Further, even for the same user U, it is possible to adjust the movable range according to the condition of the user U. For example, it is possible to adjust the movable range according to the level of recovery of a rehabilitation patient. Note that, in the above description, the movable range of an ankle joint was adjusted by moving the tilt table50in the front-rear direction. However, the method for adjusting the movable range is not limited to this example method. 
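Before the alternative adjustment methods are described, the following sketch summarizes the role of the distance L using the collinearity relations above. It is an illustration only: it treats the horizontal reach of the wheel axle C as simply the in-line length AC, ignoring the shaft heights and wheel radius that the 425.5 mm and 259.8 mm values also reflect, and the crank and link lengths used are invented numbers chosen so the bounds land near the simulated ones.

```python
# Illustrative sketch (invented crank/link lengths): classify how much of the
# crank cycle the sliding wheel spends on the inclined surface for a given
# front-edge distance L. Approximation: the reach of axle C is taken as the
# in-line length AC, so L_min and L_max here are only rough analogues of the
# simulated 259.8 mm and 425.5 mm values.

def classify_table_position(crank_AB_mm, link_BC_mm, L_mm):
    L_min = link_BC_mm - crank_AB_mm   # shafts A, B, C collinear, AC minimized
    L_max = link_BC_mm + crank_AB_mm   # shafts A, B, C collinear, AC maximized
    if L_mm > L_max:
        return "horizontal floor for all crank angles (as in FIGS. 6 and 7)"
    if L_mm < L_min:
        return "inclined surface for all crank angles (as in FIGS. 10 and 11)"
    return "inclined surface for part of the crank cycle (as in FIGS. 8 and 9)"

# Invented lengths: crank AB = 83 mm, link BC = 343 mm, giving bounds of about
# 260 mm and 426 mm; the three L values match the simulated table positions.
for L in (450, 350, 250):
    print(L, "mm ->", classify_table_position(83, 343, L))
```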
For example, a plurality of tilt tables50having different inclination angles may be prepared. It is possible to adjust the inclination angle by using the plurality of tilt tables50that can be replaced with one another. A user U, an assistant, or the like can adjust the movable range by replacing the tilt table50with one having an appropriate inclination angle. The ankle-joint angle can be adjusted in the plantar-flexion direction by replacing the tilt table50with one having a larger inclination angle. The ankle-joint angle can be adjusted in the dorsiflexion direction by replacing the tilt table50with one having a smaller inclination angle. Alternatively, the tilt table50may be divided into a plurality of blocks, and the movable range may be adjusted by changing the number of blocks and/or the size of blocks. For example, the movable range can be adjusted by stacking a plurality of blocks on top of one another. Needless to say, the movable range may be adjusted by combining two or more of the above-described adjustment methods with each other. Note that although the inclination angle of the tilt table50is constant in the drawings, the inclination angle of the tilt table50may be changed as desired. For example, the inclined surface51may be formed as a curved surface such as a concave surface or a convex surface. That is, in an XZ-plane view, the inclined surface51may not be a straight line, but may be a curved line such as a curved line according to a quadratic function. In this way, it is possible to set the movable range of an ankle-joint angle more finely. Further, at least one of the front-rear position, the inclination angle, and the shape of the left tilt table50L may be different from that of the right tilt table50R. For example, in the case of a patient having an injury in his/her left leg, it is more difficult for the patient to move the ankle of the injured left leg than to move the ankle of the uninjured right leg. In such a case, the patient can do rehabilitation while adjusting the movable range of the injured leg to a range smaller than that of the uninjured leg. Alternatively, the patient can do rehabilitation while adjusting the movable range of the injured leg to a range larger than that of the uninjured leg. Note that although the sliding wheel35is provided as a moving member that moves on the tilt table50in the above description, a moving member other than the sliding wheel35may be used. For example, a slide member that slides on the tilt table50may be used as the moving member. That is, the moving member may slide on the tilt table50rather than rotating thereon. Further, a material having a high friction coefficient may be used for at least one of the inclined surface51and the moving member. That is, a frictional resistance may be given between the inclined surface51and the moving member. In this way, it is possible to increase the load to the foot-pedaling exercise, so that a user can perform an effective exercise. Further, the resistive force by the friction may be a directional resistive force. For example, the resistive force to the forward movement of the moving member may be different from the resistive force to the rearward movement thereof. In this way, it is possible to adjust the load to the foot-pedaling exercise more finely. Second Embodiment An exercise apparatus100according to another embodiment will be described with reference toFIG.12.FIG.12is an XY-plane view schematically showing a configuration of the main part of the exercise apparatus100. 
In this embodiment, an adjustment member38is added. The configuration other than the adjustment member38is similar to that of the first embodiment, and therefore the description thereof will be omitted. The adjustment member38is disposed between the pedal31and the link30. The adjustment member38is a wedge-like member. The wedge angle α of the adjustment member38is, for example, 25°. By inserting the adjustment member38between the pedal31and the link30, the pedal31can be inclined in the dorsiflexion direction. Since the ankle-joint angle changes according to the angle of the disposition of the pedal31, the ankle joint can be inclined in the dorsiflexion direction at an angle larger than that in the first embodiment. Further, it is possible to adjust the ankle-joint angle by preparing a plurality of adjustment members38having different angles. An assistant or the like may replace (i.e., select) the adjustment member38according to the user U. For example, the assistant or the like can further incline the ankle joint in the dorsiflexion direction by replacing the adjustment member38with one having a larger wedge angle α. Needless to say, the adjustment member38may be disposed so that the ankle-joint angle is inclined in the plantar-flexion direction. For example, the wedge-like adjustment member38may be inserted in the opposite direction. Further, the shape of the adjustment member38is not limited to the wedge-like shape. That is, the adjustment member38may have various shapes. Here,FIGS.13to18show results of simulations that are performed under the condition that the inclination angle of the tilt table50is set to 24.5° and the wedge angle α is set to 25°.FIGS.13and14show results in cases where the sliding wheel35moves on the horizontal floor surface, i.e., where the distance L is larger than Lmax.FIGS.15and16show results in cases where the sliding wheel35moves on the tilt table50in a part of the range of crank angles, i.e., where the distance L is neither smaller than Lmin nor larger than Lmax.FIGS.17and18show results in cases where the sliding wheel35moves on the tilt table50at all times, i.e., where the distance L is smaller than Lmin. Each ofFIGS.13,15, and17is a graph showing changes in the hip-joint angle, the knee-joint angle, the ankle-joint angle, and the angle of the pedal31. In each ofFIGS.13,15and17, the horizontal axis indicates the crank angle. Each ofFIGS.14,16and18shows a trajectory of a representative point of the step (the pedal31) on the XY-plane. Note that results of simulations that were performed under the condition that Lmax=425.5 mm and Lmin=259.8 mm are shown. InFIGS.13and14, results of simulations in which L=450 mm are shown. InFIGS.15and16, results of simulations in which L=350 mm are shown. InFIGS.17and18, results of simulations in which L=250 mm are shown. As shown inFIGS.14,16, and18, the positions of the representative points of the pedal31are changed as compared to those in the first embodiment. Therefore, it is possible to incline the ankle-joint angle in the dorsiflexion direction at an angle larger than that in the first embodiment. As described above, by providing the adjustment member38, a user can perform an exercise at an appropriate ankle-joint angle. From the disclosure thus described, it will be obvious that the embodiments of the disclosure may be varied in many ways. 
Such variations are not to be regarded as a departure from the spirit and scope of the disclosure, and all such modifications as would be obvious to one skilled in the art are intended for inclusion within the scope of the following claims.
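As a small numerical illustration of the second embodiment's wedge-shaped adjustment member, the sketch below offsets a pedal-angle range by the wedge angle α. The sign convention (positive toward dorsiflexion) and the sample baseline range are assumptions, not values from the description; the only stated value used is the 25° example wedge angle.

```python
# Illustration only (assumed sign convention: positive = dorsiflexion).
# Inserting a wedge of angle alpha between pedal and link shifts the whole
# pedal-angle curve, and hence the ankle range, toward dorsiflexion by alpha.

WEDGE_ANGLE_DEG = 25.0                              # the example wedge angle from the text

def with_wedge(pedal_angle_deg, alpha_deg=WEDGE_ANGLE_DEG):
    return pedal_angle_deg + alpha_deg

baseline_range_deg = (-20.0, 10.0)                  # hypothetical plantar..dorsi range without the wedge
shifted = tuple(with_wedge(a) for a in baseline_range_deg)
print(shifted)                                      # (5.0, 35.0): same span, shifted toward dorsiflexion
```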
24,728
11857834
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT The present invention will now be described more specifically with reference to the following embodiments. It is to be noted that the following descriptions of preferred embodiments of this invention are presented herein for purposes of illustration and description only; they are not intended to be exhaustive or to be limited to the precise form disclosed. FIG.2Aschematically illustrates a multi-directional workout wheel device. The workout wheel device includes a main body consisting of a base frame20and a top cover21, and a gliding mechanism consisting of a plurality of multi-directional rotating members22. Preferably but not necessarily, the contour of the top cover21conforms to that of the base frame20, and the main body is symmetrically shaped. Meanwhile, the multi-directional rotating members22are disposed between the base frame20and the top cover21and distributed evenly in order to keep the main body balanced and stable. In this embodiment, a plurality of multi-directional rotating members22are installed substantially at corners of the main body. Alternatively, the multi-directional rotating members may be installed at proper positions other than corners. It is also feasible to install a single multi-directional rotating member, for example, at the center of the main body. Under this circumstance, additional elements known to those skilled in the art may be optionally included to balance or stabilize the device. Furthermore, each of the multi-directional rotating members may be implemented with a ball, as shown inFIG.2A, or a set of balls (not shown). In the embodiment shown inFIG.2A, the main body is of a triangle-like shape, and three balls are allocated at the three corners of the main body, respectively. It is understood that other rotatable structures may also be used instead of balls to serve as the multi-directional rotating members as long as they are capable of rotating in multiple directions under control of the user. In this embodiment, the base frame20, the top cover21and the balls22may be made of high strength engineering plastics with good mechanical properties. For example, the base frame20and the top cover21are made of polypropylene (PP), and the relatively smooth balls22are made of polyethylene (PE). FIGS.2B and2Cschematically illustrate the top and the bottom of the base frame20, respectively. As shown inFIG.2B, the base frame20includes a chassis201and three through holes202evenly distributed at three corners of the chassis201for accommodating three balls22. The size of each through hole202should be large enough for the ball22to freely rotate therein while being small enough to block the ball22from escaping therefrom. For example, when the through holes202are circular, a diameter of each of the through holes202is slightly smaller than a diameter of the corresponding ball22accommodated therein. In the specifically exemplified embodiment as shown inFIG.2A, each of the through holes202is confined by a tapered wall2020. That is, the surrounding wall2020is tapered so that the through hole202has a larger top opening2021and a smaller bottom opening2022, and the diameter of the bottom opening2022is slightly smaller than that of the corresponding ball22accommodated therein. In this way, the ball22can freely rotate in the through hole202without escaping from the base frame20. In this embodiment, the through holes202are shaped as partially cropped and reversed cones. 
Alternatively, the through holes202may be shaped as partially cropped and reversed pyramids such as triangular pyramids, quadrangular pyramids, pentagonal pyramids, hexagonal pyramids, etc. In the above-described embodiment, it is desirable that the diameter of each of the through holes202is slightly smaller than the diameter of the corresponding ball22accommodated therein in order to retain the ball22in the through hole202. Nevertheless, it is also feasible to have the diameter of the through hole202greater than the diameter of the corresponding ball22as long as a confining mechanism is provided in the through hole202to retain the ball22in the through hole202. The confining mechanism, for example, may be implemented with several flexible members protruding inwards from the wall of the through hole202or an annular brush installed on the wall of the through hole202. The annular brush is additionally beneficial to sweep the rotating ball22. Furthermore, the chassis201includes a plurality of supporting ribs2010, which are integrally formed with a base plate2000of the chassis201and properly distributed to strengthen the chassis201. The chassis201further includes a plurality of coupling members2011, which are also integrally formed with the base plate2000, for combination with corresponding coupling members2111of the top cover21. The coupling members2011and2111, for example, may be threaded holes, which are connected together with screws (not shown). Please refer toFIGS.2D,2E and2F, which schematically illustrate top and bottom configurations of the top cover21assembled to the base frame20. The top cover21includes a cover body211and a plurality of recesses212. The recesses212are formed on the cover body211at the top side to facilitate enhancement of the flexural strength of the top cover21. Furthermore, when in use, the user holds the cover body211with his palm, and meanwhile, puts his fingers into the recesses212(seeFIG.2E), so that the workout wheel device can be firmly grabbed and stably operated. In addition, as shown inFIG.2F, the cover body211includes a plurality of supporting ribs2110, which are integrally formed with a base plate2100of the cover body211at the bottom side and properly distributed to strengthen the cover body211. The cover body211further includes the plurality of coupling members2111for combination with corresponding coupling members2011of the base frame20, as mentioned above, and three dome structures2112aligned with the through holes202when the top cover21is assembled to the base frame20. In the space between one of the dome structures2112and a corresponding one of the through holes202, one of the three balls22is accommodated and confined. For clearly showing the dome structures2112, the base frame20is flipped over inFIG.3A. In this embodiment, each of the dome structures2112is implemented with a set of curved ribs21120, which are also integrally formed with the base plate2100at the bottom side, radially extend from a common topmost center, and has a size adapted to receive one of the balls22therein, as shown inFIG.3B. In the operational state, the balls22partially protrude from corresponding through holes202, as shown inFIG.3C, to be in contact with and rotatable on the working plane or slope.FIG.3Cfurther illustrates the coupling members2011to be combined with the coupling members2111by way of screws300. It is to be noted that the use of screws and threaded holes as the coupling means is an example given for illustration only. 
Any other suitable coupling means such as bolts or tenons may also be used. Furthermore, the positions and amounts of the coupling members2011and2111may vary with practical conditions as long as they can be firmly assembled. In the example shown inFIG.3C, three sets of coupling means, each consisting of two pairs of coupling members2011and2111, are used for assembling the base frame20and the top cover21. Alternatively,FIG.3Dexemplifies a variation of the coupling means, in which three pairs of coupling members2011and2111are disposed near the three vertices, respectively. When the workout wheel device is placed onto the working plane or slope, the balls22partially drop off from the bottom openings2022of the through holes202. Afterwards, when the user holds the workout wheel device with his palm and pushes the top cover21downwards, the bottom surfaces211200of the curved ribs21120are in contact with the outer surfaces of the balls22so as to cause friction between the curved ribs21120and the balls22. The user thus needs to exert a force beyond the frictional force to move the wheel device forwards or backwards, thereby achieving the purpose of exercise. Subsequently, by manually changing the direction and/or magnitude of the force exerted onto the top cover21with the user's hand, the level of the friction between the curved ribs21120and the balls22, as well as the balls22and the working plane or slope, can be changed. In addition, the frictions occurring at different ball-dome pairs can also be locally adjusted by changing the point of force exerted onto the top cover21with the user's hand. For example, by exerting a proper magnitude of backwards and downwards force onto the outmost ball-dome pair, the ball22can be stopped from rotation without additional braking means. For achieving the above-described objects, it is desirable that the workout wheel device is made of a proper material so that the friction between the dome structures and the balls does not hinder the rotation of the balls22when the pressing force is substantially evenly exerted onto the three ball-dome pairs. On the other hand, the friction occurring in a specified one of the three ball-dome pairs can be locally increased if the user adjusts the force vector to push harder against the specified ball-dome pair so as to stop the rotation of the corresponding ball22, and meanwhile, stop the gliding of the entire device. According to a further aspect of the present invention, a workout wheel kit, which includes a base frame20, a set of balls22, and a plurality of replaceable top covers21, each having a conformable size to the size of the base frame20, can be provided. The plurality of top covers21have similar structures, but the materials, amounts and/or total area of the curved ribs211200or the integral top covers21are different. Different materials result in different frictions relative to the balls22. Depending on the required level of friction, a proper one of the top covers21is selected to be assembled to the base frame20. Alternatively, a workout wheel kit, which includes a base frame20, a top cover21, and plural sets of balls22, can be provided according to the present invention. The plural sets of balls are made of different materials so as to have different frictions relative to the curved ribs211200of the top cover21. Depending on the required level of friction, a proper one of the plural sets of balls22is selectively used between the base frame20and the top cover21. 
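The role of friction described above can be illustrated with a simple calculation. The following is a minimal sketch, not part of the disclosed device, assuming plain Coulomb friction and an illustrative friction coefficient; it only shows how pressing harder on a ball-dome pair raises the horizontal force needed to keep the device gliding, which is the braking effect described above.

```python
# Minimal sketch (not from the patent): estimating the horizontal push needed to
# glide the wheel device, assuming simple Coulomb friction between the curved
# ribs, the balls, and the working surface. The coefficient value is illustrative only.

def required_push_force(downward_force_n: float, friction_coefficient: float) -> float:
    """Return the approximate horizontal force (N) needed to overcome friction."""
    if downward_force_n < 0 or friction_coefficient < 0:
        raise ValueError("inputs must be non-negative")
    return friction_coefficient * downward_force_n


if __name__ == "__main__":
    # Pressing evenly with about 50 N; a smooth PE ball against PP ribs might see
    # a low coefficient (assumed value), so the device glides with little effort.
    print(round(required_push_force(50.0, 0.05), 2))   # ~2.5 N to keep gliding
    # Pushing harder against one ball-dome pair (say ~150 N locally) raises the
    # local friction enough to act as a brake, as the description suggests.
    print(round(required_push_force(150.0, 0.05), 2))  # ~7.5 N resisting motion
```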
Generally, two workout wheel devices41and42are used at the same time, as shown inFIG.4. That is, the user stays in a kneeling or prone position and holds and manipulates the two workout wheel devices41and42with his two hands, respectively. Since the workout wheel device according to the present invention can be moved in multiple directions due to the free rotation properties of the balls, diverse body parts or muscles can be selectively trained. In addition, for storage, the two workout wheel devices41and42can be stacked, as illustrated inFIG.5, and a soft anti-skid matrix43may be placed between the two workout wheel devices41and42to keep the stack stable. While the invention has been described in terms of what is presently considered to be the most practical and preferred embodiments, it is to be understood that the invention need not be limited to the disclosed embodiments. On the contrary, it is intended to cover various modifications and similar arrangements included within the spirit and scope of the appended claims, which are to be accorded the broadest interpretation so as to encompass all such modifications and similar structures.
11,677
11857835
DETAILED DESCRIPTION OF THE EMBODIMENTS Detailed embodiments of a facial and neck exercising and stimulating device are disclosed herein; however, it is to be understood that the disclosed embodiments are merely illustrative of the facial and neck exercising and stimulating device that may be embodied in various forms. In addition, each of the examples given in connection with the various embodiments of the facial and neck exercising and stimulating device is intended to be illustrative, and not restrictive. Further, the drawings and photographs are not necessarily to scale, and some features may be exaggerated to show details of particular components. In addition, any measurements, specifications and the like shown in the figures are intended to be illustrative, and not restrictive. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the Device. With reference toFIGS.2-4, an embodiment of a facial and neck exercising and stimulating device is illustrated. The device1includes a pair of modiolus holders2, a pair of handles3, and two bars4. According to embodiments, the handles3of device1are shaped so as to provide easy gripping by a user's hands. Bars4may be permanently or releasably connected to each other in a rotatable fashion via rotatable connection7. The connection of the bars4may generally form an X shape, as illustrated byFIGS.2-4. However, the bars4may form other shapes, such as parallel lines (seeFIG.9B), an H shape, or a W shape, so long as the shape they form allows for the separation of the modiolus holders. The bars4of device1may be symmetrical to each other relative to the longitudinal axis of the device1. The handles3and the modiolus holders2may similarly be symmetrical to each other relative to the longitudinal axis of the device1. As further illustrated byFIGS.2-4, each of the bars4has a proximal and distal end. The proximal end of each bar4is connected to or otherwise secured with a modiolus holder2, and the distal end of each bar4is connected to or otherwise secured with a handle3. With reference toFIG.3, the portion of each bar4positioned between the proximal end and rotatable connection7is referred to as the upper arm8. Each upper arm8has an upper arm length L1. The portion of each bar4positioned between the distal end and the rotatable connection7is referred to as the lower arm9. Each lower arm9has a length L2. The net force (e.g., pulling or pushing force) F2exerted on lower arms9(e.g., by pulling handles3apart) produces a net force F1at modiolus holders2. The aforementioned lengths and forces are related to each other in accordance with the following formula: L1/L2=F2/F1. Accordingly, by varying lengths L1and L2, the L1/L2ratio can be modified, which allows for a variation in the proportionality of forces F2and F1. Thus, by modifying the lengths, the device is configured to provide varying levels of force to the modiolus holders2. According to an illustrative embodiment of the invention, modiolus holders2are shaped and configured to secure and engage with the modiolus areas of the face of the user in a safe manner. According to one embodiment, the modiolus holders2have an anatomical shape with an exterior surface resembling a mitten (See, e.g.,FIG.5). 
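The lever relationship stated above (L1/L2=F2/F1) can be rearranged to give the force delivered at the modiolus holders, F1=F2·L2/L1. The short sketch below is illustrative only; the function name and the numeric lengths and forces are assumptions, not values from the disclosure.

```python
# Minimal sketch of the lever relationship stated above (L1/L2 = F2/F1),
# i.e. F1 = F2 * L2 / L1. The numeric values are illustrative assumptions.

def force_at_modiolus_holders(f2_handles_n: float, l1_upper_arm_m: float,
                              l2_lower_arm_m: float) -> float:
    """Force F1 delivered at the modiolus holders for a handle force F2."""
    if l1_upper_arm_m <= 0:
        raise ValueError("upper arm length must be positive")
    return f2_handles_n * l2_lower_arm_m / l1_upper_arm_m


if __name__ == "__main__":
    # A longer lower arm (handle side) relative to the upper arm multiplies the
    # force reaching the face; shortening it attenuates the force instead.
    print(force_at_modiolus_holders(f2_handles_n=10.0, l1_upper_arm_m=0.05,
                                    l2_lower_arm_m=0.15))  # 30.0 N
    print(force_at_modiolus_holders(f2_handles_n=10.0, l1_upper_arm_m=0.10,
                                    l2_lower_arm_m=0.05))  # 5.0 N
```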
Such a shape evenly distributes the force applied to the modiolus areas while also minimizing the risk of impingement of the adjacent soft tissues, mitigating the risk of irritation of the engaged skin, and preventing damage to the oral mucosa. With reference toFIG.5, the portion of each modiolus holder2that, when in use, is positioned outside of the user's mouth and in close contact with the skin is referred to as the thumb10. The portion of each modiolus holder2placed inside of the user's mouth and in contact with the oral mucosa of the mouth is referred to as the hand11. The thumb10and hand11connect to one another at the location of base12. An inferior space, in the form of an anterior-posterior groove13, is created between the hand11and the thumb10, which accommodates the corner of a mouth. Groove13is usually deeper and projects more downwardly in the posterior direction. An angle formed between the center line of the device22(as shown inFIG.2) and the center26of groove14is between approximately 10° and 45°. Both the thumb10and the hand11extend from the front to the back side of the device1, forming a shell-like shape, and the exact dimensions of the thumb10and the hand11may be sized according to the individual user's mouth. According to an alternative embodiment, the dimensions of the modiolus holder2, and thus the thumb10and hand11, may be standardized (e.g., according to average anatomical dimensions) and come in a variety of sizes. According to a preferred embodiment, the hand11is longer and has a larger surface area than the thumb10. With reference toFIG.6, the hand11and the thumb10are each provided with a plurality of surfaces. The surface of the thumb10that engages with the skin and the surface of the hand that engages with oral mucosa (i.e., the inside of the mouth) are referred to as the inward surfaces17. The surfaces of the thumb10and the hand11that oppose the inward surfaces17are referred to as the outer surfaces18. The thumb10and hand11each further include curved side surfaces19that connect the inward surfaces17with the outer surfaces18. A part of the outer surface that is farthest from the geometrical center of the rotatable connection7of the bars4is referred to as the top surface20. The outer surfaces of the hand11and the thumb10are convexly shaped, and the inward surfaces of the hand11and the thumb10are concavely shaped, which generally attenuates at top surface20. The concavity of the inward surface of the hand11is greater than that of the inward surface of the thumb10so as to provide a better anatomical engagement between the modiolus holders2and the modiolus areas of the face. A protuberance21, which may vary in size, may be placed on the inward surface17of the hand11for better retention of a modiolus. According to the preferred embodiment, the surface of the modiolus holders2is covered with or made from a biocompatible material. Such biocompatible materials include, for example, high molecular weight polyethylene, biocompatible polished ceramics, pure titanium or biocompatible titanium alloys. Alternatively, the modiolus holder can be made of a non-biocompatible material or partially biocompatible material and covered with a disposable cap or film made of a biocompatible material. For example, if the device is to be used by a single individual, there is no concern for cross contamination between users, and thus a disposable cap or film is unnecessary. 
However, when the device is to be used by multiple users, the disposable cap or film provides a means to prevent any cross contamination between users. Additionally, the surface of the modiolus holders2or parts thereof may be covered or otherwise coated with an antimicrobial material/layer in order to prevent skin or mucosal bacteria from growing. According to a further embodiment of the invention, and with reference toFIG.7, the device may include a user configurable stop system23, which allows a user to select a maximum allowed separation of handles3, and thus separation of modiolus holders2. The device can further include a spring system24configured to help maintain this separation and provide a degree of flexibility for handles3to move relative to one another, allowing for a more physiological and organic functioning of the device. Although spring system24is illustrated as including a spring, the spring system may employ alternative elements (e.g., a pair of opposing polarity magnets). It should be noted that stop system23may not limit a user's capacity to manually adjust the force applied by the device to the modiolus area of the face. According to certain embodiments of the invention, the device1may further include a force gauge (not shown) that measures the force applied by the device1. Such measurements may be monitored in real time in order to track use of the device1and to better aid in long term exercise regimes. Additionally, the force gauge may be configured to transmit such measurements, either wirelessly or via a wire, to a remote device, such as a user's smartphone, laptop, or smartwatch, in real time. According to a further embodiment of the invention, and with reference toFIG.8, the device may include a slide mechanism31that extends and retracts, reflecting the contractions that the muscles are exerting. The device can further include a fixation element32to which weights, bands, tubes, etc. may be attached to aid in providing the necessary pulling force. Methods of using the device1include placing the modiolus holders2at locations on the face and within the mouth such that the hand11and thumb10of the modiolus holders2contact and grasp the modiolus areas of the face. The handles3are then moved away from each other such that the angle at rotatable connection7between bars4is increased, which increases the distance between the modiolus holders2. With these muscles secured, the device1can be angled upwardly or downwardly to target muscles of the upper, central or lower face, and pulled away from the face, generating a force, by the device, to stretch these muscles beyond their regular capacity. The user then resists the pulling force by contracting their facial and neck muscles, thereby stimulating these muscles to contract beyond their regular capacity. By repeating these steps the user is able to exercise the muscles of the face and neck, thereby strengthening them and counteracting sarcopenia. According to further embodiments, as illustrated byFIG.9A, the device may be a part of an exercising system40. The system includes horizontal bars41and vertical bars42that are slideable relative to one another. The device is configured to be attached to the vertical bars42, as illustrated byFIG.9A, via attachment means43, such as a ball joint socket. A vertical stop49in conjunction with slide adaptor50can be employed to adjust and lock the height of the device. 
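As one illustration of the real-time force-gauge transmission described above, the following minimal sketch packages a reading as JSON and pushes it over a UDP socket to a listening device. The field names, the placeholder address, and the choice of UDP are assumptions made for this example; the disclosure does not specify a transport or data format.

```python
# Minimal sketch (assumptions, not the disclosed implementation): packaging a
# force-gauge sample as JSON and pushing it to a remote device (e.g., a phone
# app listening on a UDP port) so the applied force can be monitored in real time.
import json
import socket
import time


def send_force_sample(sock: socket.socket, address: tuple, force_newtons: float) -> None:
    sample = {
        "timestamp_s": time.time(),   # when the reading was taken
        "force_n": force_newtons,     # force currently applied by the device
    }
    sock.sendto(json.dumps(sample).encode("utf-8"), address)


if __name__ == "__main__":
    receiver = ("127.0.0.1", 9000)   # placeholder for the user's phone or laptop
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for reading in (2.5, 3.1, 4.0):   # stand-in gauge readings in newtons
            send_force_sample(sock, receiver, reading)
            time.sleep(0.1)
```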
As described in greater detail above with reference toFIGS.2-6, the device includes a pair of modiolus holders44, and two bars, each having a distal mechanism45. The bars may include hand rests46. A stop system47is attached to the device, and acts as a stop as described above with regard toFIG.7. Additionally, in order for multiple users to utilize exercising system40, modiolus holders44may be detachable, and attach to the device via attachment means48. According to an alternative embodiment, as illustrated byFIG.9B, exercising system40′ includes a device having two parallel bars (as opposed to an X shape as depicted inFIG.9A). In order to operate the exercising system40,40′, and as illustrated inFIGS.9A and9B, the user will adjust the height of the device so that the modiolus holders44sit at a height corresponding to the corners of the mouth, by sliding the device along the vertical bars42utilizing a sliding adaptor50and locking it at the chosen height using vertical stop49. Then the modiolus holders44are introduced into the user's mouth, and stop system47will guide the bars of the device, moving them horizontally so as to separate the modiolus holders44and lock them into an ideal position to secure the modiolus areas of the face within respective modiolus holders44. While all this action is taking place, the device is in a neutral position in regard to the forces necessary for the exercise to take place. Once the modiolus holders44are secured, the device is then angulated. For example, the device can be angled up or down via attachment means43and the angle can be locked in and stabilized via stop mechanism53to keep the proper angle for a chosen exercise, as illustrated inFIG.9D. According to one preferred embodiment, joint head56can angulate about the socket until a preferred angle is achieved, and stop mechanism53is used to lock in the angle. Once the angle is locked in, pulling forces are then applied to the device by the use of, for example, cables and weights51,52,54, or the like. The application of the pulling forces may be done by incremental augmentation until the proper tension is set and locked by the amount of weight used, or by other means of maintaining the proper tension of the device. After the pulling force is set, the user may start to contract the muscles that have been activated by the device, thereby exercising their face and neck muscles. As discussed above, in order to provide the necessary pulling force to aid in exercising the face and neck muscles, a weight51is attached to the device, via, for example, a cable52that runs through distal mechanism45. A stop mechanism53can be implemented, in conjunction with a distal arm spring54, to provide a means to monitor the applied force and to prevent excessive force from being applied. The exercise system40may further include a safety mechanism (not shown) to immediately deactivate the system in case the user is not properly positioned to use the device. According to alternative embodiments, an exercise system60may be mounted on a single horizontal bar, as illustrated byFIG.10. According to these embodiments, an adjustable chair or other vertical orienting means is employed. For example, instead of vertically orienting the device in relation to the user, the chair can be used to set the proper height for the use of the device. The distance of the chair to the equipment may also be adjusted, and security straps may also be added to the chair. 
As further illustrated byFIG.10, exercise system60may be a fully encased system, where only the bars and modiolus holders are exposed. For example, stop system47and attachment means43may be placed within a secured housing such that they are not exposed to the user. According to a variation thereof, the housing may encase a substantial portion of the bars, having slots at its front end such that the bars can move away from one another. In such embodiments, controls may be conveniently located next to the user's hands. According to certain variations of the embodiments discussed above, the modiolus holders may be configured to angulate about the bars, via attachment means58. This will facilitate adjustment of the modiolus holders, allowing the user to better customize the device to their anatomy so that the modiolus areas are better secured by the device. Additionally, pieces of the device may be modular. For example, the bars, modiolus holders, and handles may be modular components configured to attach to one another to form the device. Such a configuration allows for the device to be broken down and easily carried. With reference toFIG.11, an alternative way of spreading the crossed bars as well as securing the pair of modiolus holders44is illustrated. According to an embodiment, by using cables141, springs142, and a tension control knob143, a user is able to calibrate a desired tension for an exercise. The springs142can be replaced with stronger ones as the user progresses with the exercises. The figure also depicts modified handles144, in which each of the handles144includes a respective hand-sized hole. The figure also depicts respective sets of stacked weights145, which can be hung on each holder44for extra resistance. The weights145can be replaced with heavier ones if desired. Further, the device can also include a pair of wrist straps146for safety, which can be attached, e.g., via an eyelet, to the handles144. FIG.12illustrates the device ofFIG.11with a curved expansion bar formed by two sliding half bars as well as a centered locking mechanism147, which can be used to set the proper opening of the device as desired by the user. The device can also include a bilaterally-located resistance mechanism148at each end of the curved expansion bar that, when rotated, controls the amount of resistance desired by the user. FIG.13illustrates the resistance mechanism148in more detail. According to an embodiment, marks on a turning knob can guide the user to augment the resistance as desired. Further, the resistance mechanism can be replaced if/when the forces exerted by the user's muscles increase with the device's use. FIG.14illustrates an alternative embodiment of the exercising system40. As depicted in the figure, for exercise system150, the exemplary device can be mounted on a horizontal bar151, which sits on a vertical stand/support that can house other components used for the function of the device. According to an embodiment, the horizontal bar151can include an expansion system of the device, which allows the user to set the distance necessary to maintain the pair of modiolus holders44engaged at the corners of the mouth. The system150can also include a locking mechanism152to secure the pair of modiolus holders44to the corners of the mouth by sliding two overlapping half bars that are pressed at their distal ends against a spring mechanism attached to a junction box154. The system150can also include condylar junctions155, which can be connected to each distal end of the device. 
In particular, the condylar junctions155are located at the entrance of each junction box154, at each end of the expansion mechanism. These joints can be seen in greater detail inFIG.15. Further, the joints can be CV joints or any other type of joint that may be connected either to a cable or to some other type of mechanism (so long as it allows for the joint's movement and the application of measurable resistance). The system150can also include device handles156, which allow the user to hold the device, e.g., for support and movement guidance. Further, the system can also include a control panel157(electronic and/or mechanical) which allows the user to adjust the expansion of the device (e.g., to have the modiolus holders44adjusted to the modiolus area), to control the resistance necessary for each user, and to also control the angles necessary for the performance of the exercise. According to an embodiment, the control mechanism may also be placed on or in the vicinity of the handles156to facilitate the control of the device by the user. As such, the exemplary device may present two sets of controls, e.g., one for the user and another for a possible trainer. The system150can also include a chair158. According to an embodiment, the chair158can be centered in front of the equipment and is linked to it by rails or any other guiding system. Further, the chair158can allow for the proper distance between the user and the equipment to be set as desired. It can also be moved up and down to adjust to the patient's height as well as the exercise being performed. Further, the chair158can include straps (not shown) to keep the user's head and body in the ideal position for the exercises. Further, the system150can include an enclosure that keeps the exemplary device protected/encased while it is not in use. According to an embodiment, the system150can perform the same functions as the system40as well as the device1but with mechanisms that can substitute for the work of the arms and hands of the user. For example, spring systems, gears, CV joints, condylar joints, weights, rails, cables and tension control systems can be used to provide resistance for the use of the device as well as mimic the work done by the arms and hands of the user. Further, the resistance system chosen can be controlled electronically or manually. The user may be able to access all necessary controls on a front panel. Further, computers and other electronics can be applied to the equipment to help maximize its efficiency and performance. For example, robotic arms and hands can be used to perform the above. FIG.15illustrates sagittal and front views, respectively, of the junction box154. As depicted in the figure, each junction box154includes sets of cables160. According to an embodiment, one set of cables160is located above a rolling mechanism161that runs on rails162, while the other set of cables160is located below the rolling mechanism161. The movement of the cables160permits the device to expand, allowing the modiolus holders44to secure both corners of the mouth. Further, the rolling mechanism161allows for the cable160connected to a CV joint166(or any other type of joint) to move when pulling forces are applied to the device. On its distal end, it is attached to springs and/or weights, and at its mesial end, it is connected to the CV joint166. According to an embodiment, the CV joints166help the device to be angled, mimicking the movement of the hands and arms of the user. 
Further, as the cables160exit each junction box154through the back, they are then joined together in connection163via a spring164. After the spring connection, the cables160can be run through a pulley165that can be connected to weights or any other means of resistance. According to an embodiment, all the forces applied to the function of the equipment can be connected to a control panel that can be used by the user or the potential trainer. It can also function mechanically or electronically. A display panel to monitor the status of the user's performance may also be connected to, or otherwise incorporated into, any of the aforementioned exercise systems. Further, according to an embodiment, at least one pressure sensor can be included in any of the modiolus holders described above. The pressure sensors can measure the amount of pressure being applied by the modiolus holders on the modiolus muscle of the user as well as the pressure being applied by the modiolus muscles on the modiolus holders. According to an embodiment, the results of the pressure sensors as well as other performance metrics associated with the device can be transmitted to the display panel via a communication network. The communications network can be comprised of, or may interface to any one or more of, for example, the Internet, an intranet, a Local Area Network (LAN), a Wide Area Network (WAN), a Metropolitan Area Network (MAN), a storage area network (SAN), a frame relay connection, an Advanced Intelligent Network (AIN) connection, a synchronous optical network (SONET) connection, a digital T1, T3, E1 or E3 line, a Digital Data Service (DDS) connection, a Digital Subscriber Line (DSL) connection, an Ethernet connection, an Integrated Services Digital Network (ISDN) line, a dial-up port such as a V.90, a V.34 or a V.34bis analog modem connection, a cable modem, an Asynchronous Transfer Mode (ATM) connection, a Fiber Distributed Data Interface (FDDI) connection, a Copper Distributed Data Interface (CDDI) connection, or an optical/DWDM network. The communications network can also comprise, include or interface to any one or more of a Wireless Application Protocol (WAP) link, a Wi-Fi link, a microwave link, a General Packet Radio Service (GPRS) link, a Global System for Mobile Communication (GSM) link, a Code Division Multiple Access (CDMA) link or a Time Division Multiple Access (TDMA) link such as a cellular phone channel, a GPS link, a cellular digital packet data (CDPD) link, a Research in Motion, Limited (RIM) duplex paging type device, a Bluetooth radio link, or an IEEE 802.11-based radio frequency link. Communications networks can further comprise, include or interface to any one or more of an RS-232 serial connection, an IEEE-1394 (Firewire) connection, a Fibre Channel connection, an infrared (IrDA) port, a Small Computer Systems Interface (SCSI) connection, a Universal Serial Bus (USB) connection or another wired or wireless, digital or analog interface or connection. As is evident from the above disclosure, the present invention has applications in an array of fields including, but not limited to, dentistry, plastic surgery, general exercise, physical therapy, and physical education. The above-described device and methods of implementing the device are meant to be illustrative, and alternative devices and methods are within the scope of this disclosure. For example, device1may be used in other fields besides facial and neck stimulation, such as gynecology or proctology.
24,329
11857836
DETAILED DESCRIPTION A general description of aspects of the invention followed by a more detailed description of specific examples of the invention follows. A. General Description of Various Aspects of the Invention 1. Individual Training, Coaching, and/or Equipment Fitting Aspects and Features At least some aspects of this invention relate to golf swing evaluation systems and methods for providing individual training, coaching, and/or equipment fitting information to a user. Golf swing evaluation systems and methods according to at least some examples of this invention may include one or more of the following: (a) a first sensor system for detecting golf swing dynamics information generated by a first user during one or more golf swings; (b) a second sensor system for detecting ball flight information when one or more golf balls are hit by the first user; (c) a transmission system for transmitting data to a swing analysis system (optionally at a location remote from the first user), the data transmitted by the transmission system corresponding to or being derived from the information collected by the first and second sensor systems; and (d) an output system for providing coaching, training, and/or equipment fitting information for the first user, wherein at least some of this information provided by the output system is generated by the swing analysis system or derived from data generated by the swing analysis system. Additionally, a memory may be provided for receiving data generated by the first and second sensor systems and storing the data before transmission to the swing analysis system. Optionally, if desired, a separate swing analysis system may be eliminated and/or at least some of the data processing involved in systems and methods according to examples of this invention may take place on board the equipment used in sensing the golf swing dynamics information and/or the ball flight information (e.g., in data processing systems (e.g., microprocessors) provided with any shoe based sensor(s), club based sensor(s), user carried sensor(s), apparel based sensor(s), glove based sensor(s), ball flight monitor sensor(s), etc.). Thus, at least some local data processing is possible before and/or without sending data to a separate swing analysis system. Such systems and methods further may include one or more alignment systems, e.g., for providing information to the first user regarding: (a) a preferred or target golf ball flight direction, (b) a golf ball start or tee location with respect to a location at least some portion of the first or second sensor systems, (c) a stance set up location with respect to a ball launch location (which may depend, at least in part, on a golf club being used by the first user for that individual swing, some aspect of the user's size, etc.), and/or (d) proper alignment or positioning of at least one of the first sensor system or the second sensor system with respect to at least one of a first user stance position or a golf ball start location. At least some portions of these alignment systems may be stationary (e.g., fixed in a driving range/golf ball hitting bay) or portable (e.g., carried by the golfer or a caddie, worn by the golfer or a caddie, carried on a golf cart, carried on a golf bag, etc.). 
The alignment system(s) may include any devices or methods to assist in alignment such as: at least one light generating device that projects light to provide the alignment information (e.g., at the surface on which the user stands); at least one laser generating device that projects a laser beam to provide the alignment information (e.g., at the surface on which the user stands); a series of lights visible at a surface on which the user stands when hitting golf balls; a grid system on a surface on which the user stands when hitting balls; one or more lines visible at a surface on which the user stands when hitting golf balls (e.g., permanently provided on or projected onto the surface on which the user stands); etc. The sensor system for detecting golf swing dynamics information generated by a user during one or more golf swings may determine any desired parameter(s) without departing from this invention, including one or more of: foot force exertion or foot pressure by one foot (at one or multiple locations of the foot, optionally throughout the golf swing); foot force exertion or foot pressure by both feet (at one or multiple locations of each foot, optionally throughout the golf swing); weight shift or center of gravity location information; center of pressure information on one or both feet (e.g., a ratio of weight on the two feet, etc.); golf club position information; golf club speed or velocity information (optionally, at least at and around ball impact); golf club acceleration information; golf club movement path direction information (optionally, at least at and around ball impact); golfer hand position, speed, acceleration, or movement path information; golfer shoulder or torso position, speed, acceleration, or movement path information; golf grip pressure and/or pressure change information (due to hands gripping the club, optionally for one or both hands); etc. Any types of detectors or sensors may be used without departing from this invention, such as accelerometers, motion detectors, infrared detectors, pressure or force sensors, gyrometers, magnetometers, etc. Also, this sensor system may include one or more video cameras arranged to record the golf swing, e.g., from behind the golfer, from a “face on” location with respect to the golfer, from overhead, etc., and/or to record the club head/ball contact. At least some data generated by the various golf swing dynamics sensor systems (and even all of the generated data) may be time stamped (e.g., to allow comparisons over time and/or to allow correlation with other collected data, such as the ball flight information for that same swing). In particular, in at least some example systems and methods according to this invention, foot force data, club/swing data, and/or body based sensor data will be time stamped and/or otherwise stored so as to allow correlation of the collected data with respect to time. The sensor system for detecting ball flight information also may determine any desired parameter(s) without departing from this invention. 
Examples of the detected or determined parameters may include, for example, any one or more parameters determined by golf ball launch monitoring systems, such as initial ball launch angle, initial ball launch speed, initial ball launch spin (e.g., absolute spin (e.g., in RPMs) and/or spin direction), initial ball launch direction, projected or actual ball carry distance, projected or actual ball roll distance, projected or actual ball travel distance, projected or actual ball apex height, projected or actual ball apex location distance, projected or actual ball to ground impact angle, golf club head speed at a ball contact time, “smash factor” (e.g., ratio of ball launch velocity to club head velocity at impact with the ball), golf club head movement path direction at a ball contact time, projected or actual ball flight deviation from center (or from a predefined path), golf ball flight curvature, etc. Golf ball launch monitoring systems that may be used for at least some example aspects of this invention are conventionally known in the art. At least some data generated by the various ball flight sensor systems (and even all of the generated data) may be time stamped (e.g., using a common clock with that used for the golf swing dynamics sensor system(s) mentioned above) to allow time correlation of the collected data. If desired, at least one of the golf swing dynamics sensor system or the ball flight sensor system may audio record a sound generated when the golf ball is struck. This data may be useful to a human swing analyzer and/or a computer based swing analysis system to provide feedback on the quality of the golf club head/ball contact (e.g., to enable a determination of whether the ball was hit after the ground surface was hit, the squareness of the hit, the face location of the hit on the club head, etc.). Any desired type of data transmission system and method may be used without departing from this invention, including wired or wireless transmission, optionally over a networked connection (such as the Internet). Data transmission capabilities may be provided in any desired hardware associated with the analysis systems and methods, including, for example: in one or both shoes worn by the user when hitting golf balls; engaged with a surface on which the user stands when hitting golf balls (e.g., in a driving range bay); in a golf club used for hitting golf balls; in an article of apparel worn by the user when hitting golf balls; as part of a golf ball hitting bay; engaged with a golf cart (a self-propelled or user propelled golf cart); engaged with a golf bag; provided with a portable electronic device (such as a cellular telephone, a PDA, a GPS device, etc.); provided with a personal computer; etc. Systems and methods according to the invention may provide output in any desired manner without departing from this invention. As some more specific examples, the output systems may include a display screen or other output device(s) (such as a television, computer monitor, cellular telephone, portable electronic device, etc.) for displaying audio, video, and/or a textual information; a tactile sensation creating device (such as electrodes, sharpened elements, vibratory elements, etc.), e.g., to change the tactile sensations experienced by the user during the course of a golf swing and/or to cause a reflexive action by the golfer during the course of a golf swing; a tempo providing device (such as a metronome or other patterned audio information); etc. 
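Two of the points above lend themselves to a short worked example: the "smash factor" (the ratio of ball launch velocity to club head velocity at impact) and the use of a common clock to correlate a ball-flight measurement with the nearest time-stamped swing-dynamics sample. The sketch below uses illustrative numbers and assumed field names.

```python
# Short sketch of two points from the passage above: (1) the "smash factor"
# (ball launch speed divided by club head speed at impact) and (2) pairing a
# ball-flight measurement with the swing-dynamics sample nearest in time on the
# shared clock. Field names and numbers are illustrative assumptions.

def smash_factor(ball_speed_mph: float, club_head_speed_mph: float) -> float:
    if club_head_speed_mph <= 0:
        raise ValueError("club head speed must be positive")
    return ball_speed_mph / club_head_speed_mph


def nearest_sample(samples: list, target_time_s: float) -> dict:
    """Pick the time-stamped swing sample closest to the ball-contact time."""
    return min(samples, key=lambda s: abs(s["t_s"] - target_time_s))


if __name__ == "__main__":
    print(round(smash_factor(148.0, 100.0), 2))  # 1.48 for a well-struck driver

    swing_samples = [{"t_s": 10.02, "club_speed_mph": 92.0},
                     {"t_s": 10.04, "club_speed_mph": 99.5},
                     {"t_s": 10.06, "club_speed_mph": 100.0}]
    print(nearest_sample(swing_samples, target_time_s=10.055))  # sample at t=10.06
```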
The output may include any desired coaching or training information (made available to the player and/or his/her coach), such as swing tips; swing advice; training drills; swing demonstrations by a third party; comparisons of the user's swing with “standard information” (such as a comparison with swing or club positioning of another player, comparison with the swing tempo of another player, etc.); comparisons of the user's swing against his/her swing at a different time (e.g., before lessons were undertaken, to show improvement, to show reversion to old habits or form, etc.); etc. Additionally or alternatively, the output may include equipment adjustment, equipment recommendation, and/or equipment fitting information. If desired, at least one set of golf swing data is generated using the golf swing dynamics information and/or the ball flight information generated during a single golf swing by the first user. Also, if desired, at least some portions of the first sensor system, the second sensor system, the transmission system, and/or the output system may be portable so as to allow use during play of a round of golf (e.g., to enable the golfer to receive a “playing lesson” or to record swings during actual play (as opposed to just on the range)). When used for equipment fitting, systems and methods according to at least some examples of this invention may provide information to the user (e.g., the golfer, an equipment fitting professional, a coach, a trainer, another third party, etc.) via the output system that includes: golf club lie angle information (including recommendations for changes to an existing golf club lie angle); golf club face angle information (including recommendations for changes to an existing golf club face angle); golf club loft angle information (including recommendations for changes to an existing golf club loft angle); golf ball selection information (including recommendations to try a different golf ball model); golf club selection information for woods or irons (including recommendations to try a different club head make, model, or other parameter); golf club shaft information (including recommendations for different shaft models, different shaft characteristics (such as flex, kick point location, materials, etc.)); golf club apparel information (including recommendations to try different shoes, traction element patterns, gloves, clothing, etc.); etc. Additional features of this aspect of this invention relate to methods of operating and/or using the golf swing evaluation systems described above to provide individual training, coaching, and/or equipment fitting information (e.g., clubs, balls, shoes, apparel, etc.) to a user (e.g., suggested club lie, loft, and/or face angles; shaft recommendations (e.g., flex, kickpoint, materials, etc.); ball specifications (e.g., make, model, type, hardness, etc.); footwear traction element or spike types and/or patterns; etc.). Such methods may include at least some steps performed by a computer (such as receiving input data, transmitting output data, collecting sensor data, storing data, processing data, generating output, displaying output, etc.). Aspects of this invention also may relate to hardware for performing and steps performed by systems and methods of the invention in a client/server based computer arrangement, and features of the invention may be practiced solely at the client side, solely at the server side, or at both the client and server sides. 
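Before turning to the community aspects, the data flow just described, in which swing-dynamics data and ball-flight data are collected per swing, buffered in memory, and transmitted for analysis, can be illustrated with a minimal sketch. All class names, field names, and values are assumptions for illustration, not the patent's data format.

```python
# Minimal sketch (names and fields are assumptions, not the patent's API): bundling
# one swing's data from the swing-dynamics sensor system and the ball-flight sensor
# system, buffering it in memory, and producing a payload for a transmission step.
from dataclasses import dataclass, field, asdict
import json
from typing import List


@dataclass
class SwingRecord:
    swing_id: int
    timestamp_s: float
    swing_dynamics: dict      # e.g., foot pressure traces, club speed, grip pressure
    ball_flight: dict         # e.g., launch angle, launch speed, spin, carry distance


@dataclass
class SwingBuffer:
    records: List[SwingRecord] = field(default_factory=list)

    def add(self, record: SwingRecord) -> None:
        self.records.append(record)          # store before transmission

    def to_payload(self) -> str:
        return json.dumps([asdict(r) for r in self.records])  # data to transmit


if __name__ == "__main__":
    buffer = SwingBuffer()
    buffer.add(SwingRecord(
        swing_id=1,
        timestamp_s=1_700_000_000.0,
        swing_dynamics={"club_head_speed_mph": 95.0, "lead_foot_peak_force_n": 820.0},
        ball_flight={"launch_angle_deg": 12.5, "ball_speed_mph": 138.0},
    ))
    print(buffer.to_payload())   # would be sent to the (possibly remote) analysis system
```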
Still additional aspects of this invention may relate to computer readable media that include computer executable instructions stored thereon for operating the hardware systems and/or performing the methods described above (and described in more detail below). 2. Golf Statistical Community and Hub Aspects and Features At least some aspects of this invention relate to collection of golf data from a plurality of players (a “community” of golfers or players) and providing feedback or other information to individuals within the community based at least in part on the collected information from this community. Such golf community systems and methods may include, for example: (a) an input system for receiving golf statistical data from a community of golfers including a first golfer; (b) a storage system for storing golf statistical data received from the community of golfers; and (c) an output system for transmitting information to the first golfer, wherein the information transmitted to the first golfer via the output system includes: (i) statistical information for the first golfer and (ii) statistical information for at least a first portion of the community of golfers. Such systems and methods also may receive input from and generate output based on information obtained from other sources as well, such as the USGA or other handicap maintenance organizations; one or more golf courses (e.g., scorecard information, daily tee locations, daily pin placements, yardages, hole handicaps, slope, course rating information, etc.); map data; professional (or other player) tips for playing individual holes (e.g., from PGA Tour players or PGA of America instructors); advertisements and other third party information; etc. The golf statistical data obtained from the first golfer via the input system in some example systems and methods allows determination and/or display of one or more of the following: a golf score for an individual hole played during a round of golf; a golf score for a plurality of holes played during a round of golf; a golf score for all holes played during a round of golf; a number of fairways hit from a tee shot during a round of golf; a number of fairways missed left from a tee shot during a round of golf; a number of fairways missed right from a tee shot during a round of golf; a number of fairways missed short from a tee shot during a round of golf; a number of fairways missed long from a tee shot during a round of golf; a number of greens in regulation hit during a round of golf; a number of putts played during a round of golf; an average number of putts played per green hit in regulation during a round of golf; a number of sand saves made during a round of golf; a number of penalty strokes incurred during a round of golf; an overall length of putts made during a round of golf; and a number of times making a score of par or better when missing a green in regulation during a round of golf. Some of this data may be determined automatically, using GPS and/or map data and/or based on sensor input (e.g., a club sensor detecting contact with a ball). 
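One way to hold the per-round statistics listed above is a simple record per golfer per round, from which derived values can be computed. The sketch below is illustrative; the field names and the putts-per-hole helper are assumptions, not the hub's actual schema.

```python
# Minimal sketch (field names are assumptions) of a per-round statistics record
# covering items listed above, with one illustrative derived value.
from dataclasses import dataclass


@dataclass
class RoundStats:
    golfer_id: str
    course: str
    holes_played: int
    score: int
    fairways_hit: int
    fairways_missed_left: int
    fairways_missed_right: int
    greens_in_regulation: int
    putts: int
    sand_saves: int
    penalty_strokes: int

    def putts_per_hole(self) -> float:
        """Illustrative derived value: average putts per hole for this round."""
        return self.putts / self.holes_played if self.holes_played else 0.0


if __name__ == "__main__":
    rnd = RoundStats(golfer_id="golfer-1", course="Example GC", holes_played=18,
                     score=82, fairways_hit=8, fairways_missed_left=3,
                     fairways_missed_right=3, greens_in_regulation=7, putts=31,
                     sand_saves=2, penalty_strokes=1)
    print(round(rnd.putts_per_hole(), 2))  # 1.72
```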
As some more specific examples, the various sensors may be relied upon to determine, at least in part, when a player has gone out of bounds, number of fairways hit (or missed) from the tee, number of fairway misses left, number of fairway misses right, number of fairways misses short, number of fairway misses long, number of greens hit (or missed) in regulation, number of greens missed left, number of greens missed right, number of greens missed short, number of greens missed long, number of bunkers hit, percentage of sand saves, percentage of successful “up and downs,” number of putts, number of strokes, number of times using each club, distances of each shot, etc. Optionally, if desired, some of the necessary data or information may be entered into the system manually by the player (e.g., during play) and/or the player may be given an opportunity to override or correct any automatically generated data (e.g., to add penalty strokes, correct erroneously determined data, etc.). As noted above, output systems according to this example aspect of the invention may provide statistical information to the user for at least a portion of the community of golfers. This “portion” of the community of golfers may include any desired number of members that input data to or use the system up to and including all golfers that utilize the golf community system. As some more specific examples, the “portion” of the community for which statistical information is provided to users may include information for golfers within the community having a handicap within a predetermined range of a handicap of the first golfer (e.g., within ±1 point of the user's current handicap) or information for golfers included in a user defined sub-community (e.g., golfers identified as “friends,” golfers that have agreed to share their statistical data with others, golfers participating in a user's group on the course and/or a specified event, etc.). As another example, the “portion” of the community for which statistical information is provided to the user may include another individual golfer. 
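Selecting the "portion" of the community used for comparison, for example golfers whose handicap falls within ±1 point of the first golfer's, and then averaging a statistic over that portion, might look like the sketch below. The record layout is an assumption made for illustration.

```python
# Sketch (assumed record layout) of selecting the "portion" of the community whose
# handicap is within a window of the first golfer's handicap, then averaging one
# statistic over that portion for comparison.

def community_portion(golfers: list, my_handicap: float, window: float = 1.0) -> list:
    """Golfers whose handicap is within +/- window of the first golfer's handicap."""
    return [g for g in golfers if abs(g["handicap"] - my_handicap) <= window]


def average_stat(golfers: list, stat: str) -> float:
    values = [g[stat] for g in golfers if stat in g]
    return sum(values) / len(values) if values else 0.0


if __name__ == "__main__":
    community = [
        {"name": "A", "handicap": 9.8, "avg_putts_per_round": 32.1},
        {"name": "B", "handicap": 10.4, "avg_putts_per_round": 30.6},
        {"name": "C", "handicap": 17.0, "avg_putts_per_round": 35.2},
    ]
    portion = community_portion(community, my_handicap=10.0, window=1.0)  # A and B
    print(average_stat(portion, "avg_putts_per_round"))                   # 31.35
```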
The golf statistical data obtained from and/or transmitted to the first golfer via the input system in some example systems and methods allows determination and/or display of one or more of the following: an average golf score for the first golfer on an individual hole over a plurality of times playing the individual hole; an average golf score per round for the first golfer over a plurality of rounds of golf; an average number of fairways hit from a tee shot per round by the first golfer over a plurality of rounds of golf; an average number of fairways missed left from a tee shot per round by the first golfer over a plurality of rounds of golf; an average number of fairways missed right from a tee shot per round by the first golfer over a plurality of rounds of golf; an average number of fairways missed short from a tee shot per round by the first golfer over a plurality of rounds of golf; an average number of fairways missed long from a tee shot per round by the first golfer over a plurality of rounds of golf; an average number of greens hit in regulation per round by the first golfer over a plurality of rounds of golf; an average number of putts played per round by the first golfer over a plurality of rounds of golf; an average number of putts played per green hit in regulation by the first golfer over a plurality of rounds of golf; an average number of sand saves made per round by the first golfer over a plurality of rounds of golf; an average number of penalty strokes incurred per round by the first golfer over a plurality of rounds of golf; an average number of times making a score of par or better by the first golfer when missing a green in regulation over a plurality of rounds of golf; an average total length of putts made by the first golfer over a plurality of rounds; an average golf score for the first portion of the community of golfers on an individual hole; an average golf score for the first portion of the community of golfers on an individual golf course; an average number of fairways hit from a tee shot per round by the first portion of the community of golfers on an individual golf course; an average number of fairways missed left from a tee shot per round by the first portion of the community of golfers on an individual golf course; an average number of fairways missed right from a tee shot per round by the first portion of the community of golfers on an individual golf course; an average number of fairways missed short from a tee shot per round by the first portion of the community of golfers on an individual golf course; an average number of fairways missed long from a tee shot per round by the first portion of the community of golfers on an individual golf course; an average number of greens hit in regulation per round by the first portion of the community of golfers on an individual golf course; an average number of putts played per round by the first portion of the community of golfers on an individual golf course; an average number of putts played per green hit in regulation by the first portion of the community of golfers on an individual golf course; an average number of sand saves made per round by the first portion of the community of golfers on an individual golf course; an average number of penalty strokes incurred per round by the first portion of the community of golfers on an individual golf course; an average number of times making a score of par or better when missing a green in regulation by the first portion of the community of golfers on an individual 
golf course; an average length of putts made by the first portion of the community of golfers on an individual golf course; an average golf score for the first portion of the community of golfers for a round of golf; an average number of fairways hit from a tee shot per round by the first portion of the community of golfers; an average number of fairways missed left from a tee shot per round by the first portion of the community of golfers; an average number of fairways missed right from a tee shot per round by the first portion of the community of golfers; an average number of fairways missed short from a tee shot per round by the first portion of the community of golfers; an average number of fairways missed long from a tee shot per round by the first portion of the community of golfers; an average number of greens hit in regulation per round by the first portion of the community of golfers; an average number of putts played per round by the first portion of the community of golfers; an average number of putts played per green hit in regulation by the first portion of the community of golfers; an average number of sand saves made per round by the first portion of the community of golfers; an average number of penalty strokes incurred per round by the first portion of the community of golfers; an average number of times making a score of par or better when missing a green in regulation by the first portion of the community of golfers; and an average length of putts made per round by the first portion of the community of golfers. When these example systems and methods provide data for multiple rounds of golf (either for an individual or for some portion of the community), the plurality of rounds of golf may have occurred on a single golf course (optionally from the same set of tees on that golf course) or on multiple golf courses. If desired, when this type of data is compiled for multiple golf courses, the compiled data may be limited to courses having similar lengths (for the tees used by the golfers during the rounds) or other similarities in the degree of difficulty (e.g., similar slope ratings, similar other ratings, etc.). As additional examples, the plurality of rounds used for providing at least some of the displayed information may include all rounds by one or more of the golfers or may be limited to a subset of the rounds, such as: the rounds utilized in determining a handicap of the golfer; a predetermined number of most recently played rounds; all rounds played since completion of some course renovation; all rounds played since a specified date; all rounds played since a specific equipment change, etc. The hardware or equipment used for operating the above described community systems may be present predominantly or exclusively on a server side of a client/server arrangement. 
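Limiting the displayed averages to a subset of rounds, such as the most recently played rounds or only rounds since a specified date, could be done along the lines of the following sketch, assuming a simple per-round record with a date field.

```python
# Small sketch (assumed record layout) of limiting displayed averages to a subset of
# rounds, e.g., the N most recently played rounds or only rounds since a given date.
from datetime import date


def most_recent_rounds(rounds: list, n: int) -> list:
    return sorted(rounds, key=lambda r: r["date"], reverse=True)[:n]


def rounds_since(rounds: list, cutoff: date) -> list:
    return [r for r in rounds if r["date"] >= cutoff]


if __name__ == "__main__":
    rounds = [
        {"date": date(2023, 4, 1), "score": 88},
        {"date": date(2023, 6, 15), "score": 84},
        {"date": date(2023, 9, 30), "score": 81},
    ]
    print([r["score"] for r in most_recent_rounds(rounds, 2)])           # [81, 84]
    print([r["score"] for r in rounds_since(rounds, date(2023, 6, 1))])  # [84, 81]
```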
Equipment for a client side of golf analysis systems of this type according to at least some aspects of this invention may include: (a) an input system for receiving from a golf data hub: (i) golf statistical data relating to play by a first golfer and (ii) golf analysis information, wherein the golf analysis information received from the golf data hub includes statistical information for at least a first portion of a community of golfers; (b) an output system for transmitting golf play data from the first golfer to the golf data hub; and (c) a display system for displaying information to the first golfer, wherein the information displayed to the first golfer includes statistical information for the first golfer and statistical information for at least the first portion of the community of golfers. Such systems may allow generation of displays including any of the information and data (and any combination thereof) described above. The information displayed by the display system may include information to allow a comparison of the first golfer's golf statistical data with corresponding data from at least one other member of the community of golfers, including, for example, all golfers in the community, golfers having a handicap within a predetermined range of a handicap of the first golfer, golfers within a user defined sub-community, one or more specific individual golfers, golfers (optionally of a similar skill level or designated individuals) that have played the same course, etc. This golf analysis system may be provided, in at least some examples of this invention, on a portable electronic device or a personal computer device operated by the first user, optionally during the course of a round of golf. Optionally, if desired, the input system may receive user input indicating one or more statistics for inclusion in the comparison provided to the output system. As another potential option, the output system may provide comparisons of statistical information for a plurality of different golf statistics (optionally user selectable golf statistics). Another golf community aspect of this invention relates to the ability of members of the community (or other third parties) to interact with one another, optionally while at least one member is playing golf. For example, using the community aspects of systems and methods according to at least some examples of this invention, one player (or other entity) can set up challenges for another player. While any desired type of challenge can be provided, some examples include challenges involving one or more of the following: a longest drive contest; a best 9 hole gross score; a best 18 hole gross score; a best 9 hole net score to handicap; a best 18 hole net score to handicap; a best score on an individual hole; most rounds played within a predetermined time period; lowest handicap by a specified date; greatest improvement in handicap over a prescribed time or number of rounds; a longest drive on a specified golf hole; a best 9 hole net or gross score on a specified golf course; a best 18 hole net or gross score on a specified golf course; a race to a predetermined number of rounds played; a race to a specific statistical level of a golf statistic; and greatest improvement in a specified golf statistic over a prescribed time period or number of rounds. Additionally or alternatively, if desired, one member of the community can provide encouragement, consolation, or other message information for another player, optionally, during the course of a round. 
Some more specific examples of golf community systems according to this aspect of the invention may include: an input system for receiving: (a) golf statistical data from a community of golfers including at least a first golfer, and (b) data including golf challenge information (or other communication information) from a third party for receipt by the first golfer; and an output system for transmitting information to the first golfer, wherein the information transmitted to the first golfer via the output system includes data including the golf challenge (or other) information or data derived from the golf challenge (or other) information. Such systems further may include a processing system programmed and adapted to compare golf scoring or statistical data input from the first golfer with data relating to the golf challenge information input by the third party to determine a result of the golf challenge, and the output system may then further transmit information relating to the result of the golf challenge to the first golfer and/or to the third party. The above described community system may be present predominantly or exclusively on a server side of a client/server arrangement. Equipment for a client side golf analysis system of this type according to at least some aspects of this invention may include: an input system for receiving: (a) golf statistical data from a first golfer and (b) data including golf challenge (or other) information from a third party; an output system for transmitting golf play data from the first golfer to a golf data hub that stores golf statistical data for a community of golfers including the first golfer; and a display system for displaying information to the first golfer, wherein the information displayed to the first golfer includes the golf challenge (or other) information or information derived from the golf challenge (or other) information. This system may further include a processing system programmed and adapted to compare golf play data or golf statistical data from the first golfer with data relating to the golf challenge information received through the input system to determine a result of the golf challenge. This golf analysis system may be provided, in at least some examples of this invention, on a portable electronic device or a personal computer device operated by the first user, optionally during the course of a round of golf. If desired, systems and methods according to at least some examples of this aspect of the invention may receive input regarding a location of the first golfer, e.g., using a global positioning satellite system or using golf scoring information added as the round of golf progresses. In such systems, the display system may be triggered to display the golf challenge (or other) information or the information derived from the golf challenge (or other) information based on information regarding the location of the first golfer. Additional features of this aspect of this invention relate to methods of operating and/or using the golf community and/or analysis systems described above, e.g., to provide challenge or other information to a player from a third party (optionally, from another member of the golf community). Such methods may include at least some steps performed by a computer (such as receiving input data, transmitting output data, collecting sensor data, storing data, processing data, generating output, displaying output, etc.). 
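As one illustration of how such a processing system could determine the result of a challenge, the short Python sketch below compares a golfer's data against a third party's challenge criteria. The challenge encoding, the three example challenge kinds, and the comparison rules are assumptions made for illustration only; an actual system could support any of the challenge types listed above.

```python
# Illustrative sketch only: the challenge encoding and comparison rules are assumptions.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Challenge:                  # hypothetical challenge record entered by a third party
    kind: str                     # e.g., "best_18_net", "longest_drive", "race_to_rounds"
    target: float                 # value to beat or reach
    course_id: Optional[str] = None

def best_18_net(scores: List[int], handicap: float) -> float:
    """Best 18-hole net score to handicap over the rounds considered."""
    return min(s - handicap for s in scores)

def evaluate(challenge: Challenge, scores: List[int], handicap: float,
             rounds_played: int, longest_drive_yd: float) -> bool:
    """Return True if the first golfer has satisfied the challenge."""
    if challenge.kind == "best_18_net":
        return best_18_net(scores, handicap) <= challenge.target
    if challenge.kind == "longest_drive":
        return longest_drive_yd >= challenge.target
    if challenge.kind == "race_to_rounds":
        return rounds_played >= challenge.target
    raise ValueError(f"unsupported challenge kind: {challenge.kind}")

# e.g. evaluate(Challenge("best_18_net", 70.0), scores=[88, 84, 82],
#               handicap=12.5, rounds_played=14, longest_drive_yd=265.0)
```

The result returned by such a routine could then be transmitted back to the first golfer and/or to the third party by the output system, as described above.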
Aspects of this invention also may relate to hardware and steps performed by systems and methods of the invention in a client/server based computer arrangement, and features of the invention may be practiced solely at the client side, solely at the server side, or at both the client and server sides. Still additional aspects of this invention may relate to computer readable media that include computer executable instructions stored thereon for operating the hardware systems and/or performing the methods described above (and described in more detail below). 3. Golf “Swing Signature” Aspects and Features Various aspects of this invention relate to aspects and features of storing and using data relating to various features of an individual golf swing, e.g., in terms of a “golf swing signature” and/or a “composite golf swing signature.” An individual golf swing signature or composite golf swing signature for a player may be determined, and that determined signature information may be compared against known golf swing signature and composite golf swing signature information in order to provide useful information or feedback to the player. For example, the stored golf swing signatures and/or composite golf swing signatures may be correlated to suggested equipment, equipment parameters, training drills, coaching information, training aids, swing tips, other remedies, and the like. Thus, a new golf swing signature or composite golf swing signature being evaluated may be compared or categorized based on stored golf swing signatures and/or composite golf swing signatures, e.g., for the overall community, and the community systems and methods according to some examples of this invention can then quickly and easily provide this golfer with information (e.g., coaching, training, or fitting information) based on information obtained from the overall community or other sources. Optionally, the information provided to the golfer may come from other sources of information, such as a teacher, coach, the PGA of America, the USGA, the PGA Tour, other professional tours, etc. Golf swing signatures can be used in golf community systems and methods in various ways in accordance with this invention. For example, as noted above, golf swing signatures can be used to provide coaching or training information, to provide golf club fitting information, to provide golf club parameter adjustment or change information, to provide golf equipment recommendation information (such as golf club model, golf club specification, golf ball model, etc.), etc. 
Such golf swing analysis systems and methods may include, for example: (A) a storage system for storing data relating to at least one of: (i) golf swing dynamics information for a plurality of individual golfers in a community of golfers, (ii) golf swing signatures for at least some of the plurality of individual golfers in the community of golfers, or (iii) a plurality of composite golf swing signatures for the community, wherein each composite golf swing signature for the community is representative of golf swing signatures of a subset of one or more golfers in the community of golfers; (B) an input system for receiving input data relating to one or more golf swings of a first golfer, wherein the input data includes at least one of: (i) golf swing dynamics information relating to one or more golf swings made by a first golfer, (ii) golf swing signatures for one or more golf swings made by the first golfer, or (iii) a composite golf swing signature for the first golfer, wherein the composite golf swing signature for the first golfer is developed based on one or more golf swings made by the first golfer; and (C) an output system for transmitting information to the first golfer (or others, such as a trainer or coach, club fitter, etc.), wherein the information transmitted to the first golfer via the output system includes at least one of: (i) golf equipment recommendation information, (ii) golf equipment parameter change information, and (iii) golf training or coaching information, wherein the information transmitted to the first golfer is determined, at least in part, from the input data relating to the golf swing(s) of the first golfer. The input system for this type of golf swing analysis system also may build up a library of golf swing dynamics information and data, e.g., as more and more users join the community. Thus, the input system further may receive input data relating to golf swings of the community including at least one of: (a) the golf swing dynamics information for the plurality of individual golfers in the community of golfers, (b) the golf swing signatures for at least some of the plurality of individual golfers in the community of golfers, or (c) the plurality of composite golf swing signatures for the community. If raw golf swing dynamics information is received at the input, systems and methods according to at least some examples of this invention may include a processing system for converting the golf swing dynamics information or otherwise generating a golf swing signature for each swing and/or a composite golf swing signature for the individual. Community based systems and methods according to at least some examples of this invention also may store golf equipment information for individual golfers that may be correlated to one or more of: the golfer's swing dynamics information, the golfer's golf swing signature(s), and/or the golfer's composite golf swing signature. In this manner, when users with similar golf swing dynamics and/or golf swing signatures (including composite golf swing signatures) are identified, one player may be able to benefit from knowing the equipment choices of the other player (and/or the community system may suggest equipment for one player based on the equipment used by another player with the same or similar swing dynamics and/or golf swing signatures (including composite golf swing signatures)). 
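One simple way such matching could be carried out, offered purely as an illustrative sketch, is to treat a composite golf swing signature as a numeric feature vector and return the equipment and coaching information correlated with the nearest stored community signature. The vector encoding, the Euclidean distance measure, and the example library entries below are assumptions, not requirements of this description.

```python
# Illustrative sketch only: representing a composite golf swing signature as a plain
# feature vector and matching it by Euclidean distance are assumptions made for
# illustration, not the encoding required by this description.
from math import dist
from typing import Dict, List, Tuple

# hypothetical stored community data: composite signature -> correlated recommendations
CommunityEntry = Tuple[List[float], Dict[str, str]]

def closest_match(signature: List[float],
                  community: List[CommunityEntry]) -> Dict[str, str]:
    """Return the recommendations correlated with the nearest stored composite signature."""
    best_entry = min(community, key=lambda entry: dist(signature, entry[0]))
    return best_entry[1]

community_library = [
    ([98.0, 2.1, -1.5], {"shaft_flex": "stiff", "drill": "tempo ladder"}),
    ([85.0, 4.3,  0.8], {"shaft_flex": "regular", "drill": "weight-shift step drill"}),
]

# e.g. closest_match([96.5, 2.4, -1.2], community_library)
# -> {"shaft_flex": "stiff", "drill": "tempo ladder"}
```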
Furthermore, changes in golf equipment used by the player may be stored in the community system, as well as changes in golf score or handicap after changes in the golf equipment, and an individual golfer and/or others in the community may benefit from the knowledge of the impact of an equipment change on a player's score or handicap. Additional features of this aspect of this invention relate to methods of operating and/or using the golf community and/or analysis systems described above, e.g., to provide golf equipment recommendation information, golf equipment parameter change information, and/or golf training or coaching information, wherein the information transmitted to the first golfer is determined, at least in part, based on data collected from others within a golf community. Such methods may include at least some steps performed by a computer (such as receiving input data, transmitting output data, collecting sensor data, storing data, processing data, generating output, displaying output, etc.). Aspects of this invention also may relate to hardware and steps performed by systems and methods of the invention in a client/server based computer arrangement, and features of the invention may be practiced solely at the client side, solely at the server side, or at both the client and server sides. Still additional aspects of this invention may relate to computer readable media that include computer executable instructions stored thereon for operating the hardware systems and/or performing the methods described above (and described in more detail below). Additional features of this invention relate to computer readable media that include data structures stored thereon for storing and/or providing access to one or more of: (i) golf swing dynamics information for an individual golfer and/or an individual golf swing; (ii) golf swing signatures for an individual golfer and/or an individual golf swing; and/or (iii) composite golf swing signatures for an individual golfer and/or plural golfers within a community. 4. Foot Force Detection Aspects and Features Additional aspects of this invention relate to golf swing evaluation systems and methods that include dynamic foot force sensing capabilities during a golf swing. 
Such systems and methods may include or utilize one or more of: (a) a first force sensing system (optionally incorporated into an article of footwear that may have data processing capabilities) for determining forces exerted by one or more areas of a first foot of a user with respect to time over a course of a golf swing; (b) a second force sensing system (optionally incorporated into another article of footwear that may have data processing capabilities) for determining forces exerted by one or more areas of a second foot of the user with respect to time over the course of the golf swing; (c) a memory system for storing data collected by the first and second force sensing systems or data derived from the data collected by the first and second force sensing systems; (d) means for displaying at least one of information indicative of the forces exerted by the first foot of the user with respect to time over the course of the golf swing and information indicative of the forces exerted by the second foot of the user with respect to time over the course of the golf swing; (e) means for comparing: (i) at least one of information indicative of the forces exerted by the first foot of the user with respect to time over the course of the golf swing or information indicative of the forces exerted by the second foot of the user with respect to time over the course of the golf swing with (ii) a standard golf swing foot force profile (which may include preferred weight shift information, preferred center of weight information, etc.); and/or (f) means for determining and/or displaying information indicative of the position of the golf club or a portion of the user's body with respect to time over the course of the golf swing. The foot force sensing systems may determine center of force and/or user weight shift information. When the force sensing system(s) is (are) incorporated into article(s) of footwear, the article(s) of footwear may be of a type that will enable the foot force determinations to be made on a golf course, during actual play of golf (e.g., the article(s) of footwear may have outsoles with golf traction elements integrally formed therein or engaged therewith). Other example golf swing evaluation systems and methods in accordance with at least some examples of this invention include one or more of the following: (a) a first force sensing system for determining forces exerted by one or more areas of a first foot of a user with respect to time over a course of a golf swing; (b) a golf swing dynamics sensing system for determining golf swing dynamics information (e.g., club position, body position, club head speed, etc.) 
generated by the user with respect to time over the course of the golf swing; (c) a memory system for storing data collected by the first force sensing system and the golf swing dynamics sensing system or data derived from the data collected by the first force sensing system and the golf swing dynamics sensing system; (d) means for displaying at least one of information indicative of the forces exerted by the first foot of the user with respect to time over the course of the golf swing and information indicative of the golf swing dynamics with respect to time over the course of the golf swing; (e) means for comparing: (i) at least one of information indicative of the forces exerted by the first foot of the user with respect to time over the course of the golf swing or information indicative of the golf swing dynamics with respect to time over the course of the golf swing with (ii) a standard golf swing profile; and/or (f) means for simultaneously displaying: (i) at least one of information indicative of the forces exerted by the first foot of the user with respect to time over the course of the golf swing or information indicative of the golf swing dynamics with respect to time over the course of the golf swing and (ii) a standard golf swing profile. The foot force sensing system may be provided in a shoe, as part of a golf ball driving range platform (on which the user stands to launch balls), etc. Also, such systems could provide foot force data for both feet, if desired. Additional features of this aspect of this invention relate to methods of operating and/or using the foot force detection systems described above, e.g., to provide golf swing information. Such methods may include at least some steps performed by a computer (such as receiving input data, transmitting output data, collecting sensor data, storing data, processing data, generating output, displaying output, etc.). Aspects of this invention also may relate to hardware and steps performed by systems and methods of the invention in a client/server based computer arrangement, and features of the invention may be practiced solely at the client side, solely at the server side, or at both the client and server sides. Still additional aspects of this invention may relate to computer readable media that include computer executable instructions stored thereon for operating the hardware systems and/or performing the methods described above (and described in more detail below). 5. Additional Hardware Aspects and Features Additional aspects and features of this invention relate to the hardware used in collecting the golf data, e.g., for transmission to the data hub and/or other uses. 
One more specific example of this aspect of the invention includes golf swing evaluation systems that include one or more of: (a) a golfer positioning system for providing information regarding an initial stance location with respect to at least one of an initial ball launch location or a desired ball flight direction; (b) a first sensor system provided at a first location for detecting golf swing dynamics information generated during golf swings (e.g., forces exerted by one or more of the golfer's feet during a golf swing); (c) a second sensor system provided at the first location for detecting ball flight information when golf balls are hit (e.g., a ball launch monitor); (d) a transmission system for transmitting data to a swing analysis system provided at a location remote from the first location (e.g., a central golf data hub), the data transmitted by the transmission system corresponding to or being derived from the information collected by the first and second sensor systems; (e) an output system for providing golf swing feedback or analysis information at the first location, wherein at least some of the golf swing feedback or analysis information provided at the first location is generated by the swing analysis system or derived from data generated by the swing analysis system; and/or (f) an alignment system (e.g., for providing information regarding a preferred golf ball flight direction, for providing information regarding a golf ball start location with respect to a location of at least some portion of the second sensor system, for providing information to assure that at least one of the first sensor system or the second sensor system is properly positioned with respect to at least one of a first user stance position or a golf ball start location, etc.). Systems and methods of this type may be provided in or practiced at a golf driving range hitting bay or on the course. The information provided regarding the initial stance location for an individual swing by the golfer positioning system may depend, at least in part, on various factors, such as: the specific golf club being used for that individual swing, one or more physical size characteristics of a person making that individual swing, etc. The information may be provided in a variety of ways, such as by at least one light generating device that projects light to provide the initial stance location information (onto a surface), by at least one laser generating device that projects a laser beam to provide the initial stance location information (onto a surface), by a series of lights visible at a surface on which the first user stands when hitting golf balls, by grid lines on a surface, by other lines on a surface, etc. Additional aspects of this invention relate to golf equipment that has data collection and/or storage capabilities that will, optionally, allow swing data to be collected while the user plays an actual round of golf. Such golf equipment may include, for example, a golf bag including an open ended container for holding a plurality of golf clubs that includes a data input system engaged therewith for receiving data relating to at least one of: (a) golf swing dynamics information generated during golf swings (e.g., foot force data, video camera data, etc.), and (b) ball flight information (e.g., launch monitor data, etc.) when golf balls are hit.
As another example, such golf equipment may include a golf cart for transporting golf equipment on a golf course that includes a data input system of the type described above engaged therewith. The term “golf cart,” as that generic term is used herein (and unless otherwise noted) includes both self propelled, motorized golf carts (e.g., gas or electric carts) and user propelled golf carts (e.g., pull carts, push carts, etc.). Such systems may further include: a transmission system engaged with the golf bag or golf cart for transmitting data to a swing analysis system, the data transmitted by the transmission system corresponding to or being derived from the data received by the data input system; a data receiving system engaged with the golf bag or golf cart for receiving golf swing feedback or analysis information generated by or derived from the swing analysis system; an output system for providing a user perceptible output based on the golf swing feedback or analysis information received at the data receiving system; and/or an alignment system engaged with the golf bag or golf cart (e.g., for providing information regarding a preferred golf ball flight direction with respect to the golf bag or golf cart, for providing information regarding a golf ball start location with respect to the golf bag or golf cart, for providing information regarding a user's stance set up location with respect to the golf bag or golf cart, for providing information to assure that at least some portion of the data input system is properly positioned with respect to at least one of a user stance position, a golf ball start location, or a desired initial golf ball flight direction, etc.). The data input systems in systems and methods according to at least some examples of this aspect of the invention may receive data from any suitable sources. In some more specific examples, the data input system will receive data transmitted from a shoe, from a golf club, from an article of apparel, or the like. The input data may include, for example, data relating to the golf swing dynamics information generated during golf swings and/or data relating to the ball flight information when golf balls are hit. Additional potential features of this aspect of this invention relate to methods of operating and/or using the equipment described above, e.g., to provide golf swing information. Such methods may include at least some steps performed by a computer (such as receiving input data, transmitting output data, collecting sensor data, storing data, processing data, generating output, displaying output, etc.). Still additional aspects of this invention may relate to computer readable media that include computer executable instructions stored thereon for operating the hardware systems and/or performing the methods described above (and described in more detail below). Specific examples of the invention are described in more detail below. The reader should understand that these specific examples are set forth merely to illustrate examples of the invention, and they should not be construed as limiting the invention. B. Specific Examples of Systems and Methods According to the Invention The various figures in this application illustrate examples of features of golf swing analysis systems and methods and golf community data hub systems and methods in accordance with examples of this invention. 
When the same reference number appears in more than one drawing, that reference number is used consistently in this specification and the drawings to refer to the same or similar parts throughout. 1. Example Hardware Useful with Systems and Methods According to Examples of this Invention FIG. 1 schematically illustrates example features of systems and methods according to this invention. As shown in FIG. 1, the golfer (Player A, 100) makes golf swings, and data and/or other information relating to various aspects of the swings are captured by one or more sensors (three sensors 102a, 102b, and 102c are shown in the example of FIG. 1). The sensed data and information is collected and stored using one or more data collection/recordation devices 104, and is optionally processed (e.g., by a computer processing system including one or more microprocessors or other processing resources) before being transmitted to a central data hub 108 by a data transmission system 106. At the central data hub 108, the incoming data may be further processed or evaluated, e.g., by appropriate swing analysis software available at or through the central data hub 108 and/or by a human being (referred to as a "coach" and shown as black box 110 in FIG. 1), either or both of which may provide feedback to the golfer 100 (which includes feedback to the golfer and/or his/her trainer or coach) via an output/feedback device 112. The output/feedback information may include various things, such as golf equipment selection or recommendation information, golf equipment parameter adjustment recommendations, golf equipment fitting information, coaching or training drills, swing tips, and the like. All of these example features will be described in more detail below. Optionally, as shown in FIG. 1, the central data hub 108 may be omitted and/or the coach 110 can be in direct communication with the transmission system 106 and/or directly provide data to the output/feedback device 112. FIG. 2A illustrates more detailed examples of hardware that may be used in systems and methods according to at least some examples of this invention. In this illustrated example system 200, golf swing data (such as golf swing dynamics data) is detected by at least two sensors, namely, at least one shoe mounted sensor 102a (and optionally a shoe mounted sensor 102a in each shoe, e.g., to detect foot force and weight shift features of the golf swing) and a golf club mounted sensor 102b (e.g., an accelerometer, gyrometer, magnetometer, force or pressure sensor, and/or other sensor(s) to detect golf club position, velocity, acceleration, and/or ball contact features of the golf swing). Other and/or additional data may be collected without departing from this invention. The sensors 102a and 102b are equipped with transmission equipment (e.g., wireless transceivers 126a and 126b, respectively) for transmitting data or other information to a data collection and recordation device 104. This data transmission is represented in FIG. 2A by the transmission icons 124a and 124b. Optionally, if desired, at least some (and potentially all) data processing may take place at the shoe and/or golf club. The data collection and recordation device 104 may receive input from other sensors, such as a ball launch monitor and/or a GPS or other locational sensor 102c, which may be used, for example, to collect data from the golfer and provide information to the golfer on the golf course as a player 100 plays a typical round of golf.
This GPS sensor system 102c may include features and/or functions the same as or similar to those available in golf GPS systems as are conventionally known and used in the art. The data collection and recordation device 104 may include other features, such as a processing system, a memory (e.g., a flash memory to allow comparisons to others), a power supply (e.g., battery), one or more user input devices 120 (e.g., hard buttons, touch screen, keyboard, stylus, etc.), and one or more output devices 122, such as a screen display 122a, an audio output device, a tactile output device (e.g., vibration device), etc. The data collection and recordation device 104 of this example further includes a transceiver device 106a for receiving and transmitting data (e.g., any data or information input into or stored by the device 104, including the shoe sensor 102a, club sensor 102b, or GPS system 102c data), including transmitting data to another computing system, as shown in FIG. 2A by transmission icon 130. In the example system 200 illustrated in FIG. 2A, the data collection and recordation device 104 transmits data from the device 104 to another computer device 132. This computer device 132 may be any desired type of computer device, such as a personal computer, laptop, palmtop, cellular telephone, workstation, etc. The device 132 may include other features, including features conventionally known and used on such computing devices 132, such as one or more user input devices or other input devices, a power supply, a memory system, a processing system, an output system (such as a display device 134 having a user interface 134a operating and/or displayed thereon, etc.), etc. The output display device 134 may display video of the user's swing, optionally with swing data, foot force data, ball launch data, swing tip information, other analysis information or data, and the like, superimposed on the swing video (or otherwise simultaneously displayed with the swing video). Computer device 132 also may include a transmission system 106b for transmitting data, optionally via a network 138 over a networked connection (shown as transmission icon 136 in FIG. 2A), to the central data hub 108, which may be in communication with a virtual or human coach 110 or other swing analysis system or personnel. After analysis of the data generated relating to the golf swing(s) has taken place (e.g., at virtual or human coach 110), feedback information or data can be returned to the player 100 (and/or his/her personal coach or trainer), optionally through the central data hub 108, e.g., for presentation or display on computing system 132 and/or display device 134. Alternatively, if desired, the feedback information may be transmitted from the hub 108 and/or the coach 110 to the computer device 132 and/or the data collection and storage device 104 without the need for the feedback information to pass through the network 138 (e.g., for display on devices 122a and/or 134a or other appropriate output). Although not necessarily configured in this manner, system 200 is of a type that will allow a user 100 to play golf with a portable electronic device 104 accompanying him or her to collect and record data as a round progresses (alternatively, the device 104 could be provided as part of a golf cart, a golf bag, or other equipment carried by or for the user 100). In this example system 200, golf swing dynamics and/or ball flight data (as well as other data, such as scoring data, GPS locational data, etc.)
is recorded on device 104 for later download, e.g., to a personal computer system 132 provided at the golf course clubhouse, the user's home or office, etc. The user 100 can then upload the data from computer system 132 over a conventional network type connection 138 to the central data hub 108, from which further storage, analysis, display, and other options are available (as will be described in more detail below). In this manner, system 200 may operate in a manner generally similar to the data collection, storage, and analysis features available for collecting, storing, and analyzing ambulatory activity data in the NIKE+™ system, commercially available from NIKE, Inc. of Beaverton, Oregon. FIG. 2B illustrates another example system 250 and method for collecting, storing, and analyzing golf swing data that may be used in some examples of this invention. In this example system 250, the intermediate transmission to computing device 132 is eliminated, and the data collection and storage device 104 transmits its signals directly to the hub 108 via network 138. The feedback information may be transmitted from the virtual or human coach 110, optionally via the hub 108, directly to the data collection and storage device 104, e.g., for display on output device 122 (or other appropriate action). Alternatively, if desired, the feedback information may be transmitted from the coach 110 to the data collection and storage device 104 (optionally through network 138) without the need for the feedback data to pass through the hub 108. The output device 122 need not physically constitute a portion of the data collection and storage device 104 (e.g., it could be a separate device, such as a separate monitor or display device, a cellular telephone or other communication device, a tactile sensation output device, etc.). This type of system may be more useful and practical to provide real time feedback to the player, e.g., as he or she is playing a round of golf, while at an appropriately equipped driving range bay, etc. As noted above, many different types of data may be collected and used in systems and methods in accordance with examples of this invention. Some useful swing dynamics data may be collected from one or more sensors provided in a golf club. FIG. 3 schematically illustrates an example golf club 300 that includes a club head 302 having one or more sensors 304 provided therein. Golf clubs having electronic sensors located therein are known and have been described, for example, in U.S. Pat. No. 7,004,848 to Konow; U.S. Pat. No. 6,248,021 to Ognjanovic; U.S. Pat. No. 4,898,389 to Plutt; U.S. Pat. Nos. 7,234,351 and 7,021,140 to Perkins; U.S. Patent Publication No. 2005/0215340 A1 to Stites; U.S. Patent Publication No. 2002/0173364 A1 to Boscha; and U.S. Patent Publication No. 2009/0209358 to Niegowski, each of which is entirely incorporated herein by reference. While a wood-type golf club head 302 is shown in FIG. 3, the club head 302 may be an iron, a hybrid club, a driver, a fairway wood, a putter, or other desired club head.
In accordance with at least some examples of this invention, golf club based sensors 304 (e.g., one or more accelerometers, impact sensors, force sensors, gyrometers, magnetometers, etc., optionally at least behind the ball striking face) may determine and provide data relating to one or more of: golf club head position throughout the swing; golf club head velocity throughout the swing (including one or more angular velocities); golf club head acceleration throughout the swing (including one or more angular accelerations); golf club head speed at ball impact; golf club head path around ball impact time; golf club head orientation (e.g., effective loft angle, lie angle, or face angle) at ball impact; ball impact location on the face; ball contact area on face during impact; ball contact force; face flex amount during impact; amount of shaft flex; location of shaft flex; gripping force (e.g., from a grip based sensor); other grip features (e.g., finger positioning, etc.); etc. Multiple sensors and/or sensor systems may be provided in a single club without departing from this invention. Golf clubs 300 and/or golf club heads 302 that may be used in accordance with at least some examples of this invention may include an output device 306, e.g., for transmitting the collected data from the golf club 300 or club head 302 to a data collection and recordation device 104 (e.g., an RFID system). This transmission may be a wired or wireless connection (e.g., using a wireless transceiver, as illustrated in FIG. 3, an RFID tag, etc.), and the transmitted data may include any desired content (e.g., swing data, club identifier, impact force, impact location, etc.). As one alternative, if desired, data from the club 300 or club head 302 may be transmitted directly to the golf data hub 108 (or to the user's computer 132), rather than to an intermediate data collection and recordation device 104. As yet another alternative, if desired, the club 300 and/or club head 302 may include a computer processing system (e.g., one or more microprocessors) to allow at least some processing of collected sensor data prior to transmission to another portion of the overall system. As still another example, if desired, the club 300 and/or club head 302 may include a data storage system (e.g., computer memory) that will allow the data to be collected for later upload to another portion of the swing analysis system. Other arrangements and data collection, storage, and/or transmission options are possible without departing from this invention. Also, if desired, golf clubs 300 and/or golf club heads 302 in accordance with at least some examples of this invention may receive input (e.g., via transceiver device 306 shown in FIG. 3 or another input device). This input may be used, for example, to change data collection parameters of the sensor(s) 304 on the device. As additional examples, if desired, the golf club 300 and/or golf club head 302 may function as at least a portion of the output/feedback device 112 of the general system shown in FIG. 1. As some more specific examples, in at least some example systems and methods according to this invention, golf clubs 300 or golf club heads 302 may receive input from the virtual or human coach 110 (e.g., via hub 108) with instructions to change one or more physical parameters of the club (e.g., changing the loft angle, lie angle, face angle, face stiffness, face flex characteristics, shaft stiffness, shaft flex location, shaft kick point location, etc.).
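As a brief illustration of how one of the quantities listed above, golf club head speed at ball impact, could be derived from club-mounted sensor 304 output, the Python sketch below applies a finite difference to timestamped club head position samples. The sample format, the impact-time input, and the function names are assumptions made only for illustration; an actual sensor could report speed directly.

```python
# Illustrative sketch only: estimating club head speed at impact from timestamped
# club head position samples; the sample format and impact detection are assumptions.
from math import sqrt
from typing import List, Tuple

Sample = Tuple[float, float, float, float]   # (time_s, x_m, y_m, z_m) from the club sensor

def speed_between(a: Sample, b: Sample) -> float:
    """Finite-difference speed (m/s) between two consecutive position samples."""
    dt = b[0] - a[0]
    dx, dy, dz = b[1] - a[1], b[2] - a[2], b[3] - a[3]
    return sqrt(dx * dx + dy * dy + dz * dz) / dt

def speed_at_impact(samples: List[Sample], impact_time: float) -> float:
    """Speed of the club head over the sample interval containing the impact time."""
    for prev, nxt in zip(samples, samples[1:]):
        if prev[0] <= impact_time <= nxt[0]:
            return speed_between(prev, nxt)
    raise ValueError("impact time outside the recorded swing")

# e.g. speed_at_impact(recorded_samples, impact_time=1.235)
# might return roughly 45 m/s for a driver swing.
```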
As still additional examples, in at least some example systems and methods according to this invention, golf clubs 300 or golf club heads 302 may receive input from the virtual or human coach 110 (e.g., via hub 108) that induces a sensory response to the user during the course of a golf swing, e.g., in an effort to alter a feature of the user's swing (e.g., to help the club function as a swing training device, to better ingrain new swing features in the user's muscle memory, etc.). For example, the club 300 or club head 302 could be configured to vibrate or make an audible sound if the user's swing or club head positioning is incorrect (e.g., off plane, over the top, excessively outside-to-inside, casted, etc.). The sensory (e.g., vibration) response also could be provided by a separate device held or worn by the player, such as by the footwear, apparel, an electronic device held on the user's belt or in the user's pocket (e.g., a pager, cell phone, etc.), or the like. Other or alternative useful swing dynamics data may be collected from one or more sensors provided in one or more articles of footwear worn by the golfer during the swing. FIG. 4 schematically illustrates footbed portions 402 of an example pair of golf shoes 400 that include one or more sensors 404 therein. The sensors 404 may include one or more force sensors that may be used to detect and measure the dynamic force distribution applied by the golfer's feet over the course of a golf swing (e.g., to enable detection of appropriate weight shift, etc.), such as using optical fiber bending ("OFB") technology, variable electrical resistance, etc. Footwear having sensors located therein has been described, for example, in U.S. Patent Publication No. 2010/0063778 A1 to Schrock, et al., and U.S. Patent Publication No. 2010/0063779 A1 to Schrock, et al., each of which is entirely incorporated herein by reference. In this illustrated example, the footbeds 402 of the articles of footwear include a series of forefoot sensors 404 and heel sensors 404 so that the force applied by the user's feet in various different areas during the golf swing can be determined. Although other arrangements are possible without departing from this invention, in this illustrated example, signals from the sensors 404 are transmitted to a central data collection and/or processing device 406 provided in each shoe. This central data collection device 406 may be formed in a chip that is engaged in a housing provided in the footbed 402, e.g., in a manner akin to the manner in which chips are engaged with articles of footwear in NIKE+™ enabled footwear available from NIKE, Inc. of Beaverton, Oregon. A fabric layer, sock liner, or insole element may overlay the footbed 402 in the articles of footwear and directly contact the wearer's foot. Shoes that may be used in accordance with at least some examples of this invention may include an output device 408 for transmitting the collected data from the shoe to a data collection and recordation device 104. This transmission may be a wired or wireless connection (e.g., using a wireless transceiver, as illustrated in FIG. 4). As one alternative, if desired, data from the shoe(s) 400 may be transmitted directly to the golf data hub 108, rather than to an intermediate data collection and recordation device 104. As yet another alternative, if desired, one or both shoes 400 may include a computer processing system (e.g., one or more microprocessors) to allow at least some processing of collected sensor data prior to transmission to another portion of the overall system.
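The following Python sketch illustrates, purely by way of example, the kind of on-shoe processing just mentioned: forefoot and heel force readings from sensors 404 are reduced to a front/back weight fraction, and a simple rule flags a swing in which the weight has not moved forward late in the swing. The sensor grouping, the front-weight fraction, and the 60% threshold are assumptions, not values specified in this description.

```python
# Illustrative sketch only: the sensor layout, the front/back ratio, and the 60%
# threshold are assumptions used to show the kind of on-shoe weight-shift processing
# described above, not values taken from this description.
from typing import List

def front_weight_fraction(forefoot_forces: List[float], heel_forces: List[float]) -> float:
    """Fraction of total foot force carried by the forefoot sensors (0.0 to 1.0)."""
    front = sum(forefoot_forces)
    total = front + sum(heel_forces)
    return front / total if total > 0 else 0.0

def weight_shift_ok(samples_front_fraction: List[float], threshold: float = 0.6) -> bool:
    """True if, late in the swing (roughly the last quarter of samples), enough weight is forward."""
    late = samples_front_fraction[-max(1, len(samples_front_fraction) // 4):]
    return max(late) >= threshold

# The result of such a check could be used to trigger the vibration or "poke"
# feedback elements discussed below, or simply be uploaded with the swing record.
```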
As still another example, if desired, the shoes may include a data storage system (e.g., computer memory) that will allow the data to be collected for later upload to another portion of the swing analysis system. Other arrangements and data collection, storage, and/or transmission options are possible without departing from this invention. Also, if desired, shoes in accordance with at least some examples of this invention may receive input (e.g., via a transceiver device 408 shown in FIG. 4 or another input device). This input may be used, for example, to change data collection parameters of the sensors 404 on the shoes. As additional examples, if desired, one or both shoes may function as at least a portion of the output/feedback device 112 of the general system shown in FIG. 1. As some more specific examples, in at least some example systems and methods according to this invention, shoes may receive input from the virtual or human coach 110 (e.g., via hub 108) changing one or more physical parameters of the shoe (e.g., changing the midsole stiffness, the footbed flex characteristics, etc., as described, for example, in U.S. Published Patent Appln. No. 2007/0006489 A1, which document is entirely incorporated herein by reference). As still additional examples, in at least some example systems and methods according to this invention, one or both shoes may receive input from the virtual or human coach 110 (e.g., via hub 108) that induces a sensory response to the user during the course of a golf swing, e.g., in an effort to alter a feature of the user's swing (e.g., to help the user shift his/her weight properly, to get the user off his/her heels at the appropriate time, etc.). For example, the footbed(s) 402 could be configured to vibrate or make an audible sound if the user's weight shift is incorrect and/or if the user's swing tempo is off. In fact, as shown in FIG. 4, the footbed(s) 402 may include one or more elements 410 (in the forefoot and/or heel) that project upward to contact (poke) the user's foot during the golf swing if it is determined by appropriate sensors that the user has not properly shifted his/her weight during the swing (e.g., to get the user off his heels, etc.). As noted above, the sensory (e.g., vibration) response also could be provided by a separate device held or worn by the player, such as by an electronic device held on the user's belt or in the user's pocket (e.g., a pager, cell phone, etc.). If desired, golf footwear 400 in accordance with at least some examples of this invention may include pedometer based sensors or other sensors, e.g., to provide speed and/or distance information relating to the round of golf (e.g., NIKE+ type pedometer sensors available from NIKE, Inc. of Beaverton, OR). If desired, step count/pedometer data of this type may be provided by one or some of the same sensors 404 used for measuring and determining the foot force information. Other items may be equipped to collect golf swing dynamics information without departing from this invention. For example, as illustrated in FIG. 5, appropriate sensors (e.g., accelerometer, force sensors, etc.) 502 may be provided in a golf glove 500 or other article of apparel (such as a shirt, pants, shorts, socks, etc.). This type of sensor 502 may allow the golf glove 500 to provide hand position and/or hand motion information (e.g., velocity, acceleration, etc.), optionally for comparison against a standard or that of an elite golfer (optionally, a golfer that has similar swing or other characteristics).
As another option, this golf glove 500 may include appropriate sensors 502 located to measure other features or characteristics of the golf swing, like grip pressure and/or handle location with respect to the golfer's hand(s). Such articles of apparel 500 also may be equipped with an output device 506 for transmitting the collected data from the article of apparel to a data collection and recordation device 104. This transmission may be a wired or wireless connection, such as using Bluetooth or other transmission protocols (e.g., using a wireless transceiver, as illustrated in FIG. 5). Alternatively, if desired, data from the article of apparel 500 may be transmitted directly to the golf data hub 108 and/or to the coach 110, rather than to an intermediate data collection and recordation device 104. As yet another alternative, if desired, the article of apparel 500 may include a computer processing system (e.g., one or more microprocessors) to allow at least some processing of collected sensor data prior to transmission to another portion of the overall system. As still another example, if desired, the article of apparel 500 may include a data storage system (e.g., computer or flash memory) that will allow the data to be collected for later upload to another portion of the swing analysis system. Other arrangements and data collection, storage, and/or transmission options are possible without departing from this invention. Also, if desired, articles of apparel 500 in accordance with at least some examples of this invention may receive input (e.g., via transceiver device 506 shown in FIG. 5 or another input device). This input may be used, for example, to change data collection parameters of the sensors 502 on the article of apparel. As additional examples, if desired, the article of apparel 500 may function as at least a portion of the output/feedback device 112 of the general system shown in FIG. 1. As some more specific examples, in at least some example systems and methods according to this invention, articles of apparel 500 may receive input from the virtual or human coach 110 (e.g., via hub 108) changing one or more physical parameters of the article of apparel 500 and/or inducing a sensory response to the user during the course of a golf swing, e.g., in an effort to alter a feature of the user's swing (e.g., to help the article of apparel 500 function as a swing training device, to better ingrain new swing features in the user's muscle memory, etc.). For example, the glove 500 (or other article of apparel) could be configured to vibrate or make an audible sound if the user's hand position (or other body part position) is incorrect at some point during the course of a swing. As one more specific example, as shown in FIG. 5, the rear surface of the glove 500 may include one or more elements 508 that project inward to contact (poke) the user's hand during the golf swing, e.g., if it is determined by appropriate sensors that the user's wrist is too cupped, too flat, or otherwise not properly positioned during the swing. Similar feedback may be applied to other locations on the body, e.g., using other properly equipped articles of apparel. FIG. 6 illustrates another example system 600 that may be used for collecting swing dynamics data or other data for swing analysis systems and methods according to this invention. FIG. 6 illustrates a portable electronic device 602 that may be carried by a golfer during a round of golf and provides yardages and/or other information to the golfer relevant to the round of golf (e.g., conventional golf GPS data).
This device 602 also may include one or more inputs 604 that receive data from various sensors included with the system, such as the golf club based sensors, footwear or other foot force sensors, and apparel sensors as described above and/or other sensors like those described in more detail below. This device 602 may collect and store data (and optionally further process it) during the course of a round of golf (and optionally provide feedback to the golfer during the round) and send data to another data collection and recordation device 104 (optionally, after the round is over) for transmission to the central data hub 108 (see FIG. 1). Alternatively, as shown in the example system 700 of FIG. 7, the portable device 602 may function as the data collection and recordation device 104 that transmits golf swing dynamics and/or other data directly to the central data hub 108 (e.g., periodically, over the course of a round or after the round is complete). The device 602 may communicate with the data collection and recordation device 104 (FIG. 6) and/or the central data hub 108 (FIG. 7) in any desired manner using any desired communication protocol, including wired or wireless connections, cellular telephone communications (e.g., 3G, 4G, etc.), other networked connections or protocols, etc. While FIGS. 6 and 7 show the electronic device 602 as a portable device that can be carried by the user, this is not a requirement. Rather, if desired, device 602 may have a more permanent mounting location, such as on a golf cart (self-propelled or user propelled), on a golf bag, in a driving range bay, etc. FIG. 8 illustrates another example of equipment that may be used in golf swing analysis systems and methods in accordance with some examples of this invention. The example system 800 of FIG. 8 constitutes a more permanently situated arrangement of sensors and devices for collecting golf swing dynamics and/or ball launch data, such as a system that might be found in a golf ball driving range hitting bay or an indoor type driving bay (e.g., for hitting golf balls into a net). This system 800 includes a mat 802 on which a user stands when hitting golf balls B from a ball launch area 804. The mat 802 or supports therefor may include one or more sensors 806 (e.g., sensor arrays) that are capable of determining user weight shift over the course of a golf swing (alternatively, if desired, this type of data may be generated by sensors provided in the user's shoes, as described above). The mat 802 further may include one or more sensors 808 (e.g., sensor arrays) in the ball launch area 804 to detect various features of the swing, such as initial club contact location with respect to the ball location, club head path at or around impact, etc. Swing dynamics data generated by the mat 802 based sensors 806, 808 may be transmitted to a data collection/recordation device 104 (e.g., a conventional computer device), shown by connection lines 806a and 808a in FIG. 8. If desired, the system 800 of FIG. 8 also could accept swing dynamics input and data from one or more other sensors, such as golf club based sensors (e.g., see FIG. 3), footwear based sensors (e.g., see FIG. 4), apparel based sensors (e.g., see FIG. 5), and/or electronic device based sensors (e.g., see FIG. 6). This example system 800 further includes a golf ball launch monitor 810 that collects ball flight data.
Such launch monitor systems 800 are known and used in the art, and they collect data useful to sense or determine various features of a golf ball launch, such as: initial ball launch angle, initial ball launch speed, initial ball launch spin (e.g., absolute spin (e.g., in RPMs) and/or spin direction), initial ball launch direction, projected or actual ball carry distance, projected or actual ball roll distance, projected or actual ball travel distance, projected or actual ball apex height, projected or actual ball apex location distance, projected or actual ball to ground impact angle, golf club head speed at a ball contact time, golf club head movement path direction at a ball contact time, projected or actual ball flight deviation from center (or from a predefined path), ball flight curvature, smash factor (initial ball launch velocity/club head speed at ball contact), etc. The ball flight sensing system 810 according to this example of the invention further includes an audio recording device 812, such as a digital sound recorder. This audio recording device 812 may be used to provide useful data for the swing analysis system, such as information regarding the quality of the contact between the club head face and the ball (e.g., solid contact v. more of an off-center or glancing blow type contact), club contact with the mat 802 before contact with the ball, etc. Such data or information may be useful to a human or computerized swing analyst to determine the quality of an individual ball strike. Ball launch data generated by the launch monitor 810 and/or the audio recorder 812 may be transmitted to the data collection/recordation device 104, shown by connection lines 810a, 812a in FIG. 8. Swing dynamics and/or ball launch data may be collected by other sensing devices without departing from this invention. For example, this swing analysis system 800 includes one or more video cameras 814 that video record the golfer's swing and/or ball launch. In this illustrated example, one camera 814 captures the swing and/or ball launch data from the rear (behind the golfer) and one captures it from a "face-on" position (directly facing the ball and the golfer during the swing). If desired, the face-on camera 814 may constitute a portion of the ball launch monitor 810 (e.g., to show the club/ball impact, optionally in a close-up or slow-motion view). Additionally or alternatively, if desired, an overhead camera 814 may be included to view the swing from directly above the golfer. Image data generated by the video camera(s) 814 may be transmitted to the data collection/recordation device 104, shown by connection lines 814a in FIG. 8. The swing dynamics and/or ball launch data from the video camera(s) 814 may be analyzed by a human (e.g., a coach) and/or by swing analysis software, e.g., to provide input data to enable generation of swing tips, training drills, etc., for use by the golfer. FIG. 8 illustrates yet additional features that may be provided in systems and methods according to at least some examples of this invention. Proper alignment of the golfer and/or ball with respect to portions of the various sensor systems can be important in at least some systems to assure that the data is properly captured and is in a form where it can be properly analyzed. To assure proper capture of the ball launch monitor data, the ball B may be set up for launch within a predetermined area on the mat 802 (e.g., on a tee, on a spot provided on the launch area 804 floor, etc.).
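The smash factor noted above is a simple ratio of two launch monitor outputs, as the short Python sketch below shows. The 1.45 threshold used as a rough "solid contact" flag for a driver is an assumed rule of thumb rather than a value given in this description, and in practice such a flag might be combined with the audio recorder 812 data discussed above.

```python
# Illustrative sketch only: the smash factor follows the ratio quoted above; the
# 1.45 "solid contact" threshold for a driver is an assumed rule of thumb, not a
# value taken from this description.
def smash_factor(ball_speed_mph: float, club_head_speed_mph: float) -> float:
    """Initial ball launch velocity divided by club head speed at ball contact."""
    if club_head_speed_mph <= 0:
        raise ValueError("club head speed must be positive")
    return ball_speed_mph / club_head_speed_mph

def likely_solid_contact(ball_speed_mph: float, club_head_speed_mph: float,
                         threshold: float = 1.45) -> bool:
    """Crude quality flag a swing analyst might combine with the audio data above."""
    return smash_factor(ball_speed_mph, club_head_speed_mph) >= threshold

# e.g. smash_factor(150.0, 103.0) is about 1.46, which likely_solid_contact() accepts.
```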
Additionally or alternatively, systems and methods according to at least some examples of this invention may include one or more alignment aids816to help assure one or more of the following: (a) to assure that the user has information indicating a preferred or target golf ball flight direction (shown by arrow818inFIG.8) to assure proper capture of the data; (b) to assure that the user has information indicating a golf ball start location with respect to a location of at least some portion of the sensor systems (e.g., with respect to the ball launch monitor810, video recording camera(s)814, etc.); and (c) to assure that the user has information regarding a proper stance set up location (e.g., with respect to the ball launch monitor810, video recording camera(s)814, etc.). Additionally or alternatively, alignment aids that provide information to assure that the user has a proper stance set up location also may be used, at least in part, as a training aid to provide coaching information to the user, and this coaching or training information may be returned to the ball hitting bay after swing and/or ball launch analysis via the central data hub108.FIG.8shows two alignment aids816that project light beams816b(e.g., lasers) or otherwise provide an indication of an appropriate location for the user's front foot to start a golf swing (shown by the intersection I inFIG.8). The adjustability of the location of the light beams816bproducing the intersection I is shown inFIG.8by arrows816a. Other ways of providing this type of golfer alignment or positioning information are possible without departing from this invention, such as: by providing a series of lights visible at mat802surface to show where one or both feet should be positioned, by providing a grid or other markings on the mat802surface to show proper foot positioning, by projecting light onto the mat802surface or at the golfer's feet to show proper foot positioning (e.g., from above), etc. If desired, in some example systems and methods according to this invention, the indicated location of the proper stance set up position may be controlled, at least in part, based on one or more characteristics of the golf club being used (e.g., the type of club, the overall club or shaft length, etc.), one or more characteristics of the golfer making the swing (e.g., height, weight, inseam length, fingertip to floor dimension, etc.), and/or one or more characteristics of the shot being hit (e.g., drive, full swing, partial swing, chip, putt, etc.). As further shown inFIG.8, data from the various sensors optionally may be sent to a data collection/recordation device104and from there to a central data hub108for analysis, etc. Alternatively, if desired, the data collection device104may be omitted (or the central hub108may be omitted), and the various sensors may communicate directly with a data analysis location (such as “coach”110, human or virtual). Data may be returned from data collection/recordation hub104and/or the central data hub108, e.g., to provide feedback to the golfer and/or the golfer's coach or trainer (e.g., via output device112, such as a video output, textual output, sensory inducing output (e.g., in a golf club, shoes, apparel, etc., as described above), audio output, etc.). 
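The data path just described (sensors feeding a data collection/recordation device104, which forwards data to the central data hub108) may be sketched, purely for illustration, along the following lines; the class names, payload fields, and JSON encoding are assumptions, and the actual transport (wired, wireless, cellular, etc.) is abstracted away.

# Minimal sketch of the sensor -> collection device -> central hub data path.
# All class, field, and endpoint names are hypothetical; transport details
# (wired, wireless, cellular, etc.) are not modeled.
import json, time

class CollectionDevice:
    """Stands in for data collection/recordation device 104."""
    def __init__(self):
        self.buffer = []

    def record(self, source: str, reading: dict) -> None:
        self.buffer.append({"source": source, "time": time.time(), "data": reading})

    def upload(self, hub) -> None:
        # Could run per swing, periodically during a round, or after the round.
        hub.receive(json.dumps(self.buffer))
        self.buffer.clear()

class CentralHub:
    """Stands in for central data hub 108."""
    def __init__(self):
        self.records = []

    def receive(self, payload: str) -> None:
        self.records.extend(json.loads(payload))

device, hub = CollectionDevice(), CentralHub()
device.record("mat_sensor_806", {"weight_shift": [0.48, 0.52]})
device.record("launch_monitor_810", {"ball_speed_mph": 148.0})
device.upload(hub)
print(len(hub.records))  # -> 2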
As noted above, the type of output provided may vary widely, such as club selection, club fitting or adjustment information; swing tips; training drills; ball selection information; information to adjust the sensors and/or alignment systems; information to operate sensory inducing output devices; etc. FIG.8shows a relatively fixed system800for providing swing dynamics and/or ball launch data to a swing analysis system and/or central data hub. Such fixed systems are not required.FIGS.9A and9Bshow face-on and overhead views of example swing dynamics and/or ball launch data collection systems900, at least some of which are mounted on a golf cart902. The same reference numbers are used inFIGS.9A and9Bto show the same or similar parts as inFIG.8(and other figures), and a lengthy repetitive description of these same or similar parts is omitted. The system900shown inFIGS.9A and9Bis a rear view system that includes some type of alignment device816to help the user align the cart902in the best position for the various sensors to capture the motion of the golfer and/or the launch of the golf ball B, as well as the data generated by these actions. Any type of alignment device816may be used without departing from this invention, including, for example, a light or laser emitting device, a fixed sight on the cart902through which the cart902is aligned with ball and/or with the desired target direction (e.g., like a rifle telescope), etc. This type of system900and method may be used to obtain data corresponding to one or more golf swings taken during an actual round of golf (which may more reliably show the golfer's true swing and tendencies). This system900further may include an incline determining device904used to determine the incline on which the ball/cart902rests and/or the relative position of the ball B with respect to the player's feet (to detect uphill, downhill, or side hill lies, which may affect the ball flight and proper swing) (this information also may be ascertainable from map data, the video camera814, GPS, or other sensors). If necessary or desired, the sensor devices mounted on the cart902(e.g., camera814, ball launch monitor system, etc.) and/or the mounts therefor may include elements that allow for adjustment and/or fine tuning of the alignment, e.g., to allow the sensor devices to be aligned without the need to move the cart itself. Any such local adjustment and/or fine tuning elements may be provided for this purpose, such as levels, sights, or the like, e.g., like those used on a transit device for shooting a grade, slider channels that allow the overall sensor and/or mount therefor to be moved left or right (or up or down) with respect to the cart902, a shaft for rotating the sensor and/or mount, etc.). These local adjustment and/or fine tuning elements are schematically shown inFIGS.9A and9B(and others) by arrows950. Additionally, if desired, the system900may include one or more feedback devices112(e.g., of any of the types described above), e.g., so that the golfer can get swing tips or coaching information (or other desired information) while the round of golf is on-going (e.g., akin to a “playing lesson”). As noted above, the feedback may be from a live person watching the golfer's swing live or automatically/computer generated. The output device112may be mounted on the cart902or carried by the user (e.g., a cellular telephone, a PDA, a golf GPS, or other device). 
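As one hypothetical way of using readings from an incline determining device such as device904 described above, the measured slope of the lie could be mapped to uphill/downhill and ball-above/below-feet labels roughly as follows; the two-angle representation and the dead band value are illustrative assumptions only.

# Minimal sketch of turning incline readings into an uphill/downhill/sidehill
# lie label. The two-angle representation and the 2-degree dead band are
# assumptions for illustration, not a required sensing method.
def classify_lie(slope_toward_target_deg: float, slope_across_stance_deg: float,
                 dead_band_deg: float = 2.0) -> list:
    labels = []
    if slope_toward_target_deg > dead_band_deg:
        labels.append("uphill lie")
    elif slope_toward_target_deg < -dead_band_deg:
        labels.append("downhill lie")
    if slope_across_stance_deg > dead_band_deg:
        labels.append("ball above feet")
    elif slope_across_stance_deg < -dead_band_deg:
        labels.append("ball below feet")
    return labels or ["flat lie"]

print(classify_lie(4.0, -3.0))  # -> ['uphill lie', 'ball below feet']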
WhileFIGS.9A and9Bshow the cart902mounted system equipped to communicate directly with the central data hub108(e.g., via a network connection, such as a cellular telephone or other network), this is not a requirement. If desired, the data from the round may be stored, e.g., at collection and recording device104, for later upload and analysis. Also, whileFIGS.9A and9Bshow a self-propelled cart902, similar hardware and equipment could be provided on a golfer propelled “pull” or “push” type cart (the term “golf cart,” when used generically herein, refers to any of these types of carts). FIGS.10A and10Bshow front and overhead views of example swing dynamics and/or ball launch data collection systems1000similar to those described above in conjunction withFIGS.9A and9B, but in this example, the cart902is positioned to receive data from a “face-on” orientation. The same reference numbers are used inFIGS.10A and10Bto show the same or similar parts as inFIGS.8,9A, and9B(and other figures), and a lengthy repetitive description of these same or similar parts is omitted. In the face-on type system1000ofFIGS.10A and10B, some type of alignment device816is provided to help the user align the cart902in the best position for the launch monitor810and/or other sensors to capture the motion of the golfer and/or the launch of the golf ball B, as well as the data generated by these actions. In this example system1000, the alignment device816may be particularly useful to assure proper positioning of the launch monitor810with respect to the ball B launch location, although other types of alignment information also may be provided (e.g., of the types described above). Systems and methods of the types shown inFIGS.10A and10Bmay operate in the same or similar manners to those described above, e.g., like those described in conjunction withFIGS.8,9A, and9B. FIGS.11A and11Bshow face-on and overhead views of example swing dynamics and/or ball launch data collection systems1100similar to those described above in conjunction withFIGS.9A and9B, but in this example, at least some of the equipment for the system1100is mounted on a golf bag1102(which includes a chamber for containing one or more golf clubs1104). The same reference numbers are used inFIGS.11A and11Bto show the same or similar parts as inFIGS.8,9A, and9B(and other figures), and a lengthy repetitive description of these same or similar parts is omitted. Also, in this example system1100, the feedback device112is provided on an electronic device1104carried by the player (although a feedback device could be provided with the bag1102or with other equipment without departing from this invention). Systems and methods of the types shown inFIGS.11A and11Bmay operate in the same or similar manners to those described above, e.g., like those described in conjunction withFIGS.8,9A, and9B. If desired, the system1100ofFIGS.11A and11Bmay be equipped such that when the golf bag1102is set down, support legs1102aextend outward to support the bag1102. At least some of the sensors (like the video camera814and/or the alignment device816) may be located with respect to the golf bag1102such that placing the bag1102on its supports1102aexposes and/or otherwise places those sensors in a proper position for receiving data (and optionally acts to activate these sensors). This feature also can help repeatably and reliably align and position at least some of the sensors with respect to ground level at the time when data is to be taken. 
If necessary or desired, the sensor devices mounted on the bag1102(e.g., camera814, ball launch monitor system, etc.) and/or the mounts therefor may include elements that allow for adjustment and/or fine tuning of the alignment, e.g., to allow the sensor devices to be aligned without the need to move the bag itself. Any such local adjustment and/or fine tuning elements may be provided for this purpose, such as levels, sights, or the like, e.g., like those used on a transit device for shooting a grade, slider channels that allow the overall sensor and/or mount therefor to be moved left or right (or up or down) with respect to the bag1102, a shaft for rotating the sensor and/or mount, etc.). These local adjustment and/or fine tuning elements are schematically shown inFIGS.11A and11B(and others) by arrows1150. FIGS.12A and12Bshow front and overhead views of example swing dynamics and/or ball launch data collection systems1200similar to those described above in conjunction withFIGS.11A and11B, but in this example, the bag1102is positioned to receive data from a “face-on” orientation. The same reference numbers are used inFIGS.12A and12Bto show the same or similar parts as inFIGS.8through11B(and other figures), and a lengthy repetitive description of these same or similar parts is omitted. In the face-on type system1200ofFIGS.12A and12B, some type of alignment device816is provided to help the user align the bag1102in the best position for the launch monitor810and/or other sensors to capture the motion of the golfer and/or the launch of the golf ball B, as well as the data generated by these actions. Systems and methods of the types shown inFIGS.12A and12Bmay operate in the same or similar manners to those described above, e.g., like those described in conjunction withFIGS.8through11B. FIGS.9A through12Billustrate various systems in which cameras and/or other sensors may be moved on the golf course following the golfer as he/she plays. Alternatively, if desired, the golf course could be equipped with cameras and/or other sensors at various locations around the course (e.g., on poles behind tees, on yardage markers, behind greens, etc.), and at least some of the swing dynamics and/or ball flight information may be provided by such golf course oriented devices. Such a system could allow playback of a round (or portions thereof) to any players, regardless of their relationship to a central golf hub community and/or their desire to obtain swing feedback information. The systems ofFIGS.9A through12Balso may include GPS monitoring capabilities so that the player's location (and optionally shot distance or other information) can be tracked by GPS. FIG.13Aschematically illustrates an example of the type of data that may be generated by foot force sensors (e.g., in shoes400or in a mat802) during the course of a golf swing. In this example, the upper graph inFIG.13Ashows example foot forces exerted by the left foot of a right handed golfer during the course of a golf swing, and the bottom graph shows example foot forces exerted by the right foot of a right handed golfer during the course of the same golf swing. The dashed line represents forces measured by a toe oriented sensor and the solid line represents forces generated by a heel oriented sensor. The foot forces also may be correlated to and displayed to show the timing of various portions of the swing, such as the start of the backswing, the top of the backswing, ball contact, and the end of the swing (follow through). 
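A data record holding the FIG.13A style information (sampled heel and toe forces for each foot, together with timestamps for the start of the backswing, the top of the backswing, ball contact, and the finish) might be organized, for illustration only, as in the following sketch; the sampling rate, field names, and the rear-foot weight-share helper are assumptions.

# Minimal sketch of storing FIG. 13A style data: sampled heel/toe forces for
# each foot plus timestamps for swing phases. Sampling rate, field names, and
# the weight-share summary are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FootForceTrace:
    sample_rate_hz: float
    left_heel: List[float] = field(default_factory=list)
    left_toe: List[float] = field(default_factory=list)
    right_heel: List[float] = field(default_factory=list)
    right_toe: List[float] = field(default_factory=list)
    phase_marks: Dict[str, float] = field(default_factory=dict)  # seconds

    def mark(self, phase: str, t_seconds: float) -> None:
        # e.g. "backswing_start", "top_of_backswing", "ball_contact", "finish"
        self.phase_marks[phase] = t_seconds

    def rear_foot_share_at(self, t_seconds: float) -> float:
        # Fraction of total measured force on the rear (here, right) foot.
        i = int(t_seconds * self.sample_rate_hz)
        right = self.right_heel[i] + self.right_toe[i]
        total = right + self.left_heel[i] + self.left_toe[i]
        return right / total if total else 0.0

trace = FootForceTrace(sample_rate_hz=100.0)
trace.mark("ball_contact", 1.25)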
Foot force information of this type may be useful in systems and methods according to at least some examples of this invention in ascertaining characteristics of the golfer's typical swing (or in ascertaining a golf swing signature or composite golf swing signature for the golfer) or swing tempo. FIG.13Billustrates another example of the type of data that may be generated and/or stored using foot force sensors and/or other sensors during the course of a golf swing in accordance with at least some examples of this invention. In this example, movement of the golfer's center of gravity is tracked throughout a swing, from the swing start (where the center of gravity may be relatively centered), through the backswing (as the center of gravity and the user's weight may tend to shift rearward, predominantly on the rear foot), through the downswing up to ball contact (as the center of gravity and the user's weight tends to shift frontward), and through the follow through (where most (if not all) of the user's weight is on the user's front foot). Representations of the user's feet are shown inFIG.13Bmerely for context (the location of the user's center of gravity during the swing (as shown by the CG line inFIG.13B) will not necessarily correspond to the specific relative body position shown by the representation of the user's feet inFIG.13B). Center of gravity information of this type may be useful in systems and methods according to at least some examples of this invention in ascertaining characteristics of the golfer's typical swing (or in ascertaining a golf swing signature or golf composite swing signature for the golfer) or swing tempo, such as to identify improper weight shifts (a reverse “C,” a casting motion, etc.). Data of the type shown inFIGS.13A and13Balso may be useful in systems and methods according to this invention in other ways as well. For example, as shown inFIG.14, systems and methods according to at least some examples of this invention may provide this type of dynamic foot force information to a golfer (or a golfer's coach) (e.g., weight shift and/or center of pressure on each foot information, center of gravity information, etc.) to allow a comparison of the data generated by that golfer against corresponding foot forces (or other data) generated during a “standard” swing (or target swing), e.g., a swing by a better player. In working with a player, a golf coach might identify one or more other golfers having similar characteristics to the golfer being taught (e.g., using swing data stored in a golf community data hub, as will be described in more detail below). For example, a golf coach may find another player (optionally from stored information in a community hub library) that has similar height, weight, swing tempo, size dimensions (e.g., inseam length, fingertip to ground length, etc.), swing speed, general swing type, etc. If this “other” player has a better swing than the player being taught (e.g., if the other player is a professional, an elite player, a low handicap player, etc.), the player being taught might benefit from making efforts to copy the swing dynamics of this other, better player. 
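One simple, non-limiting way to locate such a comparable but better player from stored community data is sketched below; the attribute list, the handicap cutoff, and the unweighted squared-difference matching are illustrative assumptions rather than any required matching method.

# Minimal sketch of locating a "standard" player: search stored community
# profiles for a lower-handicap golfer whose physical and swing attributes
# are closest to the student's. The attribute list and the unweighted
# distance measure are simplifying assumptions.
def closest_better_player(student: dict, community: list, max_handicap: float = 5.0):
    keys = ("height_cm", "weight_kg", "swing_speed_mph", "tempo_ratio")

    def distance(candidate: dict) -> float:
        return sum((candidate[k] - student[k]) ** 2 for k in keys)

    candidates = [p for p in community if p["handicap"] <= max_handicap]
    return min(candidates, key=distance) if candidates else None

student = {"height_cm": 180, "weight_kg": 82, "swing_speed_mph": 98,
           "tempo_ratio": 3.0, "handicap": 17}
community = [
    {"name": "A", "height_cm": 178, "weight_kg": 80, "swing_speed_mph": 100,
     "tempo_ratio": 3.1, "handicap": 2},
    {"name": "B", "height_cm": 190, "weight_kg": 95, "swing_speed_mph": 115,
     "tempo_ratio": 2.6, "handicap": 1},
]
print(closest_better_player(student, community)["name"])  # -> "A"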
Therefore, if an output device112provides the player being taught with data comparing his or her foot force data or center of gravity motion data (Player A inFIG.14) with this better player's foot force data or center of gravity motion data (the “Standard” data shown inFIG.14), the player being taught can better make efforts to try to mimic the foot force data or center of gravity motion data generated by the better player and/or will better know when they have been successful. Being able to see (through the output device112) when one better mimics the foot and/or weight shift action of the better player, and being able to mentally correlate this improved movement with the “feel” of the swing, will allow the player being taught to better develop muscle memory of the better or improved swing feel. Such comparative data can help the player improve in one or more areas (e.g., hit it longer or straighter, eliminate a hook or slice, develop better ball flight control, improve swing repeatability, etc.). WhileFIGS.13A and14show dynamic foot force data during the golf swing, the same or similar data may be generated for other features of the golf swing, such as hand or arm positioning data (e.g., using a glove or shirt based sensor); shoulder turn or positioning data (e.g., using a shirt based sensor); club or club head positioning, velocity, or acceleration data (using golf club based sensors); grip pressure; center of gravity location data (e.g., as shown inFIG.13B); etc. Such dynamic data may be used in the same or similar manners to the foot force data described above. In addition to the various systems and methods described above, additional aspects of this invention relate to computer-readable media, including computer-executable instructions stored thereon, for operating the various systems, performing the various methods, and/or collecting the various types of data described above. 2. Example Community Data Hub Aspects of this Invention As noted above, various aspects of this invention relate to systems and methods for storing and allowing access to golf data for a community of golfers (also referred to herein as a “central data hub” or simply a “community”). In at least some examples of this invention, the central data hub or golf community allows users (or members) to upload golf data (e.g., data relating to one or more specific rounds of golf, golf swing data, etc.) for storage at a centralized location, and this centralized location may be accessed to provide information back to that golfer, as well as to provide more global information relating to rounds played by plural golfers within the community. The information accessible to others within the community may be filtered or controlled in any desired manner, e.g., to enable access to anyone's data; to enable access to anyone's data but in an anonymous manner; to enable access to designated third party data (e.g., to a sub-group of designated “friends”), optionally, after obtaining both party's consent; to enable access based on skill level; to enable access based on the course(s) played; etc. In some example systems and methods according to this invention, data for an individual and from others may be stored and accessed in a manner similar to the way in which ambulatory activity data is stored and accessible on the NIKE+™ system, commercially available from NIKE, Inc. of Beaverton, Oregon. 
One feature of golf community data hubs in accordance with at least some examples of this invention relates to the ability for users to upload, store, and access golf scoring data for their individual rounds of golf.FIG.15Ashows an example user interface screen1500that a user might see when looking at his/her golf scoring data for a round of golf on his/her computer (e.g., display device122aand/or134ainFIGS.2A and2B). More specifically, as shown inFIG.15A, the user interface1500of this example displays a scorecard1502for a specific, individual round of golf. The interface1500displays various information regarding the specific round, such as the golf course played, the date, hole-by-hole scoring, par information, and hole handicap information. Other data for the course could also be displayed, such as hole yardages (optionally, from the specific tees used for the round), course slope, course rating, etc. This example scorecard1502also displays other scoring information and statistics for this individual player in this round, like fairways hit; whether fairways were missed short, long, left, or right; greens hit in regulation; the number of putts taken per hole; penalty shots assessed; bunker shots taken; total length of putts made over the round; distance to the pin on approach shots; length of approach shots; etc. This input data may be collected and ascertained based on data manually input into the system and/or from data automatically recorded during a round of golf, e.g., using the electronic sensors included with one or more of the clubs, GPS data, etc. This base data also may be used to calculate and display other statistics relevant to the golfer, such as: an average golf score for the golfer on an individual hole over a plurality of times playing the individual hole; an average golf score per round for the golfer over a plurality of rounds of golf; an average number of fairways hit from a tee shot per round by the golfer over a plurality of rounds of golf; an average number of fairways missed left from a tee shot per round by the golfer over a plurality of rounds of golf; an average number of fairways missed right from a tee shot per round by the golfer over a plurality of rounds of golf; an average number of fairways missed short from a tee shot per round by the golfer over a plurality of rounds of golf; an average number of fairways missed long from a tee shot per round by the golfer over a plurality of rounds of golf; an average number of greens hit in regulation per round by the golfer over a plurality of rounds of golf; an average number of putts played per round by the golfer over a plurality of rounds of golf; an average number of putts played per green hit in regulation by the golfer over a plurality of rounds of golf; an average number of sand saves made per round by the golfer over a plurality of rounds of golf; an average number of penalty strokes incurred per round by the golfer over a plurality of rounds of golf; an average number of times making a score of par or better by the golfer when missing a green in regulation over a plurality of rounds of golf; an average total length of putts made over a plurality of rounds of golf; average distance to the pin for various length approaches; distances for each club; the number of times each club was used; etc. 
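As an illustration only, a few of the per-golfer averages listed above could be computed from stored round records along the following lines; the record field names are hypothetical placeholders for whatever the scorecard and sensor data actually capture.

# Minimal sketch of computing a few of the per-golfer averages listed above
# from stored round records. Field names are hypothetical placeholders.
def round_averages(rounds: list) -> dict:
    n = len(rounds)
    return {
        "avg_score": sum(r["score"] for r in rounds) / n,
        "avg_fairways_hit": sum(r["fairways_hit"] for r in rounds) / n,
        "avg_greens_in_regulation": sum(r["gir"] for r in rounds) / n,
        "avg_putts": sum(r["putts"] for r in rounds) / n,
    }

rounds = [
    {"score": 88, "fairways_hit": 7, "gir": 5, "putts": 33},
    {"score": 84, "fairways_hit": 9, "gir": 7, "putts": 31},
]
print(round_averages(rounds))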
Additional statistics of this type may be accessed, for example, by user interaction with the “Last Round” icon1504(to see data (optionally in a comparative manner) for the golfer's last round), the “More Stats” icon1506(to see data (optionally comparative manner) for other rounds by the golfer), and/or “Compare Other Times” icon1508(to see data (optionally in a comparative manner) for the golfer's last time(s) playing this specific golf course) on the interface screen1500. Optionally, if desired, systems and methods according to at least some examples of this invention may receive user input indicating one or more statistics for inclusion in the comparison provided to the output system and/or displayed on the interface screen1500(e.g., by interacting with the “More Stats” icon1506). As another potential option, the output system and/or interface screen1500may provide comparisons of statistical information for a plurality of different golf statistics (optionally user selectable golf statistics). As some more specific examples, any of the various statistics described above (or combination thereof) may be selected by the user and/or displayed on interface screen1500. This type of data also may be submitted to the community data pool to enable additional data calculations, including, for example: an average golf score for some portion of the community of golfers on an individual golf hole; an average golf score for some portion of the community of golfers on an individual golf course; an average number of fairways hit from a tee shot per round by some portion of the community of golfers on an individual golf course; an average number of fairways missed left from a tee shot per round by some portion of the community of golfers on an individual golf course; an average number of fairways missed right from a tee shot per round by some portion of the community of golfers on an individual golf course; an average number of fairways missed short from a tee shot per round by some portion of the community of golfers on an individual golf course; an average number of fairways missed long from a tee shot per round by some portion of the community of golfers on an individual golf course; an average number of greens hit in regulation per round by some portion of the community of golfers on an individual golf course; an average number of putts played per round by some portion of the community of golfers on an individual golf course; an average number of putts played per green hit in regulation by some portion of the community of golfers on an individual golf course; an average number of sand saves made per round by some portion of the community of golfers on an individual golf course; an average number of penalty strokes incurred per round by some portion of the community of golfers on an individual golf course; an average number of times making a score of par or better when missing a green in regulation by some portion of the community of golfers on an individual golf course; an average golf score for some portion of the community of golfers for a round of golf; an average number of fairways hit from a tee shot per round by some portion of the community of golfers; an average number of fairways missed left from a tee shot per round by some portion of the community of golfers; an average number of fairways missed right from a tee shot per round by some portion of the community of golfers; an average number of fairways missed short from a tee shot per round by some portion of the community of golfers; an 
average number of fairways missed long from a tee shot per round by some portion of the community of golfers; an average number of greens hit in regulation per round by some portion of the community of golfers; an average number of putts played per round by some portion of the community of golfers; an average number of putts played per green hit in regulation by some portion of the community of golfers; an average number of sand saves made per round by some portion of the community of golfers; an average number of penalty strokes incurred per round by some portion of the community of golfers; an average number of times making a score of par or better when missing a green in regulation by some portion of the community of golfers; etc. The “portion” of the community for which data may be made available includes, but is not limited to: the entire community (optionally only those giving permission to use their data); a user designated group within the community (e.g., designated “friends”); those with similar handicap or skill levels; those with similar golf swing signatures or composite golf swing signatures (as will be described in more detail below); specified individuals; for rounds on the same course (optionally using the same set of tees); etc. Additional statistics of this type may be accessed, for example, by user interaction with the “View Others” icon1510on the interface screen1500, which may activate a pop-up menu or other interface element to allow the user to further select the type of other data desired, such as data for all players, data for all players on this course, data for all players with a similar handicap, data for all players with a similar handicap on this course, data for a selected group of one or more identified “friends,” data for a selected group of one or more identified “friends” on this golf course, etc. Systems and methods according to examples of this invention may store, track, and maintain data relevant to any desired statistic, like the statistical data tracked for PGA professionals (e.g., like the data or individual statistics compiled by the SHOTLINK® system (SHOTLINK® is a registered trademark owned by the PGA Tour, Inc. of Ponte Verde Beach, FL)). Optionally, systems and methods according to at least some examples of this invention may accept user input, e.g., audio input, video input, picture input, textual input, etc. This input information (e.g., a user's comments) may be linked, for example, to a specific shot, a specific hole, a specific club being used, a specific geographical location (e.g., via the GPS), etc. The user can then access this input at a later time, e.g., when analyzing his/her play, the next time he/she plays the same hole, the next time he/she plays a similar hole, the next time he/she uses that same club, etc. Any desired type of information may be input, such as advice on playing the hole, a reminder of a swing tip for that club, a reminder of an aiming point for the hole, club selection advice, a reminder of previous success on the hole, etc. Optionally, if desired, a player can make his/her comment or other information available to others, e.g., others in the community, other designated “friends,” other subscribers to a service, etc. The interface1500also may allow a user to identify and select the specific round scoring data to be displayed, e.g., by interaction with the “Change Round” icon1512(which may activate a drop-down menu or other interface item from which the user can select the specific round for display). 
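Purely as an illustration of restricting community statistics to a selected "portion" of the community as described above (e.g., designated friends, golfers of similar handicap, rounds on the same course), the following sketch applies such filters before averaging a statistic; the record fields, the three-stroke handicap window, and the statistic name are assumptions.

# Minimal sketch of restricting community data to a "portion" of golfers
# (friends, similar handicap, same course) before averaging a statistic.
# Record fields, the handicap window, and the statistic name are assumptions.
def community_average(records: list, stat: str, course: str = None,
                      handicap_near: float = None, friend_ids: set = None) -> float:
    selected = records
    if course is not None:
        selected = [r for r in selected if r["course"] == course]
    if handicap_near is not None:
        selected = [r for r in selected if abs(r["handicap"] - handicap_near) <= 3]
    if friend_ids is not None:
        selected = [r for r in selected if r["player_id"] in friend_ids]
    values = [r[stat] for r in selected]
    return sum(values) / len(values) if values else float("nan")

records = [
    {"player_id": 1, "course": "Elk Run", "handicap": 12, "putts": 32},
    {"player_id": 2, "course": "Elk Run", "handicap": 20, "putts": 36},
]
print(community_average(records, "putts", course="Elk Run", handicap_near=13))  # -> 32.0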
From the example interface screen1500shown inFIG.15A, users can also activate a more direct comparison of their play, on this individual course, with that of one or more “friends” through interaction with the “View Friend” icon1514. Optionally, initial interaction with icon1514may launch some steps and/or interface elements that allow the user to more specifically identify the friend and/or round of interest for viewing. While interaction with this icon1514may induce many different specific reactions by systems and methods according to this invention, in some examples of this invention a new user interface screen1520like that shown inFIG.15Beventually may be displayed. In this user interface screen1520, two scorecards1522and1524are displayed, one scorecard1522showing the original user's scoring and other data and the other scorecard1524showing the “friend's” scoring and other data on the same course. The scoring data for the two parties may have been for concurrent play of the course or for play at different, separate times. This display screen1520allows easy comparison of the two player's rounds, although other ways of displaying the data to allow an easy comparison may be used without departing from this invention. FIGS.15A and15Billustrate another example feature that may be included in systems and methods according to at least some examples of this invention, namely, the “Virtual Play” features (through interaction with icon1526). The Virtual Play icon1526may be used to launch an animation of the play of one or more players over a round of golf.FIG.15Cshows an example animation display screen1540on which the shots from Player A and Friend B are displayed over a map or animation of the course (or portion thereof). By accepting data from golf clubs, player mounted sensors, GPS or other individual sensors for individual shots in a round (e.g., in a manner as described in U.S. Patent Publication No. 2009/0209358 to Niegowski), systems and methods according to at least some examples of this invention can “play back” the rounds, e.g., on a hole-by-hole manner, as shown inFIG.15C. If desired, the play back also may display or include other information, such as the distance of each shot, the club used for each shot, long drive contests, closest to the pin contests, other statistical contests, individual hole score per player, running leader board score for the players, challenge information, any desired statistical information for the players (or others), score against handicap, etc. If necessary or desired, the interface launching the virtual play features also may allow user selection of the specific rounds for the virtual play, e.g., it could automatically use the rounds displayed on interface screen1520or it may allow user selection of other round(s) as well (e.g., if launched from interface screen1500). While the play back may include static or dynamic representations of the various shots that each player took as shown generally inFIG.15C, if desired, the play back also may include display of the shots in a “video game” like manner. More specifically, if desired, the play back may include an avatar or other graphical representation of each individual player shown taking golf swings (optionally on a facsimile representation of the specific golf course being played), so that the play back appears similar to a video game presentation of golf. Various video game type representations of golf shots and rounds of golf are commonly known and commercially available. 
This feature may allow users to virtually play golf with one another, optionally in an interactive or collaborative setting (e.g., at discrete separate locations, such as by using WebEx® conferencing or collaboration software systems and methods (available from Cisco Systems, Inc.) or other similar collaboration software systems and methods). Additionally or alternatively, if desired, an individual could play a virtual animation for one round on a golf course against themselves in an earlier round played on the same course. As another potential alternative, if desired, systems and methods according to at least some examples of this invention may provide at least some of the virtual play feedback (e.g., like that shown inFIG.15C) overlaid on satellite images of the golf course (e.g., from a third party source, like Google Earth) and/or using video images of the golfer's actual play on the course (e.g., if the golf course is equipped with video cameras or the player's play is otherwise video recorded, such as by using the video recording systems described above). As another potential option, the virtual playback may use animation for showing much of the golfer's play, but an icon1542or other indicator could be provided for golf shots where actual video of the player's shot has been recorded and is available for playback. The ability to "annotate" one's round with one's own comments as described above (e.g., on specific shots or specific locations), via audio, video, textual, or pictorial information, may be very useful in this virtual playback or analysis environment. The community aspects of this invention may allow other types of interaction between members of the community, at least in some example systems and methods according to this invention. For example, as shown inFIG.15A, some example systems and methods may allow users to interact with one another to set up "challenges" for themselves or one another (e.g., via the "Create Challenge" icon1528). A user may decide to create a challenge for themselves and/or others through the community data hub system, and systems and methods according to at least some examples of this invention may display information relating to this challenge to all involved at appropriate times (e.g., before or during a round, while the user is on-line with the community data hub, when approaching a specific hole, etc.).FIG.16shows an example arrangement in which a user is playing golf (e.g., using a golf GPS device or an electronic scorecard device1600), and as the user approaches a certain hole (e.g., as determined by the GPS device or electronic scorecard1600), a previously downloaded or newly acquired challenge from a friend is displayed (in challenge display area1602). The party playing golf may have already been advised of the challenge, or its display during the round may be the player's first indication of the challenge's existence. Any desired user interface element(s) may be provided (e.g., in a community data hub generated interface screen) to enable the friend (or other person) to create the challenge. While not necessary, this illustrated example system allows the user to electronically indicate his or her acceptance of the challenge, which may be used, if desired, to trigger systems and methods according to at least some examples of this invention to advise the friend that the challenge has been accepted (optionally, in real time) and/or to advise the friend of the results of the challenge.
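The FIG.16 style trigger (displaying a stored challenge as the golfer approaches a particular hole, as determined by GPS) could be approximated, for illustration only, by a simple proximity check such as the following; the 40-meter radius and the coordinate values are hypothetical.

# Minimal sketch of a FIG. 16 style trigger: show a stored challenge when GPS
# places the golfer within some radius of the relevant tee. The 40 m radius
# and the coordinate values are illustrative assumptions.
import math

def distance_m(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance in meters.
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def challenges_to_show(position, challenges, radius_m=40.0):
    lat, lon = position
    return [c for c in challenges
            if distance_m(lat, lon, c["tee_lat"], c["tee_lon"]) <= radius_m]

challenges = [{"text": "Friend B: beat 4 on this hole!",
               "tee_lat": 45.5231, "tee_lon": -122.6765}]
print(challenges_to_show((45.52312, -122.67653), challenges))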
Various types of challenges may be made without departing from this invention, and, if desired, users may be allowed to create and develop their own parameters to a challenge. Examples of such challenges include, but are not limited to: one or more of the following: a longest drive contest (overall, on a specific hole, average, etc.); a best 9 hole gross score (optionally, on a specific course); a best 18 hole gross score (optionally, on a specific course); a best 9 hole net score to handicap (optionally, on a specific course); a best 18 hole net score to handicap (optionally, on a specific course); a best score on an individual hole; most rounds played within a predetermined time period; most different golf courses played (optionally within a predetermined time period); lowest handicap by a specified date; greatest improvement in handicap over a prescribed time or number of rounds; a race to a predetermined number of rounds played; a race to a specific statistical level of any desired golf statistic (e.g., longest average drive, fewest number of putts, longest made putt, longest total putt lengths made over a round of golf, etc.); most pars or birdies; and greatest improvement in a specified golf statistic over a prescribed time period or number of rounds. While the creation of challenges is described above with respect toFIGS.15A and16, other types of interactions and messages are possible between community members (or other community users) without departing from this invention. For example, as shown inFIG.17, in systems1700and methods according to at least some examples of this invention, users could arrange to send a congratulatory message to a friend (e.g., see message box1702) when the friend achieves some predetermined scoring feat, such as making a birdie or eagle, making a hole in one, making a predetermined scoring goal for 9 or 18 holes, making a sand save, making a long putt, making a long drive, hitting a green from more than a predetermined distance, etc. As another potential option, users could send messages to “trash talk” or to otherwise chide a player when achieving a bad outcome (e.g., like making an “8” or more on a hole, hitting the ball out of bounds, three- or four-putting, etc.). Such interactive communications, particularly if taking place in real time, as the round is being played, may make the round feel more like one is playing with his/her friends. Optionally, if desired, such systems and methods may allow a user to send a reply, such as via email, text message, telephone, etc., optionally, while the round is taking place (e.g., via a user interface or other user input devices provided on the electronic device1700, such as a soft or hard keyboard, etc.). Messages of encouragement or support are not limited to those input by or generated by “friends” within the community. Rather, because the community data hub of systems and methods according to at least some examples of this invention may store data for one or more of the individual player's rounds, it could be programmed and adapted to provide encouragement and support to the golfer as his/her round progresses. For example, as shown inFIG.18A, if the system1800and method determines that a user has a good round going (at least for their typical play or handicap level), it could be programmed and adapted to send a message of encouragement or support at an appropriate time in the round (see dialog box1802). 
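One non-limiting way the system might decide that a golfer has a "good round going" relative to his or her typical play is to compare the running score against that golfer's historical average through the same number of holes, as in the sketch below; the two-stroke margin is an illustrative assumption.

# Minimal sketch of the FIG. 18A idea: flag a "good round going" by comparing
# the running score against the golfer's historical average through the same
# number of holes. The two-stroke margin is an illustrative assumption.
def good_round_going(hole_scores: list, historical_avg_per_hole: float,
                     margin_strokes: float = 2.0) -> bool:
    holes_played = len(hole_scores)
    expected = historical_avg_per_hole * holes_played
    return sum(hole_scores) <= expected - margin_strokes

# A golfer averaging ~5.0 strokes per hole is 4 strokes under that pace after 9 holes.
print(good_round_going([4, 5, 4, 5, 6, 4, 5, 4, 4], 5.0))  # -> True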
As another example, as shown inFIG.18B, the system could automatically compare the user's round against rounds of a friend on the same course and provide information to the user about their friend's round (see dialog box1804). Challenges and/or congratulatory messages also could be automatically generated, e.g. for any of the various scoring feats described above in conjunction withFIGS.16and17. If desired, users may be given the opportunity to control the type and extent to which messages from a friend and/or automatically generated system messages are presented during a round of golf (e.g., some golfers may prefer not to know where they stand and/or may prefer not to receive this type of information during a round, to avoid putting added pressure on themselves). Community data features of this invention may provide or enable additional features in systems and methods according to at least some examples of this invention, an example of which is illustrated inFIG.19. As will be described in more detail below, systems and methods according to at least some examples of this invention may ascertain and store information regarding a typical ball flight and/or composite golf swing signature information for an individual user, as well as other data, such as typical distances for various clubs, hit quality for various clubs, etc. This type of stored information may be used to provide more golfer specific feedback and data to the golfer as a round is being played.FIG.19shows an example system1900in which a player is given the option to obtain additional “hole information” before or while a hole is played (see dialog box1902). A positive response to this inquiry may launch a display (see dialog box1904) including a “tip” for playing the hole from the community or another. Rather than a generic tip from a professional or the course designer, however, this tip may be derived from stored information in the community hub. Any desired criteria may be used to determine the source for the tip information included in dialog box1904. For example, the information could originate from: another player of similar skill level (similar handicap) that previously played the course; another lower handicap player that tends to hit his/her ball a similar distance to the player using system1900; another player having similar swing speed as the player using system1900; another player having a similar or the same composite golf swing signature as the player using system1900; another player having a similar typical ball flight as the player using system1900; the player using the system1900(e.g., from information downloaded relating to a previous time playing that hole); etc. Systems and methods according to at least some examples of this invention may be designed to allow players to insert their own tips or comments that can be replayed at future times when the hole is played. As another example, if desired, any advice provided (from any source) may take into account, at least in part, the player's previous history on this specific hole (or other similar holes). As another potential feature, if desired, when playing a new or relatively unfamiliar course, systems and methods according to the invention could advise the player when a new hole they are playing has similar properties or features (e.g., yardage, dogleg features, bunkers or other hazard features, etc.) 
to holes they play more regularly on other courses (e.g., on their home course) and/or provide advice based on this similarity (e.g., aiming points or directions, club selection advice, hole strategy, etc.). If desired, systems and methods according to at least some examples of this invention may provide the user with suggested clubs for use for various shots on the course, taking into consideration the locations of and distances from hazards, pin location, the player's average or typical distance for each club, the player's typical ball flight pattern or composite golf swing signature, the player's typical “miss” or poor shot results with this club and/or from this distance, etc. Aiming points or other suggestions or tips also could be provided. In addition to the various systems and methods described above, additional aspects of this invention relate to computer-readable media, including computer-executable instructions stored thereon, for operating the various systems, performing the various methods, and/or presenting the various user interface displays described above, including these features in an individual system or a community setting. 3. Collection and Storage of Swing Dynamics, Ball Flight, Golf Swing Signature, and Composite Golf Swing Signature Data Aspects of this Invention As noted above, various aspects of this invention relate to collection and storage of swing dynamics information (e.g., weight shift, club position, body position, club motion, etc.) and/or ball flight information (e.g., launch monitor type data), optionally, at least partially at a community data hub. Additional aspects and features of the collection, storage, and use of this data will be described in more detail below. To provide individualized feedback information (such as equipment selection recommendations, equipment adjustment recommendations, swing tips, coaching drills, and the like), systems and methods according to at least some examples of this invention will collect, store, and use golf swing dynamics information, ball flight information, and optionally other information for one or more golf swings made by a player.FIG.20illustrates example steps involved in one potential data collection method2000according to this invention (at least some of the steps identified inFIG.20may be performed using a computer system, such as a personal computer system or other systems of the types described above, and the data may be collected at a hitting bay, as golf is being played, at a sales location, or at another appropriate time, as described above). As a first step S2002, user identification and other data is collected (such as user height, weight, inseam length, fingertip-to-floor length, handicap, current club set information, etc.), e.g., input into a computer system using conventional user input devices, such as a keyboard, mouse, data download, etc. Then, the user makes one or more golf swings (S2004) during at least some of which golf swing dynamics information and/or ball flight information is collected (Steps S2006and S2008, respectively). At appropriate times (e.g., after each swing, as a larger bulk data upload, etc.), the swing dynamics and ball flight data for at least some of the swings may be uploaded to a central golf data hub (S2010). Optionally, if desired, not all swing data need be uploaded. 
For example, uploaded data could be limited to that for use of certain clubs, for certain user (or coach) selected shots, etc., e.g., to avoid excessive data transfer (and to allow exclusion of certain clearly “bad” data, such as data relating to clearly mishit shots). In some systems and methods according to this invention, swing data for one or more individual swings of a golfer (e.g., swing dynamics information, ball launch information, etc.) will be compared against similar swing data for others in the golf community (S2012) in an effort to locate a “match” or “category” for the golfer's swing with respect to one or more other member(s) of the community (S2014). Depending on the type of output to be generated, the “community of golfers” available for this comparison may be limited to golfers having low handicaps, good scoring capability, recent improvements in handicap or average scoring, etc., so that the feedback sent to the golfer (S2016) relates to information derived from a high quality player. Alternatively, the “community” available for the comparisons at Steps S2012and S2014may involve all members of the overall community so that this current golfer may be matched with others in the community of similar skills and/or swing types. As noted above, any desired output may be generated and/or provided to the golfer (or others) at Step S2016, including, for example, audio, video, textual, or other output (e.g., on a display device); sensory change inducing output (e.g., in shoes or other apparel, in the golf club, in sound produced during a swing, etc.); etc. The output also may include any desired content, such as club or ball fitting information; club or ball selection information; club parameter adjustment information (e.g., changes to face angle, lie angle, loft angle, shaft flex characteristics, etc.); swing tips; swing drills or other coaching information; comparative information regarding the user's swing data and one or more other player's swing data (or the user's own swing data); etc. Collecting and storing a large volume of data for several individual user swings (e.g., complete swing dynamics data and/or ball launch data) may tend to cause data overload, causing some systems and methods to operate slowly or inefficiently. Therefore, systems and methods according to at least some examples of this invention may use data representing a “composite golf swing signature” for one or more of: individual swings (optionally, on an individual club or club type basis), individual players (optionally, on an individual club or club type basis), groups of swings (by one or more players), and/or for groups of players.FIG.21shows example steps in a method2100of collecting information for determining a composite golf swing signature. The initial steps of this method may be the same as or similar to those described above for the method ofFIG.20, so these method steps are labeled using the same step labels as inFIG.20. Once golf swing dynamics, ball launch, and/or other data for one or more golf swings is sent to the central data hub at Step S2010, systems and methods according to this example of the invention may process and analyze the data to develop a “composite golf swing signature” for the input data (Step S2112). 
Although it will be discussed in more detail below, a composite golf swing signature in accordance with at least some examples of this invention may simply represent or indicate various general characteristics or tendencies of the player's swing and/or the ball flight resulting from the swing. The various composite golf swing signatures may include individual identifications and/or individual swings that fall into one or more of the following categories: (a) slicer, low swing speed; (b) slicer, moderate swing speed; (c) slicer, high swing speed; (d) slicer, very high swing speed; (e) fader, low swing speed; (f) fader, moderate swing speed; (g) fader, high swing speed; (h) fader, very high swing speed; (i) drawer, low swing speed; (j) drawer, moderate swing speed; (k) drawer, high swing speed; (l) drawer, very high swing speed; (m) hooker, low swing speed; (n) hooker, moderate swing speed; (o) hooker, high swing speed; (p) hooker, very high swing speed; (q) straight, low swing speed; (r) straight, moderate swing speed; (s) straight, high swing speed; (t) straight, very high swing speed; (u) club “caster” with low swing speed; (v) club “caster” with moderate swing speed; (w) club “caster” with high swing speed; (x) club “caster” with very high swing speed; etc. These (and/or other) categories may be used as composite golf swing signatures in at least some systems and methods according to this invention. Once the user's composite golf swing signature has been determined (Step S2112), systems and methods according to at least some examples of this invention will provide output to the user based on the determined composite golf swing signature (Step S2114). This output may be of any of the various types described above for Step S2016, including, for example: audio, video, textual, or other output (e.g., on a display device); sensory change inducing output (e.g., in shoes or other apparel, in the golf club, in sound produced during a swing, etc.); etc. The output also may include any desired content, such as club or ball fitting information; club or ball selection information; club parameter adjustment information; swing tips; swing drills or other coaching information; comparative information regarding the user's swing data and one or more other player's swing data; etc. In some example systems and methods according to this invention, the central data hub108may store appropriate output information for users, e.g., a library of swing tips, drills, club parameters, ball parameters, and the like, correlated to the available composite golf swing signatures. Additionally, any output provided may take into account existing player information, such as existing club parameters of the player's current club set, current club adjustment information or settings, etc. As a more specific example, from the golfer identification data, golf swing signature information, and/or composite golf swing signature information, driver club head parameters (such as loft, lie, and face angles; shaft flex characteristics; etc.) may be known. Because these parameters are known, any recommendations for adjustment of these parameters may take into account the existing settings (e.g., the output may recommend changing the club from the current 1° open face angle to a 2° open face angle (or the like)). 
In this manner, systems and methods according to the invention can avoid suggesting unrealistic, undesirable, or impossible club settings, such as suggestions to set the club face to an extreme open face position (e.g., greater than 2° or 2.5°). Rather, if the existing club is already at a relatively extreme setting, systems and methods according to the invention may predominantly provide output in the form of swing tips, drills, or the like in an effort to correct or improve the player's swing path, rather than attempting to make ball flight corrections based on the club head parameter settings. Also, as the user's swing improves (as measured by the golf swing signatures and/or ball flight data), systems and methods could automatically provide suggestions for continuing changes to the club parameter settings. Users could also provide input to the system indicating a preference for obtaining advice in the form of swing tips or drills to improve their swing as opposed to changes in the club characteristics (or vice versa). As additional potential examples, if desired, systems and methods according to at least some examples of this invention may be consulted by users prior to a round of golf. For example, the user could input information regarding an approaching round or information may be obtained from another source (such as the course to be played, the tee marker set to be played, the yardage(s), the expected temperature range, the expected wind conditions (e.g., strength, direction, etc.)), and systems and methods according to at least some examples of this invention could provide club set recommendations for the approaching round. As some more specific examples, if desired, based on the information input for the approaching round, systems and methods according to at least some examples of this invention may provide recommendations for driver settings (e.g., to bias for high or low ball flight, to bias for right-to-left or left-to-right ball flight, etc.), recommendations for specific clubs to carry (e.g., swap out one or more long irons for more hybrids or vice versa, swap out a long iron or hybrid for a higher lofted wedge, etc.), recommendations for a specific ball, recommendations for specific apparel, etc. Such recommendations also may take into account: (a) recent weather at the course location (e.g., extreme dry, wet, wind, wind direction, etc.); (b) weather predictions (available from public sources over the Internet, etc.); (c) hole set up information (e.g., yardages of individual holes, predominant dogleg directions, most preferable ball flight directions for individual holes (e.g., left-to-right or right-to-left)); (d) typical hazard locations, positions, and/or types (e.g., sand, water, out-of-bounds, etc.); (e) number and/or lengths of forced carries; (f) fairway widths at typical drive distance range; (g) severity of rough; (h) presence or absence of large spans of desert or waste bunkers; (i) prevailing (or predicted) wind direction and/or speed on specific holes; etc. The recommendations also may take into account the user's past performance history, such as distances each club is typically hit, composite golf swing signature information, past performances on this course, past performances on similar holes or courses, etc., as well as play data from other users within the community that have played the course (e.g., those of similar handicap, those with the same or similar composite golf swing signatures, etc.).
Such pre-round recommendation information may be particularly useful when playing a new or unfamiliar course. While the above descriptions ofFIGS.20and21involve the use of a central golf hub for data storage and processing, transmission of data to this type of hub is not a requirement in all systems and methods according to this invention. Rather, if desired, the collection, storage, comparisons, and output may be generated by stand-alone computer systems, optionally using golf community or composite golf swing signature data stored locally or downloadable to a client computer or other device. FIGS.22through24show various types of information and/or data structures that may be used for various features of this invention as described above. While the description below includes various data fields and/or groupings of data, those skilled in the art will recognize that other data field structures and/or groupings of data may be used without departing from this invention. FIG.22shows data and/or information (and an optional data structure2200) that may be correlated and/or used for storing information relating to an individual golf swing. Optionally, this data or information (or any desired portions thereof) may constitute a “golf swing signature” for an individual golf swing. The data for this individual swing may include one or more of the following categories of data: (a) player identification information; (b) individual swing identification information; (c) club specification information; (d) swing dynamics information; (e) ball specification information; (f) ball flight/launch information; (g) player handicap information; and (h) general swing type classification information. The above noted general categories of information may include additional data fields (or links to data) that include more detailed information. For example, the “Player ID” data may include one or more of: player's name or other identifier; player's height; player's inseam length; player's fingertip-to-ground dimension; etc. The “club specification” data may include one or more of the following types of information for the club used for this individual swing: club manufacturer, club model, club type (e.g., driver, hybrid, iron, putter, etc.), loft angle, lie angle, face angle, shaft length, shaft type or material, shaft flex, shaft kickpoint location, etc. The “ball specification” data may include one or more of the following types of information for the ball hit during this individual swing: ball manufacturer, ball model, ball compression, and ball construction (e.g., one-piece, two-piece, three-piece, four-piece, five-piece, wound, etc.). The “swing dynamics” information may include any of the golf swing dynamics data mentioned above, such as one or more of the following types of information for the specific swing: right foot dynamic force data, left foot dynamic force data, dynamic club position data (including face orientation data), dynamic club velocity or acceleration data (including angular velocity, yaw, attitude, etc.), body position (hand, torso, shoulder, etc.) data, swing tempo data (e.g., backswing time/down swing time, etc.), swing speed, club path or face angle at ball contact (e.g., square, inside-to-outside, outside-to-inside), swing video data, or a player's self evaluation of the swing or swing contact (e.g., terrible contact=0, best contact=10), etc. 
The ball flight or launch data may include any of the ball flight data mentioned above, such as one or more of the following types of information for the specific swing (which may be measured, calculated, or estimated): launch angle, launch speed, launch direction, launch spin, carry distance, roll distance, overall distance, deviation from center or desired line, apex height, apex distance, impact/descent angle, ball flight type (e.g., hook, slice, draw, fade, straight, etc.), impact audio, smash factor, etc. The “general swing type classification” information may be determined from the swing data (e.g., by computer or human analysis of the swing dynamics and/or ball flight data or from viewing the ball flight), e.g., by the swing analysis system. Additional data may be included in any of the noted categories and/or any desired amount of data and/or combination of data may be included and stored (and may constitute a golf swing signature for an individual swing). FIG.23shows data structures and categorizations2300that may be used for providing and using composite golf swing signature information. As noted above, using data compiled by a golf community system and method, various general categories of golf swings may be ascertained and maintained as “composite golf swing signatures.” As examples, these general categories of swings may include: (a) slicer, low swing speed; (b) slicer, moderate swing speed; (c) slicer, high swing speed; (d) slicer, very high swing speed; (e) fader, low swing speed; (f) fader, moderate swing speed; (g) fader, high swing speed; (h) fader, very high swing speed; (i) drawer, low swing speed; (j) drawer, moderate swing speed; (k) drawer, high swing speed; (l) drawer, very high swing speed; (m) hooker, low swing speed; (n) hooker, moderate swing speed; (o) hooker, high swing speed; (p) hooker, very high swing speed; (q) straight, low swing speed; (r) straight, moderate swing speed; (s) straight, high swing speed; (t) straight, very high swing speed; (u) club “caster” with low swing speed; (v) club “caster” with moderate swing speed; (w) club “caster” with high swing speed; (x) club “caster” with very high swing speed; these general categories further broken up by handicap ranges; etc. For each category (e.g., for each noted composite golf swing signature), systems and methods according to at least some examples of this invention may store one or more of the following: (a) swing speed range, (b) ball flight type characterization, (c) handicap range, (d) suggested equipment or equipment setting information, (f) swing tips, (g) practice drills, (h) identification of individual players within the community identified as possessing this golf swing signature, and/or (i) club identifier or club type (e.g., driver, fairway wood, hybrid, long iron, short iron, wedge, etc.) information, etc. Of course, additional, different, or other information also may be associated with composite golf swing signatures without departing from this invention. The data for features such as “suggested equipment” or “equipment setting” information, “swing tips,” and/or “practice drills” may include data that may be accessed by systems and methods according to this invention in order to provide feedback to individual golfers determined as corresponding to that composite golf swing signature. 
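To make the groupings of FIGS. 22 and 23 more concrete, the sketch below shows one possible in-memory representation of a composite golf swing signature record and of the feedback payload assembled from it. The field names, types, and example values are assumptions chosen to mirror the categories listed above; they are not a schema defined by this disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Illustrative record type for a composite golf swing signature (cf. FIG. 23).
# Field names are assumptions chosen to mirror the categories described in the text.

@dataclass
class CompositeSwingSignature:
    signature_id: str                          # e.g. "slicer_moderate"
    ball_flight_type: str                      # slice, fade, draw, hook, straight, cast
    swing_speed_range_mph: Tuple[float, float]
    handicap_range: Tuple[int, int]
    suggested_equipment: List[str] = field(default_factory=list)
    swing_tips: List[str] = field(default_factory=list)
    practice_drills: List[str] = field(default_factory=list)
    member_player_ids: List[str] = field(default_factory=list)
    club_type: str = "driver"

def feedback_for(signature: CompositeSwingSignature) -> dict:
    """Assemble the output payload sent back to a golfer matched to this signature."""
    return {
        "equipment": signature.suggested_equipment,
        "tips": signature.swing_tips,
        "drills": signature.practice_drills,
    }

# Example usage with placeholder values.
slicer_moderate = CompositeSwingSignature(
    signature_id="slicer_moderate",
    ball_flight_type="slice",
    swing_speed_range_mph=(80.0, 95.0),
    handicap_range=(15, 25),
    suggested_equipment=["driver with 1-2 degree closed face bias", "higher-spin ball"],
    swing_tips=["strengthen grip slightly"],
    practice_drills=["inside-out path drill with alignment sticks"],
)
print(feedback_for(slicer_moderate))
```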
For example, for individuals identified as having a specific composite golf swing signature, certain ball, shaft flex, and/or club head specifications might, on average, produce better results, and systems and methods according to at least some examples of this invention could use this type of data structure or data correlation to associate certain equipment specifications to the composite golf swing signature for the purpose of making recommendations. Also, for a specific composite golf swing signature, certain swing tips or practice drills may be useful to enable the player to improve his or her swing, and systems and methods according to at least some examples of this invention could use this type of data structure or data correlation to associate certain tips or drills to the composite golf swing signature for the purpose of making recommendations. Such systems and methods also could suggest changes in apparel, shoes, clubs, club parameter settings, etc. These arrangements make it easy for systems and methods to provide appropriate output information back to the users (e.g., video of tips or drills and the like; pictures and diagrams of better positioning or posture (e.g., body position, club position, club path, etc.)). FIG.24shows an example data structure (or correlation of data)2400that may be used to store information regarding individual golfers within a community in accordance with at least some examples of this invention. As an example, golf swings of each golfer that joins the golf community data hub may be measured and/or the golfer may otherwise provide information of this type to enable his/her interaction with and use of the community systems and methods. As a more specific example, as shown inFIG.24, the data stored and accessible for each golfer in the community may include one or more of: (a) player identification information; (b) player current handicap information; (c) a start date (e.g., when the user joined the community); (d) one or more composite golf swing signature data sets (e.g., for one or more clubs, determined as described above), which optionally may include information regarding a current composite golf swing signature identification, one or more previous composite golf swing signature identifications, a record of changes to the composite golf swing signature identification, specific club information, etc.; (e) individual golf swing signature data for that user (or links to the data relating thereto, such as links to the data illustrated inFIG.22); (f) current golf equipment data (optionally including identification of the clubs typically carried by the user (e.g., see the club specification data inFIG.22), the average distance the user carries or hits that club (optionally limited to full swings and/or swings with acceptable ball contact), ball identification or specifications, etc.); (g) golf equipment change data (optionally including the change date, the old equipment that was changed out, the new equipment brought in, the change in average score (or other relevant statistic, such as driving distance, fairways hit, number of putts, length of putts made per round, etc.) since that change occurred, the change in handicap since the change occurred, etc.); and (h) scoring data per round played (e.g., for use in interfaces like those shown inFIGS.15A through15C). Equipment change data and information may be used in various ways in systems and methods according to examples of this invention. 
For example, on an individual level, it might be useful for a player (or coach) to know and understand how club or ball changes have affected the player's score (or other relevant statistical data), so they can determine whether an equipment change has had a positive or negative impact. From a more community oriented mindset, this type of equipment change information may be made searchable on systems and methods according to at least some examples of this invention so that one user who is considering a new equipment purchase can determine the practical impact that the same or a similar equipment change has had on other players in the community (optionally, other players with the same composite swing type or other similar swing characteristics, other players at the same general handicap or skill level, etc.). As another option, this type of equipment change data may be automatically accessed by the system, e.g., when providing output information to an individual golfer using the system (e.g., to provide equipment recommendations). As noted above, systems and methods according to at least some examples of this invention may store current golf equipment data for an individual player, optionally including identification of the clubs typically carried by the user and the actual and/or average distance the user carries or hits that club. If desired, the distance information may be stored in a date stamped manner so that users could obtain information regarding the manner in which their performance with the club has changed over time (e.g., improvement in distance of the driver this year v. last year, improvement over my last15rounds, etc.). Any desired statistics of this type (e.g., flight type, etc.) could be time stamped to allow the player to ascertain his or her changes in performance over time. This information may help the player evaluate the effectiveness of lessons, swing changes, equipment changes, and the like. Collection, storage, access to, and use of this body of swing data, including the swing dynamics, ball flight, golf swing signature, and composite golf swing signature information, may have many potential and apparent benefits for players, coaches, and others that use it. As some examples, the data may help one develop a better swing, select equipment and/or equipment parameters that best suit their swings, evaluate swing and/or equipment changes, and/or better understand where their game needs improvement (or how they could most effectively use their practice time to lower their scores). Benefits of at least some aspects of this invention, however, are not limited to those committed to long term use and analysis of the data. As another potential use, aspects of this invention could be used to provide a “quick” club (or other golf equipment) fitting station, e.g., at golf stores, pro shops, and/or other sellers of golf equipment. For example, some customers (for various reasons) may prefer not to take the time or subject themselves to a complete golf club or equipment fitting session. Nonetheless, by taking a sufficient number of swings to enable creation of a composite golf swing signature for that individual (e.g., one or more swings), the individual can benefit from the stored community data, e.g., by obtaining equipment recommendations based on the determined composite swing type (e.g., using the “suggested equipment” field ofFIG.23) and/or swing tips or coaching information correlated to that composite golf swing signature. 
Optionally, this feature of the invention may operate with the input of certain other data by the user, like handicap, age, typical ball flight, typical 9 or 18 hole score, etc. On the other end of the spectrum, the data collection, analysis, community, and/or golf swing signature aspects of systems and methods according to the invention could be used for a very involved “super fitting” or even in a “golf school” type session. If desired, a long (even multi-day) fitting or swing school session could be developed in which swing data for many (or even all) clubs may be collected for an individual, optionally both in a hitting bay, while playing, etc. The player may discuss at least some of the data with a coach or other professional provided to assist the user and evaluate his/her swing. This extensive collection of data may be used to select, fit, adjust, and fine tune the specifications of all the golf equipment used or purchased by a player, to best fit him or her to their equipment, as well as to help the user develop and ingrain the feel for a better swing. Such data collection and processing systems also may be useful in various manufacturer's golf club fitting stations, systems, and methods, including, for example, the NIKE 360° Custom Fitting™ systems (available through NIKE, Inc. of Beaverton, OR). As noted above, systems and methods according to aspects of this invention rely on data transmissions and communications between various devices. Any desired types of communications are possible without departing from this invention, including infrared transmissions, Bluetooth transmissions, cellular telephone or other radio communications, hard wired connections, networked connections, etc. Appropriate communications and transmission equipment and/or protocols may be provided and used for each portion of the transmission, and such communications and transmission equipment may be readily selected and configured by those skilled in the art. CONCLUSION Of course, many modifications to the golf swing analysis systems and/or methods may be made without departing from the invention. For example, the data collected, its use, and/or its presentation to the users may vary widely without departing from this invention. With respect to the methods, additional steps may be added, various described steps may be omitted, the content of the steps may be changed and/or changed in order, and the like, without departing from the invention. Therefore, while the invention has been described with respect to specific examples including presently preferred modes of carrying out the invention, those skilled in the art will appreciate that there are numerous variations and permutations of the above described structures and methods. Thus, the spirit and scope of the invention should be construed broadly as set forth in the appended claims.
11857837
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
While the features, devices, methods, and systems described herein may be embodied in various forms, the drawings show and the specification describe certain exemplary and non-limiting embodiments. Not all of the components shown in the drawings and described in the specification may be required, and certain implementations may include additional, different, or fewer components. Variations in the arrangement and type of the components; the shapes, sizes, and materials of the components; and the manners of connections of the components may be made without departing from the spirit or scope of the claims. Unless otherwise indicated, any directions referred to in the specification reflect the orientations of the components shown in the corresponding drawings and do not limit the scope of the present disclosure. Further, terms that refer to mounting methods, such as mounted, attached, connected, coupled, and the like, are not intended to be limited to direct mounting methods but should be interpreted broadly to include indirect and operably mounted, attached, connected, coupled, and like mounting methods. This specification is intended to be taken as a whole and interpreted in accordance with the principles of the present disclosure and as understood by one of ordinary skill in the art. Various embodiments of the devices, methods, and systems disclosed herein include an instrumented resistance exercise device for recording, transmitting, and analyzing a force profile generated by a patient performing a variety of resistance exercises. Examples disclosed herein support remote clinical monitoring of patients via a body area network or home area network configured for use with a health monitoring system. More specifically, the remote clinical data collection and monitoring system includes, in part, the instrumented resistance exercise device provided to patients for exercise and rehabilitation use outside of a clinic or hospital setting. For example, patients may be taught to properly perform exercises using the instrumented resistance exercise device when they see their physician, physical therapist or other health care provider in a controlled clinical or laboratory setting. The patients may be provided an instrumented resistance exercise device to take with them to use while performing a prescribed exercise routine at home or other such non-controlled setting (e.g., outside of a clinic). While home-based exercise programs consisting of resistance exercises may be prescribed and encouraged by health care providers, there is no easy way for the physician, physical therapist, or other such health care provider to monitor the patient's progress. Furthermore, it is difficult for patients and health care providers to keep track of performed exercise data such as exercise duration and repetition frequency. Such data may be useful to monitor and evaluate the progress and efficacy of such home-based exercise programs. Thus, various embodiments disclosed herein may help address prior limitations of home-based exercise programs. 
Data collected by the instrumented resistance exercise device may enable health care providers to: review daily physical therapy and/or resistance exercise activity performed remotely; remotely receive data and analyze the quality of exercises performed during physical therapy and/or resistance exercise activity; individualize and/or tailor a physical therapy and/or resistance exercise plan to better meet patient needs; provide encouragement to patients to stay on track with prescribed physical therapy and/or resistance exercise regimens; and provide feedback to patients should exercise performance goals not be met. As used herein, to “tether” refers to enabling a mobile device to communicatively couple with a short-range communication device to send and/or receive data and instructions between the mobile device and the short-range communication device. For example, a mobile device is tethered to a force sensing assembly of an instrumented resistance exercise device via wireless communication between the force sensing assembly and the mobile device. In such examples, the force sensing assembly may send and receive data and other such instructions to/from the mobile device using wireless communication technology such as Bluetooth® Low Energy (BLE), WiFi®, Ultra-Wide Band (UWB), or other such communication protocol. As used herein, a “resistance device app” and a “resistance device application” refer to a process of interacting with an instrumented resistance exercise device that is executed on a mobile device, a desktop computer, and/or within an Internet browser of a health care provider, a patient, or other such user of the instrumented resistance exercise device. For example, a resistance device application includes a mobile app that is configured to operate on a mobile device (e.g., a smart watch, a smart phone, a tablet computer, a wearable smart device, etc.), a desktop application that is configured to operate on a desktop computer or a laptop computer, and/or a web application that is configured to operate within an Internet browser (e.g., a mobile-friendly website configured to be presented via a touchscreen or other user interface of a mobile device or desktop computer). As used herein, a “network” and a “body area network” refer to a wired and/or wireless communication connection between components and devices of an instrumented resistance exercise device and a remote clinical data collection and monitoring system. For example, a short-range wireless communication device, a mobile device, a desktop computer, a remote data server and/or other such device are configured to operate within the body area network. As such, the short-range wireless communication device, the mobile device, the desktop computer, the remote data server, and/or other such device are configured to send and receive data and other such communicated information between one another using the body area network. Turning to the figures,FIGS.1,2,3,4A,4B,8A,8B, and8Cillustrate one exemplary instrumented resistance exercise device100. In this illustrated example, the instrumented resistance exercise device100includes: a resistance band120; a first handle130suitably attached to the resistance band120; a second handle140suitably attached to the resistance band120; and a force sensing assembly150suitably attached to the resistance band120and first handle130. 
While the illustrated examples show the force sensing assembly150connected to the first handle130, it should be appreciated that the force sensing assembly150may alternatively be attached to the second handle140. Furthermore, in various embodiments the instrumented resistance exercise device100may include a plurality of force sensing assemblies150, with at least one force sensing assembly150attached to each of the first handle130and the second handle140. In the illustrated example, the resistance band120includes a first end122and a second end124. As such, the first handle130is suitably attached to the resistance band120at the first end122. Furthermore, the second handle140is suitably attached to the resistance band120at the second end124. The first handle130includes a handle grip member132defined on a handle attachment member134. In various embodiments, the handle attachment member134is configured to attach or otherwise connect the first handle130to the first end122of the resistance band120. The second handle140includes a handle grip member142defined on a handle attachment member144. In various embodiments, the handle attachment member144is configured to attach or otherwise connect the second handle140to the second end124of the resistance band120. Accordingly, a patient or other such user may hold onto the handle grip members132and142while performing exercises that use the resistance band120. In certain embodiments, the resistance band120of the instrumented resistance exercise device100is a length of tubing, such as elastic tubing. In such embodiments, the instrumented resistance exercise device100includes a plurality of different resistance bands120associated with different levels of resistance (e.g., lower resistance or greater resistance). The specific resistance band120to attach to the first and second handles130and140may be selected based on a desired amount of resistance while performing exercises. In certain other embodiments, for any of the aspects described herein, the resistance band120is a flat band. In the illustrated example, the force sensing assembly150is attached to the first end122of the resistance band120and positioned on or adjacent to the handle attachment member134of the first handle130. As such, the force sensing assembly150is configured to monitor, detect, and measure a force applied between the first end122of the resistance band120and the first handle130when the instrumented resistance exercise device100is in use. In certain embodiments, the force sensing assembly150includes a force sensing device151; and a processing and communication module158communicatively coupled to the force sensing device151. 
As best illustrated inFIGS.2to4B, the force sensing device151includes: a top force plate152; a bottom force plate154axially spaced apart from the top force plate152; a force transducer or force sensor156attached to a top surface of the bottom force plate154, the force sensor156communicatively coupled to the processing and communication module158; a first standoff160aattached to the top surface of the bottom force plate154; a second standoff160battached to the top surface of the bottom force plate154; a third standoff160cattached to the top surface of the bottom force plate154; a first fastener162aextending through a first fastening aperture163adefined in the top force plate152and aligning with a corresponding first fastening aperture164adefined in the bottom force plate154; a second fastener162bextending through a second fastening aperture163bdefined in the top force plate152and aligning with a corresponding second fastening aperture164bdefined in the bottom force plate154; a third fastener162cextending through a third fastening aperture163cdefined in the top force plate152and aligning with a corresponding third fastening aperture164cdefined in the bottom force plate154; a top resistance band pass-through165defined in the top force plate152; and a bottom resistance band pass-through166defined in the bottom force plate154. In the illustrated example, the standoffs160a,160b, and160cdefine an axial separation distance between the top and bottom force plates152and154. As such, the standoffs160a,160b, and160care securely attached to the top surface of the bottom force plate154. The top and bottom force plates152and154are positioned with respect to one another such that the top force plate152is separated from the bottom force plate154via the standoffs160a,160b, and160c. In the illustrated example, the fasteners162a,162b, and162care threaded fasteners such as screws. It will be appreciated that other types of fasteners are possible. The top and bottom force plates152and154are positioned such that the top fastener apertures163a,163b, and163care in axial alignment with the corresponding bottom fastener apertures164a,164b, and164c. The fasteners162a,162b, and162cextend through the top fastener apertures163a,163b, and163cand are threaded into the corresponding bottom fastener apertures164a,164b, and164c. As a result, a bottom surface of the top force plate152directly contacts a top portion of each standoff160a,160b, and160cto define or otherwise set the axial separation distance between the top and bottom force plates152and154. In certain embodiments, the force sensor156is attached to or otherwise mounted on the top surface of the bottom force plate154. Additionally, at least two shims168band168care attached to or otherwise mounted on the top surface of the bottom force plate154. In the illustrated example, the first standoff160ais positioned on top of the force sensor156and securely attached to the top surface of the bottom force plate154. The second standoff160band third standoff160care positioned on top of shims168band168c, respectively. The second and third standoffs160band160care securely attached to the top surface of the bottom force plate154. In various embodiments, the shims168band168care configured with a thickness equal to the thickness of the force sensor156such that the standoffs160a,160b, and160cdefine a uniform separation distance between the top and bottom force plates152and154. 
As a result, when the top force plate152contacts each of the standoffs160a,160b, and160c, any force (e.g., downward acting force) acting on the top force plate152will be equally distributed between the standoffs160a,160b, and160c. Furthermore, the force acting on the top force plate152will be directed to a sensing area of the force sensor156via the standoffs160a,160b, and160c. In certain embodiments, the first end122of the resistance band120extends through the bottom resistance band pass-through166defined in the bottom force plate154and the top resistance band pass-through165defined in the top force plate152. A plug member167is inserted into the first end122of the resistance band120. Furthermore, the plug member167has a larger diameter than a diameter of the bottom resistance band pass-through166and the top resistance band pass-through165to keep the resistance band120from slipping out of the resistance band pass-throughs165and166. Thus, the plug member167helps maintain a desired position of the force sensing assembly150between the first end122of the resistance band120and the handle attachment member134of the first handle130. In certain embodiments, during use of the instrumented resistance exercise device100, the plug member167engages with the top surface of the top force plate152as the patient pulls the first handle130during exercise. As such, the force generated by engagement between the plug member167and the top force plate152is directed onto the sensing area of the force sensor156. The profile of this generated force is captured and/or otherwise recorded by the force sensor156of the force sensing assembly150. In certain embodiments, the force sensor156includes a small-form factor force sensor, such as a force-sensitive resistor. It will be appreciated that alternative force transducers and sensors may be used. In certain embodiments, the force sensor156further includes a guide element configured to ensure that force is evenly concentrated and/or distributed on the face of the force sensor156. In certain embodiments, the force sensor156is configured to concentrate a known portion of the force, such as ⅓ of the force, onto the sensing area of the force sensor156. In certain embodiments, the small-form factor force sensor156includes a load-bearing standoff, two force plates and a guide structure to ensure that force is evenly concentrated and/or distributed on the face of the force sensor156. In another such embodiment, the force sensing assembly150includes an alternative exemplary force sensing device251; and the processing and communication module158communicatively coupled to the force sensing device251. As best illustrated inFIGS.1,8A,8B, and8C, the force sensing device251includes: a fixed member252suitably connected to the handle attachment member134and the first end122of the resistance band120; and a resistance measurement device254suitably connected to the first end122of the resistance band120. The resistance measurement device254is further communicatively coupled to the processing and communication module158via a connector257. In the illustrated example, the fixed member252is a rigid planar member secured to the first handle130. The fixed member252includes a pass-through (not shown) that enables the resistance band120to extend through the fixed member252. The first end122of the resistance band120is secured to the fixed member252and the first handle130via the plug member167. 
In certain embodiments, the resistance measurement device254is connected to and supported by the fixed member252such that the resistance measurement device254is held in place during use of the instrumented resistance exercise device100. In the illustrated example, the resistance measurement device254is configured as a strain sensor, an elongation sensor, a potentiometer, or other such resistance measurement device. The resistance measurement device254includes a wiper member258suitably attached to the first end122of the resistance band120. The wiper member258moves or translates relative to the main housing portion of the resistance measurement device254. Thus, when the resistance band120is stretched, the wiper member258translates from a first position260to a second position262. Accordingly, the resistance measured by the resistance measurement device254changes as the wiper member258translates between the first and second positions260and262, as shown by arrow Δ1. In certain embodiments, this resistance change correlates or otherwise relates to the force applied to the resistance band120of the instrumented resistance exercise device100. Certain aspects and embodiments of the force sensing assembly150disclosed herein provide particular advantages. For example, the small form factor of the force sensor156, the resistance measurement device254, the processing and communication module158, and other components of the force sensing assembly150prevents interference with normal resistance exercise protocols by maintaining nearly normal weight profiles and by allowing full range of motion of the first and second handles130and140of the instrumented resistance exercise device100. Additionally, the small form factor of the force sensor156, the resistance measurement device254, the processing and communication module158, and other components of the force sensing assembly150allows for more efficient connection to the resistance band120. Furthermore, the instrumented resistance exercise device100need not store all of the collected data on-board a local memory device coupled to the microprocessor. Rather, the force sensing assembly150is configured to wirelessly transmit data for storage on a local data-receiving device (e.g., a mobile device170) and/or network. In certain embodiments, the force sensing assembly150includes the processing and communication module158that is communicatively coupled with the force sensor156and/or the resistance measurement device254via connector157and/or connector257. The force sensor156and/or the resistance measurement device254are configured to collect force data during use of the instrumented resistance exercise device100. The processing and communication module158is configured to receive the force data from the force sensor156and/or the resistance measurement device254and communicate this collected data to a mobile device170(e.g., the local data receiving device) or other such electronic device. In certain embodiments (best illustrated inFIGS.2,5,8A and8B), the processing and communication module158includes: a housing180removably connected to the handle attachment member134; a microcontroller182disposed within the housing180; a communication module184(e.g., short-range wireless communication module) disposed within the housing180and communicatively coupled to the microcontroller182; and a power source186(e.g., rechargeable battery) disposed within the housing180and connected to the microcontroller182and the communication module184. 
The power source186is configured to provide power to the microcontroller182and the communication module184during use of the instrumented resistance exercise device100. As such, various embodiments of the processing and communication module158further include a power switch188that turns the power source186on and off. In certain embodiments, the housing180may be removable from the attachment member134such that the processing and communication module158enclosed in the housing180can travel with a patient and be used with different instrumented resistance exercise devices100(e.g., devices having differing resistances of the resistance band120). In certain embodiments, the housing180includes one or more ports for connecting external components. For example, the housing180may include a port for charging the power source186. Furthermore, the housing180may have a port for connecting the force sensor156to the processing and communication module158. As a result, data collected by the force sensor156is transferred to the processing and communication module158. In certain embodiments, the force sensing assembly150includes a user interface (e.g., one or more push buttons, indicator lights, etc.) that the patient or other such user of the instrumented resistance exercise device100may use to control or otherwise monitor the force sensing assembly150. In some such embodiments, the force sensing assembly150is configured to provide visual, audio, and/or tactile feedback during use of the instrumented resistance exercise device100. For example, the user interface of the force sensing assembly150may include LED lights for visual feedback. In certain embodiments, the processing and communication module158is configured to tune or otherwise calibrate the force sensor156and/or the resistance measurement device254. As such, the user of the instrumented resistance exercise device100may use the user interface to calibrate the force sensor156and/or the resistance measurement device254of the force sensing assembly150. In certain embodiments, the instrumented resistance exercise device100includes a common housing280, shown in dashed lines and best seen inFIG.8C, or other such case configured to house components of the force sensing assembly150. As such, the force sensing device251and the processing and communication module158are enclosed within the common housing280. In certain embodiments, the common housing280attaches securely to the handle attachment member134or other such portion of the first handle130. Furthermore, the housing280may be removable such that the force sensing assembly150enclosed in the housing can travel with a patient and be used with different instrumented resistance exercise devices100(e.g., devices having differing resistances of the resistance band120). It will be appreciated that the common housing280may be similarly configured to enclose the force sensing device151and the processing and communication module158. In the illustrated example, the microcontroller182includes a processor183or other such processing device such as, but not limited to, a microprocessor, a microcontroller-based platform, an integrated circuit, one or more field programmable gate arrays (FPGAs), and/or one or more application-specific integrated circuits (ASICs). The microcontroller182further includes a memory device185configured to store data and other such information used by the microcontroller182. 
The memory device185may be volatile memory (e.g., RAM including non-volatile RAM, magnetic RAM, ferroelectric RAM, etc.), non-volatile memory (e.g., disk memory, FLASH memory, EPROMs, EEPROMs, memristor-based non-volatile memory, solid-state memory, etc.), unalterable memory (e.g., EPROMs), read-only memory, and/or high-capacity storage devices (e.g., hard drives, solid state drives, etc.). In various embodiments, the microcontroller182further includes an analog to digital converter (ADC) configured to convert an analog signal to a digital signal. For example, the force sensor156may output a voltage signal or other such analog signal in response to the amount of force directed to the sensing area of the force sensor156. The microcontroller182ADC converts the analog signal (e.g., voltage) to a digital signal that can be analyzed by the microcontroller182. Additionally, the processing and communication module158may transmit the digital signal to another computing device for further analysis. In various embodiments, the memory device185is computer readable media on which one or more sets of instructions, such as the logic or software for operating the methods of the present disclosure, can be embedded. For example, the instructions reside completely, or at least partially, within any one or more of the memory device185, the computer readable medium, and/or within the processor183during execution of the instructions. In the illustrated example, the communication module184is configured to communicatively connect the processing and communication module158to the mobile device170(e.g., a smart watch, a smart phone, a tablet computer, a laptop computer, any other such mobile device, and/or combinations thereof) of the patient or user of the instrumented resistance exercise device100. Accordingly, the communication module184includes hardware and firmware to establish a wireless connection between the processing and communication module158and the mobile device170. For example, the communication module184is a wireless personal area network (WPAN) module that wirelessly communicates with the mobile device170via short-range wireless communication protocols. In various embodiments, the communication module184implements the Classic Bluetooth®, Bluetooth®, and/or Bluetooth® Low Energy (BLE) protocols. Additionally, or alternatively, the communication module184is configured to wirelessly communicate via WiFi®, WiFi® low power, Near Field Communication (NFC), Ultra-Wide Band (UWB), and/or any other short-range and/or local wireless communication protocol (e.g., IEEE 802.11 a/b/g/n/ac) that enables the communication module184to communicatively couple to the mobile device170. FIG.5illustrates one exemplary remote clinical data collection and monitoring system200which incorporates the instrumented resistance exercise device100discussed above and illustrated inFIGS.1to4B, and8A to8C. More specifically, the remote clinical monitoring system200includes: the force sensing assembly150configured to collect force and other such data during use of the instrumented resistance exercise device100; the mobile device170communicatively coupled with the force sensing assembly150; a remote data server210communicatively coupled with the mobile device170via a network220; and a remote clinician device230communicatively coupled with the mobile device170and the remote data server210via the network220. 
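As a concrete illustration of the data path just described, from the force sensor and ADC through the communication module to the mobile device, the following Python sketch subscribes to BLE notifications using the open-source bleak library and converts each sample to a force value. The device address, characteristic UUID, packet format, and 12-bit/3.3 V ADC scaling are assumptions not specified in this disclosure; the voltage-to-force conversion reuses the linear fit reported in Example 2 below.

```python
import asyncio
import struct

from bleak import BleakClient  # cross-platform BLE central library

# Hypothetical identifiers: the actual address and characteristic UUID of the
# force sensing assembly's communication module are not specified in this disclosure.
DEVICE_ADDRESS = "AA:BB:CC:DD:EE:FF"
FORCE_CHAR_UUID = "0000ffe1-0000-1000-8000-00805f9b34fb"

def on_force_sample(_sender, data: bytearray) -> None:
    """Handle one BLE notification: assume a little-endian uint16 ADC reading."""
    (raw,) = struct.unpack("<H", data[:2])
    volts = raw / 4095 * 3.3            # assumed 12-bit ADC with a 3.3 V reference
    force_lb = 5.73 * volts - 3.99      # linear fit reported in Example 2 below
    print(f"raw={raw} volts={volts:.3f} force={force_lb:.2f} lb")

async def stream_force(seconds: float = 30.0) -> None:
    """Tether to the force sensing assembly and stream force samples for one session."""
    async with BleakClient(DEVICE_ADDRESS) as client:
        await client.start_notify(FORCE_CHAR_UUID, on_force_sample)
        await asyncio.sleep(seconds)    # collect notifications for the session
        await client.stop_notify(FORCE_CHAR_UUID)

if __name__ == "__main__":
    asyncio.run(stream_force())
```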
In various embodiments, the remote clinical data collection and monitoring system200is configured to collect data from a patient or other such user of the instrumented resistance exercise device100. More specifically, the force sensing assembly150is configured to measure and record the force profile of the patient performing exercises using the instrumented resistance exercise device100. The force sensing assembly150, via the processing and communication module158, is communicatively coupled or otherwise tethered to the patient's mobile device170(e.g., local data receiving device). In various embodiments, the processing and communication module158is configured to transmit the force profile collected by the force sensing device151,251to the mobile device170using BLE or other such short-range wireless communication protocol. In various embodiments, the mobile device170is configured with or otherwise includes a resistance device application or other such software associated with the instrumented resistance exercise device100. In such embodiments, the patient may activate the resistance device application on the mobile device170(e.g., smart watch, smart phone, or other such mobile device) before starting an exercise session with the instrumented resistance exercise device100. Once activated, the resistance device application initiates a tethering sequence between the mobile device170and the force sensing assembly150. The mobile device170will display a connection confirmation to the patient indicating that the mobile device170and force sensing assembly150are tethered or otherwise communicatively coupled. Furthermore, the resistance device application displays a variety of exercises for the patient to perform (e.g., elbow flexion, shoulder lift, seated row, and triceps extension) and an option for different resistance bands to be used with the instrumented resistance exercise device100(e.g., bands having different levels of resistance). The resistance device application enables the patient to select the specific exercise and resistance band used for the current exercise session. Once the patient enters the proper selections, the patient starts performing the exercise with the instrumented resistance exercise device100. The force sensing assembly150sends the collected force data to the mobile device170. In various embodiments, the resistance device application displays the data as it is received from the force sensing assembly150. Additionally, once the patient completes the exercise, the resistance device application may display an exercise summary to the patient. Furthermore, when the exercise is complete, the patient can transmit the data from the mobile device170to the remote data server210or other such storage location via the network220. In various embodiments, the remote clinical data collection and monitoring system200establishes a secure data communication pathway that enables processing of data collected by the force sensing assembly150into meaningful outputs for the patient (e.g., user of instrumented resistance exercise device100) and/or clinician (e.g., physician, physical therapist, or other such health care provider). The data collected by the force sensing assembly150of the instrumented resistance exercise device100can be analyzed to provide exercise performance information to the patient and/or clinician such as, but not limited to, maximum force output, number of repetitions completed, and time to complete exercise or repetition. 
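One way to compute the performance metrics just listed (maximum force output, number of repetitions completed, and time to complete the exercise) from a sampled force trace is sketched below in Python. The moving-average smoothing and scipy find_peaks call stand in for the LOESS-plus-slope-change repetition test described in Example 3; the sampling rate and minimum peak threshold are assumed values rather than parameters given by this disclosure.

```python
import numpy as np
from scipy.signal import find_peaks

def session_metrics(force_lb: np.ndarray, sample_rate_hz: float = 10.0,
                    min_peak_force_lb: float = 1.0) -> dict:
    """Summarize one exercise session from a sampled force trace.

    find_peaks stands in for the slope-change repetition test described in
    Example 3; the sampling rate and minimum-peak threshold are assumptions.
    """
    # Light smoothing with a moving average before peak detection.
    window = max(1, int(sample_rate_hz // 2))
    kernel = np.ones(window) / window
    smoothed = np.convolve(force_lb, kernel, mode="same")

    # Each repetition is treated as one peak above the minimum force threshold.
    peaks, _ = find_peaks(smoothed, height=min_peak_force_lb,
                          distance=int(sample_rate_hz))  # at least 1 s between reps
    return {
        "max_force_lb": float(np.max(force_lb)) if force_lb.size else 0.0,
        "repetitions": int(peaks.size),
        "duration_s": float(force_lb.size / sample_rate_hz),
    }

if __name__ == "__main__":
    t = np.arange(0, 30, 0.1)                                  # 30 s at 10 Hz
    demo = 2.0 * np.clip(np.sin(2 * np.pi * t / 3), 0, None)   # ten simulated reps
    print(session_metrics(demo))
```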
Additionally, the data collected by the force sensing assembly150may be analyzed to determine strength data based on the force data generated during exercise. The remote clinical data collection and monitoring system200can also provide more complex data analysis such as feedback on the shape of the force curves as they pertain to proper exercise form. In various embodiments, the collected and analyzed data can be displayed via a visual display of the mobile device170and/or the remote clinician device230(e.g., smart phone display, smart watch display, tablet computer display, laptop computer display, network terminal display, etc.). Furthermore, the collected data may be periodically offloaded to the remote data server210or other such network-based repository (e.g., secure website, secure cloud-based storage, etc.) so that the clinician can review the information asynchronously. In various embodiments, the remote data collection and monitoring system200may be configured to enable the clinician to send feedback to the patient based on the collected data analysis via the network220. For example, the clinician may transmit a message about exercise performance to a specific user's mobile device170, and reminders to perform exercise can also be delivered via the data-receiving device. Furthermore, the remote data collection and monitoring system200may be configured to send audio and/or tactile feedback (like vibrations) to the mobile device170to provide real-time feedback to the patient. In various embodiments, the remote clinical data collection and monitoring system200enables the clinician to review useful summarized metrics about patient performance on exercises over time (e.g., daily, weekly, monthly, etc.). Such review may be implemented via, for example, a secure webpage accessed over the network220. In some such embodiments, viewing features include the ability to review a single exercise session or daily/weekly/monthly exercise summaries broken down by a specific exercise or series of exercises. A clinician dashboard displayed by the remote clinician device230can allow for the review of multiple patients, as well as enable the clinician to send direct feedback to the patient's mobile device170. In various embodiments, use of the remote data collection and monitoring system200by a clinician or other health care provider and a patient involves one or more of the following: (1) a health care provider, such as a clinician (physician or physical therapist), recommending and teaching an exercise or set of exercises; (2) the patient obtaining an instrumented resistance exercise device100and a data-receiving device configured to receive resistance exercise data such as mobile device170; (3) the patient performing the resistance exercises in an unsupervised setting using the instrumented resistance exercise device100; (4) the instrumented resistance exercise device100transmitting resistance exercise data to the mobile device170; (5) the mobile device170transmitting the received data via the network220to the remote data server210; and/or (6) the patient and/or clinician reviewing the data periodically and using that information to generate and/or update healthcare plans. In various embodiments, the remote data collection and monitoring system200enables the clinician to remotely monitor progress of a patient treatment plan. 
Such remote monitoring capabilities enable the clinician and/or other health care provider to: (1) review daily physical therapy activity via, for example, a wireless, Bluetooth modality; (2) evaluate the quality of the exercises performed; (3) allow individualization and tailoring of a fitness and strength training plan to better meet the patient's needs; (4) give/receive encouragement to stay on track with the patient's exercise regimen(s); and (5) encourage the patient to push themselves should treatment and/or progress goals not be met. In various embodiments, the remote data collection and monitoring system200enables the clinician to monitor and tailor the patient's treatment plan while the patient performs the exercises in a remote (e.g., home-based, or otherwise unsupervised) setting. These advantages are particularly useful in rural and/or remote regions that have broadband or cellular access, enabling transmission of data to distant healthcare settings. In certain embodiments, the remote data collection and monitoring system200further includes one or more additional sensors configured to collect physiological and/or environmental data. For example, the mobile device170incorporated with the remote data collection and monitoring system200may include a temperature sensor, a light sensor, an optical sensor or other such sensor for measuring heart rate, blood oxygen saturation, or other such physiological information. In certain embodiments, the mobile device170or other such local data-receiving device communicatively coupled with the force sensing assembly150is configured to wirelessly receive, and optionally store, resistance exercise data from the instrumented resistance exercise device100. In some such embodiments, the mobile device170is a smart watch, a smart phone, or other such smart mobile device. Furthermore, the mobile device170is a BLE-enabled device configured to send data to and receive data from the force sensing assembly150. In certain embodiments, the mobile device170is further configured to collect and/or wirelessly receive additional physiological and/or environmental data related to the patient's health. In certain embodiments, the mobile device170is configured to transmit resistance exercise data to a remote data server210via a network220. A health care provider may access the resistance exercise data on the remote data server210via the network220and provide feedback to the patient via the network220. Such feedback may include recommended adjustments to the exercise regimen, including, but not limited to, the duration, extent, and/or repetition of the patient's exercise regimen. Accordingly, the device, methods and systems described herein may be implemented over or as part of a body area health network. In some such embodiments, resistance exercise data can be combined with other information, such as other physiological data or environmental data. In some such embodiments, resistance exercise data, and, optionally, the other information, is accessible to a health care provider, for example by using wireless, real-time data communication to transmit the data to the health care provider's network. In some such embodiments, a health care provider can review resistance exercise data, and, optionally, the other information, and subsequently provide feedback remotely, for example to the patient's local data-receiving device. 
EXAMPLES
Example 1.
Bench Top Tests
Weights of 0.2, 5, 10 and 15 lb were applied to the force sensing assembly150. The force data collected by the force sensor156was sent to the processing and communication module158. The received data was wirelessly transmitted via BLE to the mobile device170communicatively coupled to the force sensing assembly150. The resistance device application or software executed on the mobile device170enabled real-time display of collected force data. Mean±standard deviation and variability (%) of the collected force data were measured and analyzed.

TABLE 1
Repeatability of Sensor
Weight (lb)    Trial 1    Trial 2    Trial 3    Mean    Std Dev    Δ (%)
0.25           0.5        0.62       0.65       0.59    0.08       23.1
5.25           1.64       1.88       2.08       1.87    0.22       21.2
10.25          2.37       2.61       2.81       2.60    0.22       15.7
15.25          3          3.2        3.24       3.15    0.13       7.4

The response of the force sensor156was found to be repeatable and linear.
Example 2. Lateral Raises
Twenty-six (26) healthy young adults performed 10 lateral raises using the instrumented resistance exercise device100. Force data collected by the force sensing assembly150was transmitted via BLE from the force sensing assembly150to the mobile device170. The mobile device170subsequently transmitted the data to investigators (e.g., clinicians) for further analysis. FIG.6shows an exemplary data output from the force sensing assembly150. In all cases, each repetition of the performed exercise was clearly visible. Raw data was converted to voltage (V) on the left axis and force (lb) on the right axis. Force conversion was performed using the following equation: force=5.73×voltage−3.99. The force conversion is a direct result of the linear fit of the sensor determined from the sensor repeatability shown in Table 1. Elongation data (length of the resistance band at maximum stretch) was converted to theoretical force using the resistance band linear fit conversion. The average percent difference between predicted forces from elongation and converted forces from real data is nearly 67%±79.3%.
Example 3. Data Analysis Using Peak Detection Algorithm
Participants performed four different exercises selected to promote muscle strengthening: elbow flexion; shoulder lift; seated rows; and triceps extension. Each exercise was carried out in succession with time for rest. Data collected by the force sensing assembly150was examined with a peak detection algorithm that determined a number of repetitions of each exercise performed.FIG.7illustrates one example set of data collected by the force sensing assembly150and analyzed by the peak detection algorithm. The algorithm defined each repetition as a peak of the relative force exerted on the resistance band120and measured by the force sensing assembly150. The algorithm used in the data analysis included: a local regression (LOESS) smoothing with a second-degree polynomial term; calculating the change in slope between each reading; and removing any peaks that did not exceed a pre-defined minimum value. The total number of repetitions performed for each exercise was determined from the sum of the number of peaks identified for that exercise. Noise from the collected data was assessed as the ratio of the LOESS smoothed to recorded value, and the signal-to-noise ratio (SNR) was subsequently assessed. In view of this disclosure it is noted that the methods and apparatus can be implemented in keeping with the present teachings. Further, the various components, materials, structures and parameters are included by way of illustration and example only and not in any limiting sense. 
In view of this disclosure, the present teachings can be implemented in other applications and components, materials, structures and equipment to implement these applications can be determined, while remaining within the scope of the appended claims. In this application, the use of the disjunctive is intended to include the conjunctive. The use of definite or indefinite articles is not intended to indicate cardinality. In particular, a reference to “the” object or “a” and “an” object is intended to denote also one of a possible plurality of such objects. Further, the conjunction “or” may be used to convey features that are simultaneously present instead of mutually exclusive alternatives. In other words, the conjunction “or” should be understood to include “and/or.” The terms “includes,” “including,” and “include” are inclusive and have the same scope as “comprises,” “comprising,” and “comprise” respectively. Unless otherwise indicated, the terms “first”, “second”, “third”, and other ordinal numbers are used herein to distinguish different elements of the present apparatus and methods, and are not intended to supply a numerical limit. For instance, reference to first and second openings should not be interpreted to mean that the apparatus only has two openings. An apparatus having first and second elements can also include a third, a fourth, a fifth, and so on, unless otherwise indicated. The above-described embodiments, and particularly any “preferred” embodiments, are possible examples of implementations and merely set forth for a clear understanding of the principles of the invention. Many variations and modifications may be made to the above-described embodiment(s) without substantially departing from the spirit and principles of the techniques described herein. All modifications are intended to be included herein within the scope of this disclosure and protected by the following claims.
42,182
11857838
DETAILED DESCRIPTION OF ILLUSTRATED EMBODIMENTS

In order to explain in detail the technical content, structural features, and the purposes and effects achieved by the present invention, the embodiments are described in detail below in combination with the attached drawings. Referring toFIG.1, the present invention provides a method for assessing exercise fatigue, and the method includes the following steps: S1, collecting a user's real-time exercise heart rates, and calculating a chronic training load (CTL) for recent N days of the user and an acute training load (ATL) for recent M days based on the exercise heart rates and an exercise load computation model. Preferably, N is much greater than M; more preferably, the value of N is 42, and the value of M is 7, for example. If the currently recorded exercise time is less than 42 days or 7 days, an average of the exercise load of the longest exercise time currently recorded is calculated. S2, obtaining a training stress balance (TSB) by calculating a difference between the CTL and the ATL, namely, TSB=CTL−ATL. In such a way, the exercise load that the user's current body functions can bear may be accurately assessed. S3, defining different fatigue levels, including Level A, Level B, Level C, . . . and so on, for example. In this embodiment, the fatigue levels may be energetic, appropriate, greater, and excessive. S4, based on the value of the TSB, determining the fatigue level of the user. That is, the larger the value of the TSB is, the higher the fatigue level is (namely, the better the current physical condition is, and the lower the fatigue degree is); on the contrary, the smaller the value of the TSB is, the lower the fatigue level is (namely, the worse the current physical condition is, the higher the fatigue degree is). As is known, a user's physical function is affected by the intensity of long-term exercise: the greater the intensity of long-term exercise, the greater the exercise intensity that can be withstood during the current exercise. Therefore, the user's physical function condition can be characterized by the TSB, that is, the difference between the CTL and the ATL. The fatigue level is determined based on the TSB in the above embodiment, so that the determined fatigue level is closer to the user's current physical function condition. In addition, in the calculation process of the TSB, the CTL and the ATL are obtained through a preset exercise load computation model based on real-time exercise heart rates. Therefore, the TSB is a completely objective parameter, so that the determination of the fatigue level is not affected by subjective factors, thereby effectively improving the objectivity and accuracy of exercise fatigue assessment. In a preferable embodiment, multiple intervals are divided using the CTL as the standard parameter. Specifically, if the value of the TSB is within the interval [0.1CTL, +∞), the fatigue level is determined to be energetic; if the value of the TSB is within the interval [−0.4CTL, 0.1CTL), the fatigue level is determined to be appropriate; if the value of the TSB is within the interval [−0.7CTL, −0.4CTL), the fatigue level is determined to be greater; and if the value of the TSB is within the interval (−∞, −0.7CTL), the fatigue level is determined to be excessive. In the present invention, the exercise load computation model may apply a common exercise load computation model or other models. Preferably, the exercise load computation model is $TR=\sum_{t=1}^{T} B \cdot C \cdot T_K$, wherein TR denotes an exercise load.
T denotes the user's continuous exercise time for each exercise session, B=(exercise heart rate−resting heart rate)/(maximum heart rate−resting heart rate), $C=P_1 \cdot e^{P_2 \cdot B}$, P1 is a constant between 0.1 and 0.5, P2 is a constant between 2.5 and 7, and TK is a temperature influence coefficient obtained by querying a temperature influence coefficient table (as shown in Table 1) recording multiple influence coefficients of different temperatures on exercise load.

TABLE 1
Temperature (° C.)   Temperature influence coefficient
25                   1
26                   1.1
27                   1.2
28                   1.3
29                   1.4
30                   1.5
31                   1.6
32                   1.7
33                   1.8
34                   1.9

The resting heart rate is the heart rate value of a user in an awake and quiet state, and the maximum heart rate is the heart rate value of a user when the user reaches an extreme exercise state. The resting heart rate and the maximum heart rate can be detected by a portable detecting device or preset by the user manually when the resting heart rate and the maximum heart rate are known to the user, or alternatively can be calculated based on the user's age through the formula HRmax=208−0.7*a, wherein HRmax denotes the maximum heart rate, and a is the age input by the user. If the current continuous exercise time is one hour (3600 seconds), the exercise load in this one hour is $\sum_{t=1}^{3600} B \cdot C \cdot T_K$, because the exercise heart rate is collected once per second.

Furthermore, the exercise load is also affected by altitude. A user may expend more exercise load in high-altitude areas. In view of this, the exercise load computation model further includes an altitude parameter, namely the exercise load computation model is $TR=\sum_{t=1}^{T} B \cdot C \cdot T_K \cdot G_K$, wherein GK is an altitude influence coefficient obtained by querying an altitude influence coefficient table (as shown in Table 2) recording multiple influence coefficients of different altitudes on the exercise load.

TABLE 2
Altitude (m)   Altitude influence coefficient
below 1500     1
1600           1.1
1700           1.2
1800           1.3
1900           1.4
2000           1.5
2100           1.6
2200           1.7
2300           1.8
2400           1.9

Additionally, the exercise load may also be affected by different exercise items. In a preferable embodiment, the exercise load computation model further includes an exercise item parameter, namely the exercise load computation model is $TR=\sum_{t=1}^{T} B \cdot C \cdot T_K \cdot G_K \cdot X_K$, where XK is an exercise item influence coefficient obtained by querying an exercise item influence coefficient table (as shown in Table 3) recording multiple influence coefficients of different exercise items on the exercise load.

TABLE 3
Exercise item   Exercise item influence coefficient
Running         1
Riding          0.6
Swimming        1.5
Boxing          3.5

In conclusion, the method for assessing exercise fatigue according to the present invention collects the exercise heart rates of the user, the current temperature, altitude and exercise item, then obtains a temperature influence coefficient TK, an altitude influence coefficient GK, and an exercise item influence coefficient XK by querying the corresponding data tables, then calculates the user's exercise load during the exercise time period through the exercise load computation model $TR=\sum_{t=1}^{T} B \cdot C \cdot T_K \cdot G_K \cdot X_K$ based on the resting heart rate, maximum heart rate, exercise heart rate, and coefficients TK, GK, and XK; then cumulative exercise loads for each day are saved in units of days; then, the CTL for the recent 42 days and the ATL for the recent 7 days are calculated by using a moving average algorithm, and a TSB is obtained by calculating a difference between the average CTL and the average ATL; finally a fatigue level is determined by judging which interval the current TSB is located in.
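As a purely illustrative sketch of the computation just summarized, the following fragment evaluates the exercise load model and the TSB-based fatigue classification. The abbreviated coefficient tables, the chosen P1 and P2 values (picked from within the stated ranges), and all names are assumptions for illustration rather than the patented implementation.

```python
# Sketch of the exercise-load and fatigue-level computation. Assumptions:
# abbreviated coefficient tables, example P1/P2 values, out-of-table
# temperatures default to 1, and all names are illustrative.
import math

TEMP_COEFF = {25: 1.0, 26: 1.1, 27: 1.2, 28: 1.3, 29: 1.4,
              30: 1.5, 31: 1.6, 32: 1.7, 33: 1.8, 34: 1.9}
ITEM_COEFF = {"running": 1.0, "riding": 0.6, "swimming": 1.5, "boxing": 3.5}

def altitude_coeff(altitude_m):
    """Table 2: 1 below 1500 m, rising by 0.1 per additional 100 m up to 2400 m."""
    if altitude_m < 1600:
        return 1.0
    return 1.0 + int((min(altitude_m, 2400) - 1500) // 100) * 0.1

def exercise_load(hr_samples, rest_hr, max_hr, temp_c, altitude_m, item,
                  p1=0.3, p2=4.0):
    """TR as the sum over per-second samples of B * C * TK * GK * XK."""
    tk = TEMP_COEFF.get(round(temp_c), 1.0)
    gk = altitude_coeff(altitude_m)
    xk = ITEM_COEFF.get(item, 1.0)
    tr = 0.0
    for hr in hr_samples:                         # one heart-rate sample per second
        b = (hr - rest_hr) / (max_hr - rest_hr)
        c = p1 * math.exp(p2 * b)
        tr += b * c * tk * gk * xk
    return tr

def training_loads(daily_loads):
    """CTL and ATL as moving averages of the saved daily loads (42 and 7 days)."""
    ctl = sum(daily_loads[-42:]) / min(len(daily_loads), 42)
    atl = sum(daily_loads[-7:]) / min(len(daily_loads), 7)
    return ctl, atl

def fatigue_level(ctl, atl):
    """Classify fatigue from TSB = CTL - ATL using the CTL-scaled intervals."""
    tsb = ctl - atl
    if tsb >= 0.1 * ctl:
        return "energetic"
    if tsb >= -0.4 * ctl:
        return "appropriate"
    if tsb >= -0.7 * ctl:
        return "greater"
    return "excessive"

# Example: six weeks of identical daily loads gives TSB = 0 -> 'appropriate'.
ctl, atl = training_loads([60.0] * 42)
print(fatigue_level(ctl, atl))
```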
Accordingly, the present invention further provides a device for assessing exercise fatigue, as shown inFIG.2. The device includes a portable base (such as a sports watch or a wristband) on which a heart rate sensor, a computation module and a matching module are configured. Specifically, the heart rate sensor is configured to collect a user's real-time exercise heart rates. The computation module is configured to calculate a CTL for recent N days and an ATL for recent M days (N is much greater than M, preferably) based on the exercise heart rates and a preset exercise load computation model, and obtain a TSB by calculating a difference between the CTL and the ATL, namely, TSB=CTL−ATL. The matching module is configured to determine a fatigue level based on the TSB. Different fatigue levels represent different fatigue degrees. The working principle and detailed working process of the device for assessing exercise fatigue in this embodiment will not be repeated here; please refer to the above method for assessing exercise fatigue for details. While the invention has been described in connection with what are presently considered to be the most practical and preferred embodiments, it is to be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the invention.
8,498
11857839
The drawings accompanying and forming part of this specification are included to depict certain aspects of the disclosure. A clearer conception of the disclosure, and of the components and operation of systems provided with the disclosure, will become more readily apparent by referring to the exemplary, and therefore non-limiting, embodiments illustrated in the drawings, wherein like reference numbers (if they occur in more than one view) designate the same elements. The disclosure may be better understood by reference to one or more of these drawings in combination with the description presented herein. DESCRIPTION The present disclosure relates to retrofitting conventional exercise machines with smart functions. Preferred embodiments of the present disclosure will be described hereinafter with reference to the attached drawings. FIG.1is a block diagram of a system for retrofitting conventional exercise machines according to an embodiment of the present disclosure. The retrofitting system includes a sensor module110, a control module140, a cloud-based application server160, local user interface devices172and remote user terminals178. The sensor module110is to be mounted on a moving part of a designated exercise machine by a specially designed fixture, such as a clip or a magnet, to detect motions of the machine part. As the sensor module110is removable, the sensor module110can also be secured to a user's body part by an exemplary strap to detect body movements directly. In an embodiment, the senor module110includes an accelerometer131, a gyroscope133, a magnetometer135, a processing unit112and a wireless transmission module122. The accelerometer131, the gyroscope133, and the magnetometer135together can detect the exercise machine part's linear distance and angle of movement. The processing unit112collects data sampled by the accelerometer131, the gyroscope133, and the magnetometer135and transforms the sampled raw data into acceleration, angular acceleration, magnetic value and quaternion data for being transmitted by the wireless transmission module122. In an embodiment, the wireless transmission module122is implemented with Bluetooth technology and the data transmission is in packet form. However, other wireless transmission technologies, such as infrared communication, broadcast radio, microwave communication, mobile communication and Wi-Fi communication, may also be used to implement the wireless transmission module122. When an exercise machine has multiple moving parts, multiple sensor module110may be used, so that each moving part has its own sensor module110. In addition, a typical fitness facilities may have multiple exercise machines, and every machine may have its own set of sensor modules110. In order to identify each sensor module110, an identification number may be assigned to it. The identification number may be set by the factory and can be read out by the control module140, or can be dynamically written into its local storage by the control module140, which is placed in a vicinity of multiple sensor modules110and wirelessly communicates therewith through Bluetooth technology. Referring again toFIG.1, the control module140includes a processing unit145, a Bluetooth module152, and an exemplary Wi-Fi module155. The Bluetooth module152and the wireless transmission module122of the sensor module110establish a wireless communication connection for receiving data from and transmitting commands to the sensor module110. 
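To make the packet-based transmission concrete, the sketch below shows one way a sensor module's readings (identification number, acceleration, angular acceleration, magnetic value and quaternion) could be packed for wireless transmission. The field layout, the struct format and the class name are assumptions for illustration and not the module's actual packet format.

```python
# Illustrative packet layout for a sensor module's readings. The binary
# format (little-endian uint16 id followed by 13 float32 values) is an
# assumption, not the actual BLE packet definition.
import struct
from dataclasses import dataclass

@dataclass
class SensorPacket:
    sensor_id: int    # identification number assigned to the module
    accel: tuple      # (ax, ay, az) acceleration
    gyro: tuple       # (gx, gy, gz) angular acceleration
    mag: tuple        # (mx, my, mz) magnetic value
    quat: tuple       # (w, x, y, z) quaternion

    def to_bytes(self) -> bytes:
        return struct.pack("<H13f", self.sensor_id,
                           *self.accel, *self.gyro, *self.mag, *self.quat)

    @classmethod
    def from_bytes(cls, payload: bytes) -> "SensorPacket":
        vals = struct.unpack("<H13f", payload)
        return cls(vals[0], vals[1:4], vals[4:7], vals[7:10], vals[10:14])

# Example round trip: 54-byte payload, same identification number recovered.
pkt = SensorPacket(7, (0.0, 0.0, 9.8), (0.1, 0.0, 0.0),
                   (25.0, 3.0, -40.0), (1.0, 0.0, 0.0, 0.0))
payload = pkt.to_bytes()
print(len(payload), SensorPacket.from_bytes(payload).sensor_id)
```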
The Wi-Fi module155is used to wirelessly communicates the control module140with the cloud-based application server160through the Internet. In an embodiment, a single wireless communication module can be used in place of both the Bluetooth module152and the Wi-Fi module155. In such case, the control module140communicates with both the sensor module110and the cloud-based application server160using Wi-Fi technology. The control module140is exemplarily placed on a ground near yet separated from the exercise machine or placed on the exercise machine. A control module140may be associated with one exercise machine or multiple exercise machines depending on a number of sensor modules110the exercise machines have. In a large fitness facility where multiple exercise machines may be scattered in different rooms, multiple control modules140may be used. In this case, a local computer (not shown) may be used to control and communicate with the multiple control modules140through either wireless Bluetooth or ethernet cables. The local computer serves as a gateway to communicate with the cloud-based application server160. The processing unit145receives the acceleration, angular acceleration, magnetic value and quaternion data from the sensor module110, and calculates an angle of rotation and angular acceleration of a corresponding machine part based on the received data. In addition, the processing unit145also detects a speed range of the corresponding machine part based on the received data. The angle of rotation, the angular acceleration and the speed range are then transmitted to the cloud-based application server160through the exemplary Wi-Fi module155. As shown inFIG.1, the processing unit145is also coupled to local user interface devices172, such as a display with touch sensing inputs. A user can sign up or log into his or her account through the local user interface devices172. For such purposes, the local user interface devices172may have a touch sensing screen as well as a QR code scanner, a card reader and/or a wearable device sensor. Once logged in, the local user interface devices172allows the user to instantly monitor his or her workout characteristics, such as range of motion and cadence, etc. These characteristics may be compared with predetermined targets. The local user interface devices172also allows the user to set the targets or acquiring training programs from the cloud-base application server160. As such, each exercise machine may associate with a local user interface device172placed nearby for individual use. The local user interface device172may also include a large display and associate with multiple exercise machines to facilitate group workouts. Referring again toFIG.1, the cloud-based application server160serves as a management platform for the smart exercise machine system according to embodiments of the present disclosure. The management platform provides a website that is hosted by a cloud service providers, such as Amazon Web Services and Microsoft Azure Cloud Provider, etc. The website can be accessed by the user terminals178from anywhere with an Internet connection. The website offers numerous training and management operations, such as real-time coaching, series of classes, training information dashboard, trainee's training records, training organizing and planning, administrator's access, facility management, machine management, and agent management, etc. 
As an example, the real-time training guidance can stream video and/or audio content to the local user interface devices172, so that the user can follow a selected workout routine. At the same time the corresponding sensor module110can monitor the user's performance in real-time for comparing with the selected workout routine. Then the user interface devices172displays the user's progress, such as how many repetitions the user has performed and how many to go. The cloud-based application server160may also host user account records accessible through the website. The user account records may store user's contact information, training logs, physical fitness test records, personal profiles, user sign-in information and user's biometric data. In addition, the cloud-based application server160may provide a web portal for facility management personnel, so that they can monitor machine usage and coaching activities. In embodiments, the cloud-based application server160may be linked to users' social media accounts, such as LINE accounts, so that their fitness activities can be shared through social media; family members can monitor their workouts in real time; and their training and coaching records can be easily accessed. By linking to social media accounts, the users can also talk to or share video with each other during exercise. In addition, family members can access a user's physical fitness record and biometric data through the social media account. FIG.2illustrates an angle calculation method performed by the processing unit145in the control module140. The angle estimation algorithm202uses the acceleration, angular acceleration, magnetic value and quaternion data from the sensor module110to calculate the turning angle of a corresponding machine part. From the calculated turning angle, an angle position of the beginning rotation211, an angle position of the end rotation214and a rotation angle217are obtained. The angle position of the beginning rotation211and the angle position of the end rotation can be calculated from the quaternion data. If the axis of rotation represented by the quaternion is a normalized vector (ax, ay, az) and the rotation angle is θ, then the (w, x, y, z) components of the quaternion are (cos(θ/2), ax·sin(θ/2), ay·sin(θ/2), az·sin(θ/2)). The rotation angle can be calculated from the angular acceleration data by an equation: Angle=V*T, where V is a current angular acceleration value, and T is the sampling interval of the sensor module110. As an example, T is set at 200 ms. In an embodiment, a nine-axis sensor module may be used to detect three-axis acceleration, three-axis angular acceleration, three-axis magnetic value and quaternion data. The quaternion data is derived from the acceleration, the angular acceleration, and the magnetic value data. Table I below illustrates the relationships among these measurement data.

TABLE I
Type                          Data                           Unit
3-axis angular acceleration   X-axis angular acceleration    Radian/sec2
                              Y-axis angular acceleration    Radian/sec2
                              Z-axis angular acceleration    Radian/sec2
Quaternion                    a + bi + cj + dk               i^2 = j^2 = k^2 = i*j*k = −1

From the angle calculation result, completeness of a workout repetition can be derived. The completeness is defined as a ratio or percentage of the actual rotation angle to a predetermined maximum rotation angle for a particular machine part. The predetermined maximum rotation angle is stored in either the control module140or the cloud-based application server160.
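The angle and completeness calculations described above can be summarized in a short sketch. It follows the document's conventions (scalar quaternion component equal to cos(θ/2), Angle=V*T per sample, completeness as a fraction of the predetermined maximum rotation angle); the function names and the default 200 ms interval are illustrative assumptions.

```python
# Sketch of the angle and completeness calculations; names are illustrative.
import math

def rotation_angle_from_quaternion(w, x, y, z):
    """Rotation angle (radians) encoded by a unit quaternion, w = cos(theta/2)."""
    w = max(-1.0, min(1.0, w))     # guard against numerical drift outside [-1, 1]
    return 2.0 * math.acos(w)

def incremental_angle(v, t=0.2):
    """Angle = V * T for one sample (T = 200 ms in the example above)."""
    return v * t

def completeness(actual_rotation, max_rotation):
    """Completeness of a repetition as a percentage of the predetermined maximum."""
    return 100.0 * actual_rotation / max_rotation

# Example: a quaternion for a 90-degree rotation about the z-axis.
theta = math.pi / 2
w, x, y, z = math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2)
print(math.degrees(rotation_angle_from_quaternion(w, x, y, z)))   # ~90.0
print(completeness(45.0, 90.0))                                   # 50.0 percent
```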
FIG.3is a flowchart illustrating a process of detecting completeness of a workout repetition according to an embodiment of the present disclosure. The process begins with block310where current angle change is detected. In block320, the detection result is evaluated. If there is an angle change, the process enters block330where the current angle change is accumulated. In subsequent block340, a percentage of completion is calculated. In block350, the current angle is outputted to either the local user interface devices172or the cloud-based application server160, or both—in real time. Back to block320, if there is no angle change, the process starts a timer in block360. If the timer expires after a predetermined time, for instance, 1 second, there is still no angle change being detected, the process enters block370where a completion angle is calculated and outputted to either the local user interface devices172or the cloud-based application server160in real time, or both—in real time. FIGS.4A-4Cillustrates various locations the sensor modules110are attached to different exercise machines. Referring toFIG.4A, for a simple arm press machine on which both arms move synchronously, only one sensor module110is needed to be mounted at location A which reflects angular changes of a handle. Referring toFIG.4B, for a chest expansion machine, as the left and right arm may move asynchronously, two sensor modules110are needed with one mounted at location A on a left arm part and the other mounted at location B on a right arm part. Referring toFIG.4C, a combinational machine has multiple moving parts, multiple sensor modules110are mounted at locations A, B, C, D, E, and F, each corresponds to a moving part. During a setup process, each sensor module110is identified by an identification number or code and associated with the exercise machine the sensor module110is mounted to. FIG.5is a flowchart illustrating a process for retrofitting an exercise machine with the smart system of the present disclosure. The retrofitting process begins with block510where a purchaser first creates an account in the cloud-based application server160. After signing up, in block520, the purchaser obtains a name and identification code for a particular exercise machine that is intended to be retrofitted with the smart functions. In block530, a control module140is designated to the exercise machine. The control module140's universal unique identifier (UUID) is then provided to the cloud-based application server160. In block540, sensor modules110are installed on the exercise machine. Identification number or code of each of the installed sensor modules110are obtained and provided to the cloud-based application server160. In block550, range of motions of the installed sensor modules are calibrated. The calibration includes detecting and recording a starting angle and an end angle. In block560, information relates to the retrofitted exercise machine, such as the facility identification, the machine identification, the control module140's UUID and the sensor module110's starting angle and end angle, are stored in the cloud-base application server160. FIG.6is a flowchart illustrating a process of using a retrofitted smart exercise machine. To begin using the smart exercise machine, a user uses his or her smartphone to scan a quick response (QR) code or universal product code (UPC) affixed on the exercise machine in block610. 
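A compact sketch of the repetition-completeness loop of FIG.3 (blocks310to370) is given below for illustration; the polling callback, the reporting callback and the timing values are assumptions rather than the actual firmware.

```python
# Sketch of the FIG.3 loop: accumulate angle changes, report completion
# percentage in real time, and close out the repetition after a quiet period.
# Callbacks and timing values are illustrative assumptions.
import time

def track_repetition(read_angle_change, report, max_rotation_deg,
                     timeout_s=1.0, poll_s=0.05):
    accumulated = 0.0
    last_change = time.monotonic()
    while True:
        delta = read_angle_change()                        # blocks 310/320
        if delta:
            accumulated += delta                           # block 330
            pct = 100.0 * accumulated / max_rotation_deg   # block 340
            report(accumulated, pct, final=False)          # block 350
            last_change = time.monotonic()
        elif time.monotonic() - last_change >= timeout_s:
            report(accumulated, None, final=True)          # block 370: completion angle
            return accumulated
        time.sleep(poll_s)

# Example with a canned sequence of angle increments and a print-based report.
samples = iter([10.0, 15.0, 20.0] + [0.0] * 40)
def fake_sensor():
    return next(samples, 0.0)
def show(angle, pct, final):
    print("final" if final else f"{angle:.0f} deg ({pct:.0f}%)")
track_repetition(fake_sensor, show, max_rotation_deg=90.0)
```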
The log-in process also include transmitting the user identity information along with the exercise machine's identification code to the cloud-based application server160. If the user is not a registered customer, and the cloud-based application server160does not recognize the user, then the smart functions will not be activated, so the user can only use the exercise machine as a conventional one. If the user is a registered customer, then the cloud-based application server160transmits a start token to a control module140associated with the exercise machine to start detecting and transmitting training data in block620. Upon request, the cloud-based application server160also transmits a prestored training program for the user to the control module140in block630. In block640, the control module140detects angle changes as a result of the user's workout by the sensor module110. In block650, a timer tracks time durations of angle changes. If there is no angle change for a predetermined time, for instance 10 minutes, the control module140transmits an idle token to the cloud-based application server160in block665. Upon receiving the idle token, the cloud-based application server160returns a stop token to the control module140to stop further detecting training data in block690. On the other hand, if the angle changes during the predetermined time duration, meaning that the user is still working out on the exercise machine, the control module140compares the detected training data with the training program and provide real-time feedback to the user on the local user interface devices172in block660. The control module140also uploads the training data to the cloud-based application server160in an account associated with the user in block670. In block680, the control module checks if the training program is completed? If completed, the training session ends and cloud-based application server160performs block690. Otherwise, the training process returns to block660. FIG.7is a flowchart illustrating a calibration process for a retrofitted exercise machine. The calibration process begins with selecting an exercise machine in block710. Then select a sensor module110that is attached to a moving part of the exercise machine for calibration in block720. In block730, the moving part is moved to a beginning position and an angle is detected. In block740, the moving part is moved to an end position and an angle is detected again. In block750, the operator is prompted to select where to store the calibration data? If “local” is selected, the beginning and end angle are then stored in the local control module140. If “cloud” is selected instead, the beginning and end angle are then stored in the cloud-based application server160. In another embodiment, the beginning and end angle are stored in both the local control module140and the cloud-based application server160. In yet another embodiment, the beginning and end angle are first stored in the local control module140, and then optionally stored in the cloud-based application server160. FIG.8is a flowchart illustrating a process of detecting angles by the control module140. In block810, the control module140inspects packets transmitted from the sensor module110. In block820, if no sensor packet is received, the control module140keep inspecting the incoming packets in block810; if sensor packets are received, the control module140parse the received packets in block830, and obtain quaternion data (w, x, y, z) from the sensor packets in block840. 
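The idle handling described for FIG.6 (blocks650,665and690) can be sketched as a small state tracker; the class name, return values and the way quiet time is accumulated are assumptions for illustration only.

```python
# Sketch of the FIG.6 idle logic: a predetermined time (10 minutes here)
# without any angle change causes the idle token to be emitted, after which
# the session is treated as stopped. Names are illustrative assumptions.
class SessionTracker:
    IDLE_LIMIT_S = 10 * 60          # predetermined time with no angle change

    def __init__(self):
        self.quiet_s = 0.0
        self.active = True

    def on_sample(self, angle_changed: bool, dt_s: float) -> str:
        """Return 'train', 'idle' (idle token sent) or 'stopped'."""
        if not self.active:
            return "stopped"
        if angle_changed:
            self.quiet_s = 0.0      # keep comparing data with the training program
            return "train"
        self.quiet_s += dt_s
        if self.quiet_s >= self.IDLE_LIMIT_S:
            self.active = False     # the server would answer with a stop token
            return "idle"
        return "train"

tracker = SessionTracker()
print(tracker.on_sample(True, 1.0))     # 'train'
print(tracker.on_sample(False, 600.0))  # 'idle' after 10 quiet minutes
```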
In block850, the control module calculates an angle offset. In block860, the calculated angle offset is outputted to the cloud-based application server160or the local user interface devices172or both. FIG.9is a flowchart illustrating a process of determining a speed range of each motor speed level according to embodiments of the present disclosure. In block910, an operator selects an exercise machine to be calibrated. In block920, a motor speed level is selected. In block930, the operator adjusts a moving part of the exercise machine to its beginning angle. In block940, the control module140detects angular acceleration of the moving part. In block950, the operator adjusts the moving part to its end angle. In block960, after the calibration for one speed level is completed, the process goes back to block920to perform calibration for another speed level until all the speed levels are calibrated. In block970, the control module estimates a speed range of each motor speed level. In block980, the estimated result is outputted to the local user interface devices172. If selected, the estimated result is further outputted to the cloud-based application server160. Table II shows an exemplary result of motor speed level vs. speed range. In this case, there are six motor speed levels corresponding to six speed ranges, each with a maximum speed and a minimum speed.

TABLE II
Motor speed level   Minimum speed   Maximum speed
Level 1             Lv1_min         Lv1_max
Level 2             Lv2_min         Lv2_max
Level 3             Lv3_min         Lv3_max
Level 4             Lv4_min         Lv4_max
Level 5             Lv5_min         Lv5_max
Level 6             Lv6_min         Lv6_max

FIG.10is a flowchart illustrating a process of detecting motor speed levels during a workout session according to embodiments of the present disclosure. In block1010, a user starts a workout session on an exercise machine. In block1020, the control module140receives angular acceleration data from the sensor module110. In block1030, the control module140detects angular acceleration changes and calculates a sum for a predetermined time duration, such as 1 second. In block1040, if the sum is larger than n degrees, where n is an empirical threshold value predetermined based on a sensitivity of the smart exercise machine, the process enters block1043, where rotation is detected. In block1047, angular acceleration is recorded and the process then returns to block1030. If the sum is not larger than n degrees, the process enters block1050: when the exercise machine has not started operation, the process returns to block1030; when the operation has started, the sensor module110detects motor speed in block1060. When a stop is detected, the current motor rotation speed is calculated through a detecting motor speed algorithm. In block1070, the control module140determines which speed range an average angular acceleration belongs to. In block1080, the control module140outputs a motor speed level corresponding to the speed range to the cloud-based application server160or the local user interface devices172or both. For an exercise machine used for rehabilitation, the detected angular acceleration during a rotation has two situations. In a first situation, the angular acceleration is caused by both a motor and a user's force. In a second situation, the angular acceleration is only caused by the motor. In order to correctly detect actual motor speed, the second situation has to be extracted from the first situation. In an embodiment, the actual motor speed is detected using the following method.
If $\delta_{\text{angular acceleration within a second}} > m$   Equation (1)

where δ stands for standard deviation, and m is an empirical threshold value adjustable based on the sensitivity of the exercise machine, and when a start of rotation is detected, angular accelerations that satisfy Equation (1) at two time spots, t1and t2, where a last measurement of the angular acceleration before the rotation stops takes place at t2, are detected and recorded. Then

$\text{Motor speed} = \dfrac{\sum_{t1}^{t2} \text{Accumulative angular accelerations}}{(t2 + t1 + 1)}$   Equation (2)

FIG.11is a measurement plot of angular acceleration vs. time. The measurement is performed on a retrofitted smart exercise machine having two independent sensor modules110. Up to time T1, the angular acceleration measurements fluctuate dramatically, i.e., sequential measurements differ by more than a predetermined threshold. This indicates that the exercise machine is in situation (1) where the angular acceleration is caused by both a motor and a user's force. For motor speed level detection, situation (1) is ignored. Between time T1and T2, the angular acceleration measurements become smooth. This period is viewed as being in situation (2) where the angular acceleration is only caused by the motor, and the correct motor speed level can be detected. Equation (1) can be used to separate situation (2) from situation (1). FIG.12is a block diagram illustrating a retrofitted smart exercise machine system with two control modules140A and140B separately controlling their respective exercise machines1212and1223. An exemplary sensor module110A is removably attached to the exercise machine1212, and transmits sensor data to the control module140A via a Bluetooth low energy (BLE) connection. The control module140A calculates the sensor data into desired measurement data, such as angle values, and transmits the measurement data to the cloud-based application server160via a Wi-Fi connection. Based on the measurement data and prestored training programs, the cloud-based application server160returns training guidance information back to the control module140A, which in turn displays the information on a local user interface device172A. Similarly, an exemplary sensor module110B is removably attached to the exercise machine1223, and transmits sensor data to the control module140B via a Bluetooth low energy (BLE) connection. The control module140B calculates the sensor data into desired measurement data, such as angle values, and transmits the measurement data to the cloud-based application server160via a Wi-Fi connection. Based on the measurement data and prestored training programs, the cloud-based application server160returns training guidance information back to the control module140B, which in turn displays the information on a local user interface device172B. FIG.13is a block diagram illustrating a retrofitted smart exercise machine system with two exercise machines1212and1223sharing one big screen display. Different from the system shown inFIG.12, the two exercise machines1212and1223ofFIG.13do not have their own local user interface devices; instead they share a big screen local user interface device172C which is controlled by a control module140C. The control module140C receives training guidance information for users of both exercise machines1212and1223from the cloud-based application server160. FIG.14is a block diagram illustrating a retrofitted smart exercise machine system with two exercise machines1212and1223sharing one control module140A.
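For illustration, the motor-speed estimation of Equations (1) and (2) and the Table II level lookup can be sketched as follows. The one-second windowing, the denominator written exactly as in Equation (2), the example speed ranges and all names are assumptions about one possible reading of the text.

```python
# Sketch of the motor-speed detection described above. Assumptions: angular
# acceleration samples are grouped into one-second windows; Equation (2)'s
# denominator is taken exactly as written; ranges and names are illustrative.
import statistics

def is_user_influenced(window, m):
    """Equation (1): a window is user-influenced if its standard deviation exceeds m."""
    return statistics.pstdev(window) > m

def motor_speed(accumulated, t1, t2):
    """Equation (2): sum of accumulative angular accelerations over t1..t2,
    divided by (t2 + t1 + 1) as written in the text."""
    return sum(accumulated[t1:t2 + 1]) / (t2 + t1 + 1)

def speed_level(avg_value, ranges):
    """Map an average angular acceleration to a motor speed level (Table II)."""
    for level, (lo, hi) in ranges.items():
        if lo <= avg_value <= hi:
            return level
    return None

# Example with assumed speed ranges for three levels.
ranges = {"Level 1": (0.0, 1.0), "Level 2": (1.0, 2.0), "Level 3": (2.0, 3.0)}
accumulated = [1.2, 1.3, 1.25, 1.28, 1.3]
speed = motor_speed(accumulated, t1=0, t2=3)
print(speed, speed_level(speed, ranges))   # falls in Level 2 for this example
```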
Different from the system shown inFIG.13, the two exercise machines1212and1223ofFIG.14are placed near each other thus share a common control module140A. FIGS.15A and15Bare flowcharts illustrating data flow during a user's exercise session on the retrofitted smart exercise machine system of the present disclosure. Referring toFIG.15A, in block1502, when a user chooses machine A for workout, machine A transmits a signal. In block1505, the control module140inquires if machine A that transmits the signal is within its range? If not, the control module140does not receive data in block1508. If yes, the control module inquires if machine A is associated in block1510. In block1520, when machine A is associated with the control module140, the control module140acquires axial information from machine A. In block1530, the axial information is used to calculate angle using equation A which is associated with machine A. In block1540, the calculated angle is stored, and in block1550, transmitted to the cloud-based application server shown inFIG.15B. In block1553, the transmission is inspected. If successful, the angle information is deleted in block1556; if not successful, the control module140goes back to block1540and perform transmission again in block1550. Referring again toFIG.15A, if, in block1510, machine A is found to be not associated with the control module140, machine B is invoked in block1513. In block1524, the control module140acquires axial information from machine B. In block1535, the axial information is used to calculate angle using equation B which is associated with machine B. Because machine B may be different from machine A, equation B may be different from equation A. Then the calculated angle of machine B is stored in block1540and transmitted to the cloud-based application server160shown inFIG.15B. Referring toFIG.15B, after receiving the angle information from the control module140ofFIG.15Ain block1560, the cloud-based application server160recognizes the control module in block1562. In block1570, when the user choose machine A for workout, he or she uses a smartphone to scan a QR code affixed to machine A. Then in block1572, the user information is transmitted to the cloud-based application server160. If the user is a registered customer in block1565, the user's training program is received in block1580. The user's training result and progress from the angle information is calculated in block1585, and stored in block1590. The user's training result and progress may also be provided for display in a local user interface device172(not shown inFIG.15B). Referring again toFIG.15B, the cloud-based application server160recognizes the user in block1574, and recognizes the exercise machine in block1576. In block1578, the prestored user's training information is retrieved for use in block1580. FIGS.16A-16Dillustrate exemplary training results vs. training guidance. Referring toFIG.16A, a horizontal axis represents measured angle in degrees executed by a user; and a vertical axis represents a recorded time in seconds. The user starts moving at 1 second and reaches 76% of a total range of motion, and then returns to starting point at 5 second. As an example, a training guidance sets minimum angle at 10% and maximum angle at 40% of the total range of motion. Therefore,FIG.16Ashows that the user have completed a repetition. 
Referring toFIG.16B, even though the user hesitates and drops the moving angle at around 4 second, he or she eventually reaches above the 40% of the total range of motion and returns to the starting point at 10 second. This case is still regarded as a repetition completed. Referring toFIG.16C, the user never reaches 40% of the total range of motion, therefore, this case is regarded as a repetition not completed. Referring toFIG.16D, even though the user reaches above the 40% of the total range of motion, he or she does not return to below 10% of the total range of motion. This case is also regarded as a repetition not completed. Although the disclosure is illustrated and described herein as embodied in one or more specific examples, it is nevertheless not intended to be limited to the details shown, since various modifications and structural changes may be made therein without departing from the spirit of the disclosure and within the scope and range of equivalents of the claims. Accordingly, it is appropriate that the appended claims be construed broadly and, in a manner, consistent with the scope of the disclosure, as set forth in the following claims.
28,951
11857840
DETAILED DESCRIPTION Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following description of the embodiments, it will be understood that, when each element is referred to as being “on” or “under” another element, it can be “directly” on or under another element or can be “indirectly” formed such that an intervening element is also present. In addition, when an element is referred to as being “on” or “under,” “under the element” as well as “on the element” may be included based on the element. FIGS.1A and1Bare views showing a momentum measurement device according to an embodiment. The momentum measurement device1000according to this embodiment may include a measurement sensor100and a cover200.FIG.1Ais a plan view of the momentum measurement device1000, andFIG.1Bis a perspective view of the momentum measurement device1000. The measurement sensor100may be fixed to a kettlebell to measure the movement trajectory of the kettlebell when a user exercises with the kettlebell. The cover200may fix the measurement sensor100to the kettlebell. FIG.2is a view showing the state in which the momentum measurement device according to the embodiment is fixed to exercise equipment. The measurement sensor100is fixed to a kettlebell2000, and the kettlebell2000is shown as an example of the exercise equipment. However, the disclosure is not limited thereto. The momentum measurement device according to the embodiment may be attached to another kind of exercise equipment so as to measure the movement trajectory of the exercise equipment in order to measure momentum of the user. FIGS.3A to3Care views showing an embodiment of exercise with the exercise equipment ofFIG.2. As shown inFIGS.3A and3B, the user moves the kettlebell2000, to which the momentum measurement device1000is fixed, front to back. The motion in which the user moves the kettlebell2000front to back may be called “swing.” As shown inFIG.3C, the user raises the kettlebell2000, to which the momentum measurement device1000is fixed, to the shoulder height of the user. This motion may be called “up.” In addition, a combination of motions shown inFIGS.3A to3Cmay be called “swing” or “swing up.” Although not shown, the user may raise the exercise equipment, such as the kettlebell2000, above the head of the user from the shoulder height of the user shown inFIG.3C. This motion may be called “jerk.” Hereinafter, the construction of the momentum measurement device, the construction of a momentum measurement system including the same, and a momentum measurement method using the same will be described in detail with reference toFIGS.4to8. FIG.4is a view showing a portion of the measurement sensor of the momentum measurement device according to the embodiment. FIG.4shows the internal construction of the measurement sensor100of the momentum measurement device1000shown inFIG.1Aand other figures. Specifically, the measurement sensor100may include a printed circuit board110and an acceleration sensor120, a gyroscope sensor130, a communication element140, and a processor160connected to the printed circuit board110. The measurement sensor100may further include a switch170and a balun filter150. The printed circuit board110may be a board to which the acceleration sensor120and various kinds of resistance elements not shown inFIG.4are electrically connected. 
In consideration of the fact that the kettlebell2000, to which the momentum measurement device1000is fixed, is spherical, the printed circuit board110may be a flexible printed circuit board. The acceleration sensor120is a sensor configured to measure the magnitude of acceleration or impact of a moving object. In this embodiment, the acceleration sensor120may measure the magnitude of acceleration or impact of the kettlebell2000. The gyroscope sensor130is a sensor using the principle of angular momentum. A shaft of a wheel of the gyroscope sensor130is connected to a triple ring such that the gyroscope sensor130is rotatable in any direction. In this embodiment, the gyroscope sensor130is used to measure the movement direction of the kettlebell2000. That is, since the acceleration sensor120accurately senses straight motion but not circular motion, the gyroscope sensor130is used with the acceleration sensor120in order to accurately measure the movement trajectory of the kettlebell2000. The communication element140may perform wireless communication between a wireless terminal3000of the momentum measurement system, a description of which will follow, and the momentum measurement device1000. Wireless communication between the momentum measurement device1000and the wireless terminal3000may be performed based on Bluetooth, Wi-Fi, radio communication, or ZigBee. However, the disclosure is not limited thereto. The processor160may be an element configured to control overall operation of the measurement sensor100. The balun filter150, which is an element configured to convert a signal of the communication element140, may be included in the communication element140. The switch170is an element configured to turn on/off the momentum measurement device1000according to this embodiment. For example, when the momentum measurement device1000is separated from the kettlebell or the user stops exercising, the user may touch the switch170to turn off the momentum measurement device1000. FIG.5is a view showing the construction of a momentum measurement system according to an embodiment. The momentum measurement system according to this embodiment may include a momentum measurement device1000and a wireless terminal3000. The momentum measurement device1000may be the momentum measurement device according to the above embodiment described above, and may include a measurement sensor100and a cover200. The wireless terminal3000may receive a signal from the momentum measurement device1000and may analyze the signal in order to analyze the movement trajectory of the kettlebell. For example, the wireless terminal3000may be a smartphone, a 3G video phone, a kiosk, an Internet video telephone, a PC video softphone, a PMP, or a PDA. However, the disclosure is not limited thereto. In addition, the wireless terminal3000may provide exemplary motions of the kettlebell2000to the user or may provide comparison between the exemplary motions of the kettlebell2000and actual motions of the kettlebell2000performed by the user. The wireless terminal3000may be paired with the momentum measurement device1000. Here, “pairing” means that signals are transmitted and received through the communication element140and a communication unit3100. In the case in which a wireless frequency range is set for a wireless terminal3000, the wireless terminal3000may be paired with another momentum measurement device1000so as to be used. 
The wireless terminal3000may include a communication unit3100, a calculation unit3200, a determination unit3300, and an output unit3400. The communication unit3100may receive data measured by the acceleration sensor120and the gyroscope sensor130in the measurement sensor100from motion of the exercise equipment having the momentum measurement device1000attached thereto, i.e. the kettlebell2000, via the communication element140. The calculation unit3200may calculate movement of the measurement sensor100from the data measured by the acceleration sensor120and the gyroscope sensor130. The calculation unit3200may be a processor. The determination unit3300may compare the calculated movement of the measurement sensor with predetermined motion beat information. The predetermined motion beat information may be, for example, 40 beats per minute (BPM). That is, a program that the user follows to exercise with the kettlebell2000, i.e. motion beat information, is displayed on the wireless terminal3000, the movement trajectory of the kettlebell2000when the user actually exercises using the kettlebell2000is acquired, and the motion beat information is compared with the actual exercise result of the user to determine how similar the actual exercise of the user is to the exercise program of the user. The exercise program may be swing, jerk, double cycle, or a combination of swing, jerk, and double cycle. The swing and the jerk have been described above, and the double cycle may be repetition of the swing and/or the jerk. The output unit3400of the wireless terminal3000may display the determination result of the determination unit3300. The output unit3400may be an audio output unit, such as a speaker, or a video output unit, such as a screen of the wireless terminal3000. FIGS.6A and6Bare views showing a process of fixing the momentum measurement device according to the embodiment to the exercise equipment. Referring toFIG.1Aand other figures, the momentum measurement device according to the embodiment may include the measurement sensor100and the cover200. The measurement sensor100may be a sensor configured such that the acceleration sensor120and the gyroscope sensor130are electrically connected to the printed circuit board110. The cover200may fix the measurement sensor100to the kettlebell2000. The cover200may have a shape that surrounds a first surface and a side surface of the measurement sensor100. The first surface of the measurement sensor100may be an upper surface, and the lower surface of the measurement sensor may face the kettlebell in contact therewith. FIG.6Ashows the side surface of the cover200. The cover200may include a cover body210and fixing portions protruding from at least three regions of the edge of the cover body210. First and second fixing portions220and230are shown. The cover body210and the first and second fixing portions220and230may be made of a flexible material, such as silicone. However, the disclosure is not limited thereto. In addition, the cover body210and the first and second fixing portions220and230may be made of a transparent material. InFIG.6A, a first surface a1, which is the upper surface of the cover body210, a second surface a2, which is the lower surface of the cover body210, and first and second side surfaces a31and a32are shown. The first fixing portion220, which is adjacent to the first side surface a31of the cover body210, may have a first surface b11, a second surface b12, which is the lower surface thereof, and first and second side surfaces b13and b14. 
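To illustrate the comparison performed by the determination unit3300, the sketch below scores how closely detected repetition times match a target beat (for example 40 BPM). The tolerance value, the function name and the example data are assumptions for illustration only.

```python
# Sketch of comparing detected repetitions against predetermined motion beat
# information. The 0.25 s tolerance and example timestamps are assumptions.
def cadence_match(rep_times_s, target_bpm=40, tolerance_s=0.25):
    """Return the fraction of repetition intervals that match the beat."""
    target_interval = 60.0 / target_bpm
    intervals = [b - a for a, b in zip(rep_times_s, rep_times_s[1:])]
    if not intervals:
        return 0.0
    matched = sum(1 for dt in intervals if abs(dt - target_interval) <= tolerance_s)
    return matched / len(intervals)

# Example: two of the four intervals match the 1.5 s spacing of a 40 BPM beat.
print(cadence_match([0.0, 1.5, 3.0, 4.9, 6.0]))   # -> 0.5
```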
The second fixing portion230, which is adjacent to the second side surface a32of the cover body210, may have a first surface b21, a second surface b22, which is the lower surface thereof, and first and second side surfaces b23and b24. InFIG.6A, the first surfaces b11and b21of the first and second fixing portions220and230may be connected to the first surface a1of the cover body210, and the first side surfaces b13and b23of the first and second fixing portions220and230may separably contact the first and second side surfaces a31and a32of the cover body210. The second surface a2of the cover body210and the second surfaces b12and b22of the first and second fixing portions220and230may have negative curvatures, i.e. may have concave shapes. When the momentum measurement device1000is fixed to the spherical kettlebell2000, the fixing portions220to230of the cover200may be securely fixed to the kettlebell2000due to concave shapes of the second surface a2of the cover body210and the second surfaces b12and b22of the first and second fixing portions220and230. In addition, when the momentum measurement device1000is fixed to the kettlebell2000, as shown inFIG.6B, the first side surfaces b13and b23of the first and second fixing portions220and230may contact the first and second side surfaces a31and a32of the cover body210, whereby the momentum measurement device1000may be securely fixed to the surface of the spherical kettlebell2000. FIGS.7A and7Bare views showing fixing portions of the cover of the momentum measurement device according to the embodiment in detail. In this embodiment, magnets M may be inserted into the fixing portions of the cover200, whereby the momentum measurement device1000may be securely fixed to the kettlebell. For example, the magnets M may be neodymium (Nd) magnets. However, the disclosure is not limited thereto. As shown inFIG.7A, the first surfaces, i.e. the upper surfaces, of the first and second fixing portions220and230may be provided with first holes h1and h2, respectively, and the magnets M may be inserted into the first and second fixing portions220and230through the first holes h1and h2, respectively. The diameters r1of the first holes h1and h2are less than the diameters of the magnets M. When the kettlebell2000is moved after the magnets M are inserted into the first holes h1and h2formed in the first and second fixing portions220and230, which are made of a flexible material, therefore, the magnets M may be prevented from being easily separated from the first and second fixing portions220and230through the first holes h1and h2, respectively. InFIG.7B, the second surfaces, i.e. the lower surfaces, of the first and second fixing portions220and230may be provided with second holes h3and h4, respectively. The diameters r2of the second holes h3and h4may be less than the diameters r1of the first holes h1and h2. When sweat is introduced into the first holes h1and h2of the first and second fixing portions220and230while the user exercises with the kettlebell2000having the momentum measurement device1000attached thereto, the sweat may be mainly discharged through the second holes h3and h4, whereby it is possible to prevent deterioration of the measurement sensor100including the magnets M. In addition, the second holes h3and h4are formed, whereby the flexible material constituting the first and second fixing portions220and230between the magnets M and the kettlebell2000is removed, and therefore magnetic force between the magnets M and the kettlebell2000may be increased. 
FIG.8is a flowchart showing a momentum measurement method according to an embodiment. In the momentum measurement method according to this embodiment, first, a momentum measurement device including an acceleration sensor and a gyroscope sensor may be fixed to exercise equipment (S100), and predetermined motion beat information may be output to a wireless terminal (S110). For example, beat information desired by a user or suitable for exercise with a kettlebell may be output to a speaker or a screen of a smartphone. The predetermined motion beat information may be a multiple of 40 beats per minute (BPM). Subsequently, the user may exercise with the kettlebell having the momentum measurement device fixed thereto. For example, the exercise may be based on swing, jerk, double cycle, or a combination of swing, jerk, and double cycle. At this time, the user may move the kettlebell in response to audio or video output from the smartphone (S120). At this time, an acceleration sensor and a gyroscope sensor in a measurement sensor provided in the momentum measurement device may measure movement of the kettlebell. In addition, during movement of the kettlebell, i.e. in the step of the user exercising, the predetermined motion beat information may be output from the wireless terminal, i.e. a sound having a predetermined rhythm or an image may be output from the smartphone, whereby the user may control exercise with the kettlebell in response to the rhythm sound or the image. Subsequently, measurement data of the acceleration sensor and the gyroscope sensor of the momentum measurement device may be transmitted to the wireless terminal, i.e. the smartphone, through wireless communication (S130). Subsequently, a processor provided in the wireless terminal may calculate movement of the measurement sensor100based on the received measurement data (S140). Subsequently, a determination unit provided in the wireless terminal may compare the movement of the measurement sensor with the predetermined motion beat information (S150). That is, the determination unit may determine whether the predetermined motion beat information, which indicates exercise motion intended by the user, and actual motion of the kettlebell by the user coincide with each other. Subsequently, an output unit of the wireless terminal may output the result of comparison between the movement of the measurement sensor and the predetermined motion beat information (S160). At this time, the user may check the result of their exercise with the kettlebell based on the comparison result output from a screen or a speaker of the smartphone. In the momentum measurement device, the momentum measurement system including the same, and the momentum measurement method using the same, it is possible for the user who exercises with the exercise equipment, such as the kettlebell, to check their exercise posture and momentum, whereby it is possible for the user to effectively exercise with the exercise equipment, such as the kettlebell, at home without help of a professional trainer.
As is apparent from the above description, the momentum measurement device, the momentum measurement system including the same, and the momentum measurement method using the same have the following effects. It is possible for the user to control the speed and direction in movement of the kettlebell in response to audio or video output from the smartphone. Also, it is possible for the user to check their exercise posture and momentum using the acceleration sensor and the gyroscope sensor provided in the momentum measurement device attached to the kettlebell, whereby it is possible for the user to effectively exercise with the exercise equipment, such as the kettlebell, at home without help of a professional trainer. Although embodiments have been described above, the embodiments are merely illustrations and do not limit the present disclosure, and those skilled in the art will appreciate that various modifications and applications are possible without departing from the intrinsic features of the disclosure. For example, concrete constituent elements of the embodiments may be modified. In addition, it is to be understood that differences relevant to the modifications and the applications fall within the scope of the present disclosure defined in the appended claims.
18,766
11857841
Other features of the present embodiments will be apparent from the accompanying drawings and from the detailed description that follows.

DETAILED DESCRIPTION OF THE DRAWINGS

The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The descriptions used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the description or explanation should not be construed as limiting the scope of the embodiments herein. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims. The terms "like," "can be," "shall be," "could be," and other related terms disclosed in the foregoing and later parts of the specification in no way limit or alter the scope of the present invention. Such terms are provided to facilitate a more complete understanding of the present invention and its embodiments. Exemplary embodiments are described with reference to the accompanying drawings. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments.

FIG.1illustrates a performance monitoring system100for evaluating at least one performance characteristic of a pitcher102, in accordance with the embodiments of the present invention. A pitcher as referred to herein can be any player responsible for throwing a ball while playing a game. A pitcher as referred to herein is a bowler in a game of baseball; however, the term "pitcher" should not be construed as limiting to the game of baseball only. According toFIG.1, the performance monitoring system100is illustrated which includes the pitcher102, a pitching rubber104, a piezoelectric film106, a power source108, the transmitter110, an amplifier112, a receiver114, one or more processors116, a memory118, and a display device120. The performance monitoring system100includes the pitcher102which is a player throwing a ball to a catcher to strike out a batter. In another embodiment, the pitcher102is any person or individual who throws the ball towards a batsman, aiming at the strike zone and towards the catcher, at high speed. The objective of the pitcher is to strike out the batsman by throwing a ball towards the strike zone at a speed high enough that the batsman may miss hitting the ball, enabling the catcher to catch the ball. The distance between the pitcher and the batter in a baseball field is approximately 18 meters. The pitcher in general is expected to throw the ball at a speed range of 70-90 mph in baseball.
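As a rough worked example of the figures just quoted (purely illustrative and not part of the patented system), the snippet below converts the 70-90 mph range to metric units and estimates the flight time over the roughly 18 meter separation.

```python
# Rough arithmetic for the quoted figures: 70-90 mph over roughly 18 m.
MPH_TO_MS = 0.44704          # miles per hour to metres per second
DISTANCE_M = 18.0            # approximate pitcher-to-batter distance

for mph in (70, 80, 90):
    speed_ms = mph * MPH_TO_MS
    print(f"{mph} mph -> {speed_ms:.1f} m/s -> ~{DISTANCE_M / speed_ms:.2f} s to the plate")
```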
In an embodiment, the player throwing the ball may be playing any other game such as but not limited to, softball, cricket, throwball, and the like. Further, the area of the strike zone is dependent on the height of the batter. Therefore, in order for the pitcher to throw a ball towards a strike zone continuously at a high speed, consistency in performance is essential. The performance monitoring system100includes a pitching mound comprising the pitching rubber104. In general, the pitching mound refers to an area on the field from where the pitcher throws the ball. In addition, the pitching mound is typically a circular or an oval shaped area devoid of grass. The pitching mound includes pitching rubber104, which is placed at the center and is raised at a height of 10 inches above the height of the home plate. In an embodiment, the pitching mound is a step plate mound or a non-step plate mound. In yet another embodiment, the pitching mound can be of any type based on ground conditions. In an embodiment of the present invention, the pitching mound is designed to be used in facilities, such as, schools, colleges, professional ballparks, and the like. The pitching mound includes pitching rubber104, with piezoelectric film106and one or more rubber layers to cover piezoelectric film106. In an embodiment, the layers of rubber covering piezoelectric film106, which are placed on the pitching rubber104and the step plate, can be 5 or more. In an embodiment, the pitching mound may include the pitching rubber104and the step plate (not shown) on which the pitcher102places his or her foot while throwing the ball. In an embodiment, the step plate is composed of rubber and also includes piezoelectric film106. In an embodiment, the piezoelectric film106is placed on the pitching rubber104and the step plate in such a way that one-eighth of the pitching mound is covered by the piezoelectric film106. In an embodiment, each of the underlying elements is covered by the rubber on top. In an embodiment, the underlying elements are covered by 1 or more different types of layers to include molded, 3D printed or other material. In another embodiment, the pitching mound, including only the pitching rubber104and not the step plate, may include the piezoelectric film106covering one-fifth of the pitching mound. In an embodiment, the pitching rubber104may have piezoelectric films106placed on all four sides of the pitching rubber104. In an embodiment, the pitching rubber104may be periodically rotated to expose one of the four sides of the pitching rubber104. The piezoelectric film106includes one or more piezoelectric sensors. In an embodiment of the present invention, the piezoelectric film106is glued over the pitching rubber104or the step plate using an adhesive. In an embodiment, the pitching rubber104is periodically rotated to expose one of the four sides of the pitching rubber104to increase the life of the pitching rubber104. FIGS.2(a) and2(b)illustrate various alternative views of the pitching rubber104, in accordance with the first embodiment of the present invention. The pitching rubber104includes the one or more rubber layers202placed over the piezoelectric film106. In an embodiment, the one or more rubber layers202include at least five rubber layers. In an embodiment, the one or more rubber layers202protect the piezoelectric film106underneath. In addition, the one or more rubber layers202are placed over the piezoelectric film106to protect the underlying elements. 
In an embodiment of the present invention, the one or more rubber layers202are 3D-printed over the piezoelectric films106.FIG.2(b)discloses that the pitching rubber104includes piezoelectric film106on the edges towards the sides of the length of the pitching rubber104. FIG.2(c)illustrates the piezoelectric film106attached to the pitching rubber104connected to a transmitter through connecting leads, in accordance with an embodiment of the present invention. As shown inFIG.2(c), the piezoelectric film106provided on the pitching rubber104is electrically connected to the transmitter110through connecting leads204. The transmitter110transmits signals sensed from the piezoelectric sensors to the one or more processors116. In an embodiment, the transmitter110transmitting the electrical signals to the one or more processors116can be, for example, a CC1050 or CC1070 from Chipcon Products (Texas Instruments). FIG.2(d)illustrates a step plate which is configured for use with the pitching rubber104, in accordance with an embodiment of the present invention. The step plate as shown inFIG.2(d)is a step-like structure with piezoelectric film106provided on each step surface and one or more layers of rubber202covering the piezoelectric film106. The performance monitoring system100ofFIG.1includes the power source108connected to the piezoelectric film106. In an embodiment, the power source108is electrically coupled to one or more piezoelectric sensors which, in turn, are coupled to the piezoelectric film106. In an embodiment, the pitcher102exerts pressure on the pitching rubber104and the piezoelectric film106while pitching the ball. As a result, the exerted pressure is sensed by the piezoelectric film106, which produces an electrical charge proportional to the pressure exerted. The one or more piezoelectric sensors may include the amplifier112to convert and amplify the electrical charge to a voltage output. Pressure exerted by the pitcher102is thereby converted into electrical signals by the piezoelectric film106; the electrical signals are sensed by the piezoelectric sensors. Power source108powers the amplifier112and the transmitter110. The power source108may include one or more chargeable or replaceable batteries. In an embodiment, the power source108may include a secondary battery for emergency use. In an embodiment, the power source108may be connected to a power line supplying the electrical power to the piezoelectric film106and the transmitter110. In cases where the power supply in the power line is disrupted, the secondary battery may be used to power the piezoelectric film106and the transmitter110. Actuation of the secondary battery may be triggered manually at the discretion of the user, or automatically in response to disruption of the power line. Further, the power source108is connected to a detecting means enabled by the one or more processors116for detecting disruption of power in the power line or insufficient electrical power stored in the power source108or the secondary battery, as the case may be. A low battery signal may be generated and transmitted by the transmitter110and received by the receiver114connected to the one or more processors116. As a result of the low battery signal, a notification may be displayed by the detecting means enabled by the one or more processors116on the display device120. The notification may alert the user to restore the power or replace the batteries when the level of power stored in the power source108falls below a threshold level. 
In an embodiment, the threshold level of the power stored in the power source at which an alert is displayed is, but is not limited to, 20%, 10% and/or 5%. In an embodiment, the trigger warning may be enhanced by actuating an alarm together with the notification if the battery power level is below 5%, indicating that the battery may be recharged or replaced to enable proper functioning of the piezoelectric film106. In an embodiment, the display device120may be a device operated by one or more users such as customers of an entity or a brand, where the entity can be a provider of items, including products and services. The display devices120can include a variety of computing systems, including but not limited to a laptop computer, a desktop computer, a notebook, a workstation, a portable computer, a personal digital assistant, a handheld device and a mobile device. The performance monitoring system100, including the one or more processors116, may use one or more algorithms related to machine learning or artificial intelligence, which are trained using training data. In an embodiment, the training data includes but may not be limited to speed data, acceleration data, pace data, energy data, power data, and medical history data of one or more players. In addition, the training data refers to historical data and real-time data associated with the pitcher's historical optimal performance. One or more algorithms may use one or more statistical and analytical related algorithms to define threshold levels to determine a score related to each of the performance parameters. One or more algorithms used may be implemented using various machine learning trained models, deep learning models, artificial neural networks, fuzzy logic control algorithms, and the like. The artificial intelligence can be implemented by the one or more processors and memory. The processors can dynamically update the computer-readable instructions based on various learned and trained models. In an embodiment, the performance parameters include a score which indicates a level of performance of the pitcher such as: good, average and below average. In an embodiment, the thresholds may be updated in real-time based on the real-time performance of a player to measure consistency in the performance of the players during a game such as softball, baseball, cricket or throwball. In an embodiment, a universal set of thresholds are defined which may be applicable to all players, along with a custom set of thresholds that are unique to each player. In an embodiment, a comparative analysis of performance may be carried out by comparing the performance of one or more players with another based on the universal set of thresholds and a custom set of thresholds. In an embodiment, threshold data includes one or more threshold sets and levels of each of the performance parameters such as but not limited to pitcher's individual kinetic, musculoskeletal ability, and/or aerobic capacity. The threshold levels may indicate a poor threshold range, an acceptable or an average performance threshold range and a good performance threshold range for each performance characteristic. The one or more processors116then analyze the electrical signals based on the training data using machine learning algorithms. 
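A minimal sketch of the threshold-based scoring described above is given below; it is an assumption-laden illustration, not the disclosed implementation. The parameter names, the numeric threshold ranges and the three labels (good, average, below average) are invented for the example, while the universal versus custom threshold sets mirror the distinction drawn in the preceding paragraph.

```python
# Hypothetical sketch: score a performance parameter against universal and
# custom (per-player) threshold ranges, as described for the one or more
# processors 116. All numeric values below are assumed for illustration.

UNIVERSAL_THRESHOLDS = {
    # parameter: (poor_upper_bound, average_upper_bound); above "average" is "good"
    "pitch_speed_mph": (70.0, 85.0),
    "repetitions_per_minute": (2.0, 4.0),
}

def score_parameter(name, value, custom_thresholds=None):
    """Return 'below average', 'average' or 'good' for one performance parameter."""
    thresholds = (custom_thresholds or {}).get(name) or UNIVERSAL_THRESHOLDS[name]
    poor_upper, average_upper = thresholds
    if value < poor_upper:
        return "below average"
    if value < average_upper:
        return "average"
    return "good"

# Example: compare a pitcher's real-time reading against a custom threshold set.
custom = {"pitch_speed_mph": (75.0, 88.0)}   # hypothetical per-player thresholds
print(score_parameter("pitch_speed_mph", 86.0))          # universal set -> 'good'
print(score_parameter("pitch_speed_mph", 86.0, custom))  # custom set    -> 'average'
```

In the actual system these ranges would be derived and updated from the training data and the real-time piezoelectric measurements rather than hard-coded.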
In an embodiment of the present invention, the machine learning algorithms are utilized by the one or more processors116to determine performance parameters of the pitcher to indicate how the pitcher is performing during a game, based on the threshold ranges defined. The one or more processors116may determine one or more statistics and provide prediction information related to the performance characteristics of the pitcher102. The at least one performance characteristic of the pitcher102may include but not be limited to the pitcher's individual kinetic, musculoskeletal ability, and/or aerobic capacity. In an embodiment, the one or more processors116may also utilize the machine learning algorithms to assess the effectiveness and tiredness of the pitcher102during a game. In an embodiment, the machine learning algorithms improve the predictive modeling and analytics of the performance monitoring system100. In an embodiment, the analysis of the electrical signals is compared with the training data to evaluate optimal corrective and performance training exercises, which may serve as blueprints for tracking the performance characteristics. In an embodiment, the one or more processors116are configured to track the effectiveness of the pitcher102and tiredness of the pitcher102. In addition, the one or more processors116are configured to predict pitcher's effectiveness for the upcoming innings or games. In an embodiment, the one or more processors116are configured to predict pitcher's onslaught of injuries and generate blueprints of optimal corrective and performance training exercises that may prevent injuries prevalent during the game or specific to a particular pitcher based on his performance and injury history. The performance monitoring system100includes a memory118. The one or more processor(s)116may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that manipulate data based on operational instructions. Among other capabilities, the one or more processor(s)116are configured to fetch and execute computer-readable instructions stored in memory118of the performance monitoring system100. The memory118may store one or more computer-readable instructions or routines, which may be fetched and executed to create or share the data units over a network service. The memory118may comprise any non-transitory storage device, including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like. The performance monitoring system100comprises the display device120. In an embodiment of the present invention, the display device120includes but may not be limited to a smartphone, a computer, and a television. In an embodiment of the present invention, the one or more processors116are configured to display at least one performance characteristic of a pitcher on the display device120. In an embodiment, the one or more processors116may provide a comparative analysis of the performance of one or more pitchers to a user of the display device120. FIG.3(a)illustrates various alternative views of the pitching rubber104, in accordance with an embodiment of the present invention. According toFIG.3(a), the pitching rubber104is illustrated in rectangular shape with the one or more rubber layers202and the piezoelectric film106. 
According toFIG.3(b), the pitching rubber104is illustrated in a shape having a rectangular body and a step extension on the length side of the rubber104with the one or more rubber layers202and the piezoelectric film106covering the rubber104. FIG.4illustrates a four-sided pitching rubber104, in accordance with the second embodiment of the present invention. According toFIG.4, the pitching rubber104is shown to have piezoelectric film106placed on all four sides and edges on the length of the pitching rubber104. The pitching rubber104comprises a hollow cylinder in the center of the pitching rubber104. In use, the hollow cylinder of the pitching rubber104is filled with mud, and the piezoelectric film106, along with its sensors, generates electrical signals when a pitcher exerts pressure on the pitching rubber104with his foot while pitching a ball. In an embodiment, the pitching rubber104is associated with spikes and/or dual stanchion structures. In an embodiment, the pitching rubber104may be installed on or about the center of the mound. FIG.5illustrates a perspective view of the pitching rubber104, in accordance with the second embodiment of the present invention. FIG.6illustrates an exemplary computer system600to implement the proposed system in accordance with embodiments of the present invention. As shown inFIG.6, the computer system can include an external storage device610, a bus620, a main memory630, a read-only memory640, a mass storage device650, a communication port660, and a processor670. A person skilled in the art will appreciate that the computer system may include more than one processor and communication ports. Examples of processor670include, but are not limited to, an Intel® Itanium® or Itanium 2 processor, an AMD® Opteron® or Athlon MP® processor, Motorola® lines of processors, a FortiSOC™ system-on-a-chip processor, or other future processors. Processor670may include various modules associated with embodiments of the present invention. Communication port660can be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. Communication port660may be chosen depending on a network, such as a Local Area Network (LAN), a Wide Area Network (WAN), or any other network to which the computer system connects. Memory630can be Random Access Memory (RAM), or any other dynamic storage device commonly known in the art. Read-only memory640can be any static storage device(s), e.g., but not limited to, Programmable Read-Only Memory (PROM) chips for storing static information, e.g., start-up or BIOS instructions for processor670. Mass storage650may be any current or future mass storage system, which can be used to store information and/or instructions. Exemplary mass storage systems include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), e.g. those available from Seagate (e.g., the Seagate Barracuda 7102 family) or Hitachi (e.g., the Hitachi Deskstar 7K1000), one or more optical discs, Redundant Array of Independent Disks (RAID) storage, e.g., an array of disks (e.g., SATA arrays), available from various vendors including Dot Hill Systems Corp., LaCie, Nexsan Technologies, Inc. and Enhance Technology, Inc. 
Bus620communicatively couples processor(s)670with the other memory, storage and communication blocks. Bus620can be, e.g., a Peripheral Component Interconnect (PCI)/PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), USB or the like, for connecting expansion cards, drives and other subsystems, as well as other buses, such as a front side bus (FSB), which connects processor670to the software system. Optionally, operator and administrative interfaces, e.g., a display, keyboard, and a cursor control device, may also be coupled to bus620to support direct operator interaction with the computer system. Other operator and administrative interfaces can be provided through network connections, which are connected through communication port660. External storage device610can be any kind of external hard drive, floppy drive, IOMEGA® Zip Drive, Compact Disc-Read-Only Memory (CD-ROM), Compact Disc-Re-Writable (CD-RW), or Digital Video Disk-Read-Only Memory (DVD-ROM). Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system limit the scope of the present invention. Embodiments of the present invention may be implemented entirely by hardware, entirely by software (including firmware, resident software, micro-code, and the like) or by combining software and hardware implementations, which may all generally be referred to herein as a "circuit," "module," "component," or "system." Furthermore, aspects of the present invention may take the form of a computer program product, comprising one or more computer-readable media having computer-readable program code embodied thereon. Thus, it will be appreciated by those of ordinary skill in the art that the diagrams, schematics, illustrations, and the like represent conceptual views or processes illustrating systems and methods embodying this invention. The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing associated software. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the entity implementing this invention. Those of ordinary skill in the art further understand that the exemplary hardware, software, processes, methods, and/or operating systems described herein are for illustrative purposes and, thus, are not intended to be limited to any particular configuration, method or operating system named. As used herein, and unless the context dictates otherwise, the term "coupled to" is intended to include both direct coupling (in which two elements that are coupled to each other contact each other) and indirect coupling (in which at least one additional element is located between the two elements). Therefore, the terms "coupled to" and "coupled with" are used synonymously. Within the context of this document the terms "coupled to" and "coupled with" are also used euphemistically to mean "communicatively coupled with" over a network, where two or more devices are able to exchange data with each other over the network, possibly via one or more intermediary devices. While the foregoing describes various embodiments of the invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. 
The invention is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art. Having thus described the invention in rather full detail, it will be understood that such detail need not be strictly adhered to, but that additional changes and modifications may suggest themselves to one skilled in the art, all falling within the scope of the invention as defined by the subjoined claims.
24,855
11857842
EMBODIMENTS InFIG.1a schematic top view of an apparatus1in accordance with at least some embodiments of the present invention is illustrated. The shown apparatus1is a wrist-watch. The apparatus1comprises at least one processing core and at least one memory including computer program code. The at least one memory and the computer program code are configured to, with the at least one processing core, cause the apparatus1at least to receive a first signal3from an exercising device2, process the received signal, respond to the received signal by transmitting a second signal4to the exercising device2, and participate in a pairing process5with the exercising device2. In other words, when a user6wearing the wrist-watch1is located close to an exercising device2or starts to use the exercising device2, the wrist-watch1and the exercising device2start to communicate with each other. A first signal as indicated by arrow3from the exercising device2is transmitted to the wrist-watch1. Then the received signal3is processed by the processing core of the wrist-watch1. Subsequently, a second signal as indicated by arrow4is transmitted from the wrist-watch1to the exercising device2. This process is called pairing5. Data can now be transferred between the wrist-watch1and the exercising device2. Data is typically transferred using low power wireless communication technology such as Bluetooth, Bluetooth Low Energy, or Wibree. The exercising device2shown inFIG.1is a treadmill. The treadmill belt of the exercising device2may be moving at a specific speed as indicated by arrow8as a user6is running on the belt. At the same time, the arms of the runner6move cyclically as indicated by arrow7. Data may be determined by the sensors of the wrist-watch1. Examples of such determined data are a heart-beat rate, a number of steps during a certain period of time, or acceleration data. Data may also be determined by sensors of the exercising device2and transmitted to the apparatus1. An example of such data is the speed of the moving treadmill belt of the exercising device2. The information about the speed of the treadmill belt may be transmitted from the exercising device2to the wrist-watch1. The information about the speed of the treadmill belt may then be displayed on the wrist-watch. In other words, the wrist-watch1is configured to serve as a display of the exercising device2. Of course, data determined by at least one sensor of the wrist-watch1may also be displayed on the display of the wrist-watch. The user6may further choose which data is displayed. According to certain embodiments, the exercising device2may also comprise an additional display and data may be transmitted from the wrist-watch1to the exercising device2. A user6may choose which information is shown on the display of the exercising device2and which information is at the same time displayed on the display of the wrist-watch1. In other words, the user6may choose which sensor data is displayed on the display of the wrist-watch1and which sensor data is displayed on the display of the exercising device2. According to certain other embodiments, the apparatus1is configured to control parameters or functions of the exercising device2after the pairing process5. In the shown example, a user6may control the speed of the treadmill belt of the exercising device2as indicated by arrow8via a user interface of the wrist-watch1. A user interface of the wrist-watch1may be a touchscreen or at least one button, for instance. 
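A minimal sketch of the pairing exchange and subsequent data transfer just described (including the treadmill speed command discussed in the next paragraph) might look as follows. The classes, message fields and dictionary-based transport are illustrative assumptions standing in for the Bluetooth, Bluetooth Low Energy or Wibree link; they are not part of the disclosure.

```python
# Hypothetical sketch of the pairing process 5 and subsequent data exchange
# between the wrist-watch (apparatus 1) and the exercising device 2.
# Message formats and class names are assumptions for illustration only.

class ExercisingDevice:
    def __init__(self):
        self.paired = False
        self.belt_speed_kmh = 0.0

    def first_signal(self):
        # advertisement sent when a user with a wrist-watch comes close
        return {"type": "pairing_request", "device": "treadmill"}

    def receive(self, message):
        if message["type"] == "pairing_response":
            self.paired = True
        elif self.paired and message["type"] == "set_speed":
            self.belt_speed_kmh = message["value"]

    def report(self):
        return {"type": "sensor_data", "belt_speed_kmh": self.belt_speed_kmh}


class WristWatch:
    def __init__(self):
        self.display = {}

    def respond(self, first_signal):
        # second signal 4 sent back to complete pairing 5
        return {"type": "pairing_response", "accepts": first_signal["device"]}

    def show(self, sensor_data):
        self.display.update(sensor_data)   # wrist-watch serves as the display


treadmill, watch = ExercisingDevice(), WristWatch()
treadmill.receive(watch.respond(treadmill.first_signal()))   # pairing
treadmill.receive({"type": "set_speed", "value": 8.5})        # user input via watch UI
watch.show(treadmill.report())                                # belt speed shown on watch
print(watch.display)   # {'type': 'sensor_data', 'belt_speed_kmh': 8.5}
```

A similar exchange pattern would apply to the audio and video control examples described in connection withFIGS.4and5.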
User instructions to change the speed of the treadmill belt may be transmitted from the wrist-watch1to the exercising device2and processed by the exercising device2, thus causing the exercising device2to change the speed of the treadmill belt. According to this embodiment, the procedure is typically fully or at least partially controlled by the exercising device2such that no program code or a minimum amount thereof needs to be installed on the mobile device1. The mobile device1serves as a user interface for the exercising device2. In other words, a computer program comprising program instructions which, when loaded into the exercising device2, cause e.g. graphical user interface data to be determined for the mobile device1is provided. The graphical user interface data is wirelessly transmitted to the mobile device1from the exercising device2to provide at least one user interface functionality on the mobile device1. Then data corresponding to user input is received and wirelessly transmitted to the exercising device2. Minimum system requirements such as processing capacity and memory capacity are required for the mobile device1. According to this embodiment, the input data is completely or at least partially processed by the exercising device2. The bidirectional communication link between the mobile device1and the exercising device2may be used to enable the exercising device2to act as a server having control over the user interface and the mobile device1to act as a client whose content is fully or at least partially controlled by the exercising device2. Of course, the apparatus1and/or exercising device2may be also configured to store and process sensor data received from a wearable sensor or any other external sensor, for example a MOVESENSE sensor. Such sensor data may be wirelessly transferred to the apparatus1or the exercising device2directly or to the apparatus1first and then to the exercising device2. According to a certain embodiment, an external sensor (not shown), for example a MOVESENSE sensor, is attached to a user and connected to the apparatus1, for example a wrist watch1. When the user comes to an exercising device2, the apparatus1automatically displays information. Simultaneously, the apparatus1receives instructions from the exercising device2. However, also the exercising device2may receive data from the apparatus1and/or the external sensor. The data may, for example, include personal data, sensor data and/or external sensor data. The data is typically processed by the exercising device2. This kind of user experience is automatically created. When the user changes the exercising device2, for example in a gym, the displayed information on the display of the apparatus1also automatically changes. The exercising device2can additionally receive further data from a server or via the internet. External sensor data can be analysed by the exercising device and content, for example information derived from the sensor data, can be automatically displayed on the apparatus1. In such a situation, the exercising device2may be used to enable the exercising device2to act as a server having control over the user interface and the mobile device1to act as a client whose content is fully or at least partially controlled by the exercising device2. InFIG.2a schematic side view of another apparatus1in accordance with at least some embodiments of the present invention is illustrated. The shown apparatus1is a mobile device such as a tablet or other mobile device. 
The shown exercising device2is an ergometer or indoor exercise bike. After the pairing process as described above in connection withFIG.1, parameters and/or logics such as an app are transmitted from the exercising device2to the apparatus1. The apparatus1is configured to store and process program code received from the exercising device2. The apparatus1is configured to serve as a display of the exercising device2. For example, a video simulation of a cycling track may be displayed on the display of the apparatus1. Thus, the user6can cycle along the simulated track. Sensors of the exercising device2may determine the cycling speed of the user6, for example. The sensor data of the exercising device2is then transmitted to the apparatus1. The sensor data can be used as input data for the video simulation displayed on the apparatus1. In other words, the user6can cycle along the virtual track with varying speeds. The visualization of the virtual cycling simulation is calculated based on the speed data obtained from the sensor data of the exercising device2. On the other hand, data may be transmitted from the apparatus1to the exercising device2, thus causing the exercising device2to change a parameter. Altitude data along the virtual track stored in the app may be provided, for instance. The altitude data can be used as input data for the parameters of the exercising device2as a function of time. When the data is received by the exercising device2, it causes the exercising device2to change the resistance of the exercise bike during cycling along the virtual track. In other words, cycling upwards or downwards along the virtual track can be simulated. The exercising device2is configured to transmit sensor data to the apparatus1and to receive in response input parameters from the apparatus1. Consequently, cycling along a virtual track, for example a passage of the Tour de France, can be simulated. The exercising device2may be, for example, located in a gym and different users may subsequently cycle along the virtual track. When each user brings his/her own apparatus1to the gym, a period of time may be determined by the app for each user for cycling from the beginning of the virtual track to the end of the virtual track. The period of time for each user may then be transmitted from the respective apparatus1to the exercising device2and stored in a memory of the exercising device2. The different periods of time may be ranked and listed so that a user can see his/her results in comparison to the results of other users. Thus, it is possible to simulate a cycling competition, for instance. Of course, the apparatus1may also be used for displaying only information such as cycling speed or length of a cycling session period, or for selecting a cycling resistance of the exercising device2. Data determined by sensors of the exercising device2may be received by and stored in the apparatus1. Alternatively, data determined by sensors of the exercising device2may be received by the apparatus1and stored in the cloud. Thus, the user6can analyse the stored data at a later stage by reading out the memory of the apparatus1or by viewing a webpage on the internet. InFIG.3a schematic side view of an exercising device2in accordance with at least some embodiments of the present invention is illustrated. In the shown embodiment, the exercising device2is a rowing machine. The apparatus1may be a tablet computer, for instance. 
The exercising device2comprises at least one processing core13and at least one memory14including computer program code. The at least one memory14and the computer program code are configured to, with the at least one processing core13, cause the exercising device2at least to transmit a first signal3to an apparatus1, receive a second signal4from the apparatus1, and participate in pairing5with the apparatus1. Subsequent to the pairing process5, program code to be stored and processed by the apparatus1can be transmitted from the exercising device2. Parameters and/or logics such as a rule engine, an app, a classification recipe or a html web page can be transmitted to the apparatus, for instance. For example, an app may be transmitted to the apparatus1. A user can select a training program with the help of the app. During the training session, the exercising device2may, for example, transmit a recipe or an instruction to the apparatus1how to analyse movements of a user6. The movements of the user may be determined or recorded using sensors of the exercising device2. Examples of such sensors of the exercising device are force sensors and acceleration sensors. Data determined by the sensors of the exercising device2may be shown on a display15of the apparatus1. A user can further input data using a user interface17of the apparatus1. A user interface17may be, for example, a touchscreen, a button, a keyboard or an optical system analysing gestures of the user. The exercising device2is capable of receiving the data which has been input via the user interface17of the apparatus1. The exercising device2is capable of receiving instructions from the apparatus1. For example, another training program may be selected. According to certain embodiments, a first exercising device2and a first apparatus1in accordance with at least some embodiments form a first unit and a second exercising device2and a second apparatus1in accordance with at least some embodiments form a second unit. The first unit and the second unit are capable of communicating with each other. For example, rowing of a rowing boat having two seats can be simulated. Subsequent to starting of a specific training program, two users simultaneously using respective exercising devices have to synchronize their movements in order to row a virtual rowing boat. A first user is then virtually in the position of the person sitting in front of the other person. Thus, a sports team can train rowing of a rowing boat, for example in winter time when training with a real rowing boat is not possible due to weather conditions. InFIG.4a schematic top view of a further apparatus1in accordance with at least some embodiments of the present invention is illustrated. The exercising device2includes an audio system9. After the pairing process5as described above in connection withFIG.1, a music program can be started or stopped, a volume of music can be controlled and/or a title can be selected using the mobile device1in the form of a wrist-watch1. According to this embodiment, the procedure is typically fully or at least partially controlled by the exercising device2such that no program code or a minimum amount thereof needs to be installed on the mobile device1. The mobile device1serves as a user interface for the exercising device2. In other words, a computer program comprising program instructions which, when loaded into the exercising device2, cause e.g. graphical user interface data to be determined for the mobile device1is provided. 
The graphical user interface data is wirelessly transmitted to the mobile device1from the exercising device2to provide at least one user interface functionality on the mobile device1. Then data corresponding to user input is received and wirelessly transmitted to the exercising device2. Minimum system requirements such as processing capacity and memory capacity are required for the mobile device1. According to this embodiment, the input data is completely or at least partially processed by the exercising device2. The bidirectional communication link between the mobile device1and the exercising device2may be used to enable the exercising device2to act as a server having control over the user interface and the mobile device1to act as a client whose content is fully or at least partially controlled by the exercising device2. Alternatively, a provided further, second mobile device (not shown), for example a smartphone, may include an audio system. In such a case, a music program can be started or stopped, a volume of music can be controlled and/or a title can be selected using the wrist-watch1after the pairing process5between the apparatus1and the exercising device2. In other words, the apparatus1may be used to additionally control functions of a further second mobile device. According to this embodiment, the procedure is typically fully or at least partially controlled by the second mobile device such that no program code or a minimum amount thereof needs to be installed on the mobile device1such as a wrist-watch. The mobile device1serves as a user interface for the second mobile device. In other words, a computer program comprising program instructions which, when loaded into the second mobile device, cause e.g. graphical user interface data to be determined for the mobile device1is provided. The graphical user interface data is wirelessly transmitted to the mobile device1from the second mobile device to provide at least one user interface functionality on the mobile device1. Then data corresponding to user input is received and wirelessly transmitted to the second mobile device. Minimum system requirements such as processing capacity and memory capacity are required for the mobile device1. According to this embodiment, the input data is completely or at least partially processed by the second mobile device. The bidirectional communication link between the mobile device1and the second mobile device may be used to enable the second mobile device to act as a server having control over the user interface and the mobile device1to act as a client whose content is fully or at least partially controlled by the second mobile device. InFIG.5a schematic side view of another exercising device2in accordance with at least some embodiments of the present invention is illustrated. The shown exercising device2is an ergometer in the form of an indoor exercise bike. The exercising device2comprises at least one processing core and at least one memory including computer program code. The at least one memory and the computer program code are configured to, with the at least one processing core, cause the exercising device at least to transmit a first signal to an apparatus1, receive a second signal from the apparatus1, and participate in pairing with the apparatus1. The exercising device2further comprises a video system10. The video system10may be, for example, a TV, a tablet, or a PC. The mobile device1may be used as a remote control of the video system10. A wrist-watch is shown as the apparatus1. 
Typically, the exercising device2is configured to transmit the first signal and to receive the second signal when a distance between the apparatus1and the exercise device2is about 0 m-10 m. In other words, the pairing process is activated when a user6with the wrist-watch1is moving closer to the exercising device2. After the pairing process between the apparatus1and the exercising device2, data can be transmitted between the apparatus1and the exercising device2. For example, the user6may select a TV channel to be shown on the video system10by the wrist-watch1. The wrist-watch1can therefore serve as a remote control when cycling. Alternatively, data obtained by sensors of the apparatus1, for example heart beat data, and data obtained by sensors of the exercising device2, for example speed data, may be displayed on the video system10. The exercising device2is configured to participate in the pairing process during a session with the apparatus1. The session is based on sensors of the apparatus1and the exercising device2. It is to be understood that the embodiments of the invention disclosed are not limited to the particular structures, process steps, or materials disclosed herein, but are extended to equivalents thereof as would be recognized by those ordinarily skilled in the relevant arts. It should also be understood that terminology employed herein is used for the purpose of describing particular embodiments only and is not intended to be limiting. Reference throughout this specification to one embodiment or an embodiment means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Where reference is made to a numerical value using a term such as, for example, about or substantially, the exact numerical value is also disclosed. As used herein, a plurality of items, structural elements, compositional elements, and/or materials may be presented in a common list for convenience. However, these lists should be construed as though each member of the list is individually identified as a separate and unique member. Thus, no individual member of such list should be construed as a de facto equivalent of any other member of the same list solely based on their presentation in a common group without indications to the contrary. In addition, various embodiments and example of the present invention may be referred to herein along with alternatives for the various components thereof. It is understood that such embodiments, examples, and alternatives are not to be construed as de facto equivalents of one another, but are to be considered as separate and autonomous representations of the present invention. Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided, such as examples of lengths, widths, shapes, etc., to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, etc. 
In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention. While the foregoing examples are illustrative of the principles of the present invention in one or more particular applications, it will be apparent to those of ordinary skill in the art that numerous modifications in form, usage and details of implementation can be made without the exercise of inventive faculty, and without departing from the principles and concepts of the invention. Accordingly, it is not intended that the invention be limited, except as by the claims set forth below. The verbs "to comprise" and "to include" are used in this document as open limitations that neither exclude nor require the existence of also un-recited features. The features recited in dependent claims are mutually freely combinable unless otherwise explicitly stated. Furthermore, it is to be understood that the use of "a" or "an", that is, a singular form, throughout this document does not exclude a plurality. INDUSTRIAL APPLICABILITY At least some embodiments of the present invention find industrial application in the displaying of sensor data determined by at least one sensor of an exercising device and at least one sensor of a mobile device. Certain embodiments of the present invention are applicable in health care, in industry, in working environments, in sports, etc. REFERENCE SIGNS LIST
1 apparatus
2 exercising device
3 first signal
4 second signal
5 pairing
6 user
7 arrow
8 arrow
9 audio system
10 video system
11 processing core of apparatus
12 memory of apparatus
13 processing core of exercising device
14 memory of exercising device
15 display
16 sensor
17 user interface
18 server
22,747
11857843
DETAILED DESCRIPTION The DMRM's unique modular functionality allows it to attach or mount to various traditionally used force equipment (e.g., barbells, racks, benches) as well as to be used in other physical activities. The DMRM includes full closed/feedback-loop motor control, with adjustments and refinements based upon the user's dynamic or profiled reaction to the force being performed, in real time. This allows the user to utilize numerous muscle groups at once in an almost limitless number of physical activity forces and ranges of motion. The varying forces are based on applied user force and limit the likelihood of injury. Furthermore, the present invention has less mass than the traditional static weight plate equivalent; therefore, accidentally dropping the apparatus on a toe or finger would likely cause less injury to the user. The DMRM is accessible to users of various strength levels and can be easily transported. The modularity, combined with the novel means of replicating varying forces, and the lighter mass make the DMRM unlike any other force equipment. The DMRM may be used for a variety of types of physical activity. This includes exercise, boundary constraints, safety modules and two-person interactive activities, in varying configurations and mounting positions. FIG.1shows an example of a modular, standalone Dynamic Motion Resistance Module1. Although some of the exemplary embodiments described herein are tailored to a stand-alone module, the presently disclosed apparatus and methods are not limited to this configuration and can be used in other apparatus environments using similar applications and methods. One or more modules may be mounted or anchored to the equipment being used. As illustrated inFIG.1, the apparatus includes an open hub13that is sized to fit on varying types of equipment, such as Olympic or standard Barbell and Dumbbell components. The outer shell10houses the dynamic force components including a motor, such as a DC motor, a power source, a smart controller/wireless communication, sensors, an embedded processor and a cable or strap spool4. The module may also include a display. Cable or strap spool4of the DMRM1provides a connection point5to attach hand grips, bars, or fixed points for the user to use the attached module. Sensors may include Hall effect sensors, strain gauges, or safety interlocks, as well as external physiological sensors for measures such as heart rate, forces, timing, workout form, calorie burn, workout repetition speed and workout history. The force sensors are located within the logical force sensor module; however, the exact physical location may vary for applications other than DMRM-specific ones. The sensor feedback may be audible, tactile and/or haptic. DMRM1is fitted onto internal rotational part13providing varying forces to the strap or cable4in a linear direction2, such that the user experiences a varying force based upon sensor control and calculated inputs to optimize the physical activity session. DMRM1also accommodates placard and branding space16. FIG.2shows an exemplary illustration of the inside of DMRM1and internal force functionality demonstrating the major components applied in delivering the dynamic forces, including the resulting linear vector of force2created by the internal rotational force3, and a typical communication device9sending the commands for varying forces to the module. The torque-to-linear force is generated by the motor, gearing, pulleys, or Eddy force component6, powered by a supply source7, for example, batteries or line power. 
The forces and communication are handled by an internal processor, wireless radio, and force sensor module8(“force sensor module”) acting both as an apparatus tracking measurement unit (“ATMU”) and a self-contained integrated DMRM (offline/manual mode) alternately receiving control commands from a commercially available external device9, acting as an apparatus tracking processing unit (“ATPU”). The ATMU measures apparatus/module data and uses an electronic communications channel to transmit the measured data to the ATPU. A second electronic communications channel is used by the ATMU to transmit one or more of the apparatus conditions data to the user interface to adjust dynamic forces. The user interface, either local on the device or an associated application, is used to adjust all forces and physical activity profiles. The ATPU includes a microprocessor and a memory storage area. The memory storage area includes a database and a tracking processor module. The tracking processing module includes program instructions and algorithms that, when executed by the microprocessor, determines one or more tracking parameters using the measured data and a set of evaluation rules and the apparatus and/or module conditions measured by the ATMU, using one or more of the tracking parameters and another set of evaluation rules. The database stores the sets of evaluation rules. At least one set of rules corresponding to one or more of the personal tracking parameters, such as repetitions per minute, total repetitions, calories burned, and goals achieved, another set of evaluation rules corresponding to the one or more conditions of the apparatus and/or module. The embedded processor of module1monitors the electronic motor control loop, sensor management and wireless communications, such as Bluetooth Low Energy (BLE), Wi-Fi or cell. The embedded processor provides local control and calculations and variables, such as main power, timers, motor control profile, start/stop, effective forces, and safety interlock status. It can also provide the ATPU with calculated or raw data so higher-level calculations can be performed at either boundary of the architecture. The ATPU is a logical element that may be physically located within the DMRM or in the user interface. The ATPU transmits the apparatus conditions such as battery charge status, safety status and system health. The optimized linear forces are directed to cable or strap4. Cable or strap4includes an attachment point5, such as a cleat, an eyehook or other common or custom attachment points, to allow a variety of accessories and attachment options to cable or strap4. When the module is “off-line” it can be in either low power sleep mode or powered off. FIGS.3aand3billustrate an embodiment of the DMRM1in practice with application of forces and internal force functionality mounted on a typical exercise barbell or dumbbell rod30. The resulting vector of force2may be accommodated by an internal Industry Standard/Common Barbell or Dumbbell rod30or other common hub adaptations for the module to connect/mount. Strap or cable4and attachment point5are in a linear direction, such that the user experiences a varying force based upon rate, form, pre-planned exercise routines, sensor and/or calculated inputs to optimize a physical activity session. 
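The division of work between the ATMU and the ATPU described above might be sketched as follows. The rule structure, the parameter names and every numeric constant are assumptions made for illustration; the actual evaluation rules are stored in the ATPU database and are not disclosed in this form.

```python
# Hypothetical sketch of the ATMU -> ATPU data flow: the ATMU reports measured
# module data, and the ATPU applies stored evaluation rules to derive tracking
# parameters (e.g., repetitions per minute, calories burned) and apparatus
# conditions (e.g., low battery). All names and numbers are illustrative.

def atmu_measurement(rep_count, elapsed_s, mean_force_n, battery_pct):
    """Data the ATMU would transmit over the first communications channel."""
    return {"reps": rep_count, "elapsed_s": elapsed_s,
            "mean_force_n": mean_force_n, "battery_pct": battery_pct}

TRACKING_RULES = {
    # tracking parameter: function of the measured data (first set of rules)
    "reps_per_minute": lambda m: 60.0 * m["reps"] / m["elapsed_s"],
    "calories_burned": lambda m: 0.001 * m["mean_force_n"] * m["reps"],  # assumed constant
}

CONDITION_RULES = {
    # apparatus condition: function of the measured data (second set of rules)
    "low_battery": lambda m: m["battery_pct"] < 20,
    "safety_interlock_ok": lambda m: m["mean_force_n"] < 900,
}

def atpu_process(measured):
    """Apply both rule sets to one ATMU report."""
    tracking = {name: rule(measured) for name, rule in TRACKING_RULES.items()}
    conditions = {name: rule(measured) for name, rule in CONDITION_RULES.items()}
    return tracking, conditions

tracking, conditions = atpu_process(atmu_measurement(24, 90.0, 350.0, 18))
print(tracking)     # {'reps_per_minute': 16.0, 'calories_burned': 8.4}
print(conditions)   # {'low_battery': True, 'safety_interlock_ok': True}
```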
DMRM1includes multiple safety mechanisms, such as cable safety stops (cut-off switch), anchor points (foot anchor18inFIG.3aor floor anchor17inFIG.3b), and/or hardware/software control loops and feedback loops (sensor, electronic, software) for real-time, closed-loop controlled and dynamic force application. Foot anchors18counteract the applied forces for a dynamic free weight experience. FIGS.4a,4b, and4cillustrate DMRM1being used with weight bench40. DMRM1is mounted on bar30. The user is able to perform a variety of exercises with different ranges of the vector of force2.FIG.5illustrates the use of DMRM1on rowing machine50. The user interface9can be part of the rowing machine or can be a separate user interface such as a smartphone. Two DMRMs1are attached to rower50; however, the number of modules attached to the equipment can be one or more. The user pulls on cables4while rowing on rowing machine50and receives real-time feedback and a haptic sensation of actually rowing in water. FIGS.6,7,8and9show exemplary illustrations of additional uses of DMRM1. In addition to mounting the DMRM to traditional exercise equipment, static weight plates14may be added as seen inFIG.6. DMRM1may be mounted in other ways; for example, DMRM1may be mounted to one or more anchor points70on a load bearing structure and then attached to a swimmer's harness15to adjust or measure dynamic physical activity force while swimming (FIG.7). As seen inFIG.8, DMRM1may also be used for two-person interactive exercises or therapy activities. One user holds onto barbell80, on which two modules are mounted, for example, while the other user attaches a barbell (or other form of equipment)85to strap or cable4via attachment point5. Another example, shown inFIG.9, attaches DMRM1to an animal or pet by a harness or leash12, for example. DMRM1provides freedom of movement for the animal, unless the animal reaches the user-set boundary. Once the set boundary92is reached, dynamically applied forces begin to apply resistance, leading to a full stop (a hold or lock mode, for example) at a controlled length and containment. FIGS.10,11and12provide additional alternate uses of DMRM1.FIG.10shows attaching DMRM1to treadmill100at attachment point102and attaching cable or strap4to the user's waist by a harness or other connection point104, keeping the runner centered on treadmill100. DMRM1may also be used as a safety arresting module, such as inFIG.11, attached to a user at connection point110, such as a harness, providing freedom of movement to the user (human or animal). If, or when, a spurious force is detected, such as a fall or trip, the apparatus holds or locks, securing the user.FIG.12illustrates use by a sprinter or skater in which DMRM1is attached to the user by a harness or other connection point19during training. The apparatus senses and controls the forces applied to the user. The module can additionally be profiled and used for static force routines with programmable forces and hold times, adapted to the daily physical activity, or to add the same elements of closed-loop force adjustments to other physical exertion applications and therapies. The sensors discussed above may be packaged separately as a force sensor module and, when used within the DMRM, provide real-time measurement and tracking of forces experienced by the user at the tangent force vector. This allows the user to utilize numerous muscle groups at once in an almost limitless number of physical activity forces and ranges of motion. 
The varying forces are based on applied user force and limit the likelihood of injury, although a user has the option to set a desired static force. A force sensor module may be used within the DMRM as discussed above, or in other electromechanical motors, such as in an e-bike. The force sensor module is a torque measurement system that provides real-time tracking and motor control for adjusting standard and dynamic linear-to-torque forces. The measurement system of the force sensor module includes a unique arrangement of single-axis levered load cells, such that rotational force can be measured. The packaging of either a half or full Wheatstone bridge analog measurement from the load cells can be accurately calibrated and tracked for forward and reverse torque, at the point of tangential conversion. The force sensor module functionally comprises the ATMU and ATPU modules, sensors, an internal processor, a wireless radio, a power source and a user interface. The sensors section may include Hall effect sensors (and/or accelerometers, gyrometers, magnetometers, proximity or optical/proximity sensors) for positional information, strain gauges/load cells (for example, force sensitive resistors/common load cells, piezo, optic, or torsional sensors) for forces, and contact closures or proximity detection for safety interlocks and the motor controls. Torque-to-linear forces are measured during physical activity in real time, with the apparatus including a force sensing module, an electromechanical motor, processors, and a user interface device. When integrated, the system includes one or more sensors measuring data for physical activity efficiency, the force sensor module, a variable length cable, a force generating component, and the closed-loop motor controls. The force sensor module communicates the measured resistance at the point of tangential dynamic forces, experienced by the user at the end of a cable/strap or at the DMRM mounted position. The ATPU, which is part of the force sensor module, includes the first electronic communications channel for receiving the measured data from the ATMU, a motor controller, a microprocessor, a memory storage area, a database stored in the memory storage area, and the logic forming a tracking processing module. All of the logical components of the ATPU may be located separately or combined into one circuit board. The ATPU determines the rate, cable length and resulting force when the user applies a counter force, based on the prescribed exercise mode and current user settings. Within the ATMU is a torque sensor module that provides real-time feedback from lever-based load cells, such as strain gauges, packaged within an electromechanical motor, flywheel or other static resistance sections. The rotor has torsional freedom of motion in the rotational motor direction. A load wedge is connected to the rotor and transfers the rotational force of the rotor motion to the levered section of the load cells, forming compression forces. The compression forces (measured as a voltage drop across a resistance) are converted to torque forces and can be used to provide closed-loop motor control of user-experienced forces during a physical activity. The raw analog load cell data (arranged as a half or full bridge) is converted by the ATMU using an analog-to-digital converter (ADC) and can be wirelessly communicated to the ATPU for further processing. 
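A minimal sketch of the bridge-to-torque conversion chain described above is given below. The ADC resolution, excitation voltage, bridge sensitivity, rated force and lever arm are assumed calibration values chosen only to make the example runnable; in practice they would come from calibrating the half or full Wheatstone bridge at the point of tangential conversion.

```python
# Hypothetical conversion of a Wheatstone-bridge load cell reading into torque.
# Every constant below is an illustrative assumption, not a value from the
# specification.

ADC_BITS = 24                 # resolution of the assumed ADC
V_EXCITATION = 5.0            # bridge excitation voltage (V)
SENSITIVITY_V_PER_V = 0.002   # full-scale bridge output: 2 mV/V
FULL_SCALE_FORCE_N = 500.0    # load cell rated compression force
LEVER_ARM_M = 0.05            # radius at which the load wedge presses the load cell

def adc_counts_to_torque(counts):
    """Signed ADC counts -> bridge voltage -> compression force -> shaft torque."""
    full_scale_counts = 2 ** (ADC_BITS - 1) - 1
    bridge_voltage = (counts / full_scale_counts) * V_EXCITATION * SENSITIVITY_V_PER_V
    force_n = (bridge_voltage / (V_EXCITATION * SENSITIVITY_V_PER_V)) * FULL_SCALE_FORCE_N
    return force_n * LEVER_ARM_M   # N*m; sign distinguishes forward from reverse torque

print(f"{adc_counts_to_torque(4194303):+.2f} N*m")   # about half of full scale, forward
print(f"{adc_counts_to_torque(-8388607):+.2f} N*m")  # full scale, reverse
```

Because the load wedge presses either a forward or a reverse load cell depending on the direction of rotation, the sign of the reading distinguishes forward from reverse torque.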
In an embodiment of the force sensor module of the present invention, a slip bearing is formed between a motor rotor and a load cell mounting ring, allowing the forward and reverse forces to be measured. The load cells may be mounted at opposing angles relative to the rotational center axis. A load wedge may be attached to a rotor section of the motor at a tangential transition, such that a force is applied to the load cell levered section. One load wedge may be used for a half bridge configuration and two load wedges for a full bridge configuration. The force sensor module may be packaged within a DMRM or used in similar applications where sensor information is wirelessly communicated between a rotor and stator. This wireless communication between the rotor and stator of the present invention can be used within motor control applications involving positional, rotational speed and force sensor communications. The tracking process includes program instructions and algorithms that, when executed by a microprocessor, cause the microprocessor to determine one or more tracking parameters using the raw data measured by the ATMU. For example, the tracking process may send control signals to the resistance generating component of a user device with a first set of evaluation rules, and determine one or more apparatus condition parameters, using one or more previously established tracking parameters, with a second set of evaluation rules. The flow and functionality of the force sensor module system is as follows: The ATPU receives digital force and positional information from the ATMU and sensors, such as Hall Effect positional data, voltage, current usage, speed, and other secondary motor parameters from the motor controller. The ATPU filters, prioritizes, processes, and provides motor control parameters back to the controller for the next set points. The communication and control are tightly coupled for minimal signal delay and therefore can provide dynamic feedback during physical/work activity, thus feeling seamless to the user. The present invention simulates real-world forces such as rowing, swimming, runner start force and other physical work-related activities as a learn, replicate and improve simulation. This arrangement of the force sensor module of the present invention, having a rotor wirelessly communicating sensor force and positional data to the stator controller of an electromechanical motor, can be used in other portable, e-vehicle hub motors and electro-mechanical or physical work use cases. For example, an e-bike hub motor could be adapted to include this unique self-contained force sensor module system, in place of the current state of the art, which has a torque sensor in the pedals and the power supply external to the electromechanical motor section. The present invention provides the capability to wirelessly communicate information to/from the spinning rotor to the stator section of the motor. This is an improvement over the prior art and avoids the cost of complicated mechanical slip bearings and their packaging challenges. FIG.13shows an exploded view illustrating the inner workings of a DMRM utilizing the force sensor module configuration. Front rotor cover1301and back rotor cover1302are affixed to magnet rotor ring1303. Front stator cover1306and back stator cover1307are affixed to open hub stator1304. The rotors move around the axis of rotation formed at open hub stator attachment1304, utilizing slip bearing interface1305. 
Magnet rotor ring1303is driven by the electromagnetic forces generated within coil stator1309by motor controller1310. This combination of parts, when driven by motor controller1310, forms a motor assembly with the cover plates providing the torsional functionality desired. Although some of the exemplary embodiments described herein are tailored to a DMRM, for example open versus closed hub, the present force sensor module and methods are not limited to this configuration and can be used in other apparatus environments with similar applications and methods. As illustrated inFIG.13, the apparatus includes open hub stator attachment1304, which is sized to fit on varying types of equipment, such as Olympic or standard barbell and dumbbell components. A cable or strap can be attached in tangent force direction1311such that it will experience converted torsional rotation force1312applied from the motor section described within. The force sensor module configuration includes load cell ring1313floating relative to front rotor cover1301and back rotor cover1302, while providing freedom in the rotational axis. Load wedge1314is affixed to magnet rotor ring1303such that it is captured between forward load cell1315and reverse load cell1316. In this arrangement the rotation force can be transferred to the tangent force and sensed as a compression force applied to the levered sensing section of the strain gauge-based load cells. Forward load cell1315and reverse load cell1316are wired in a half Wheatstone bridge electrical profile, providing an analog input signal to load cell amp1317. Load cell amp1317converts this signal to a weighted analog-to-digital (ADC) output and provides the raw sensor data to wireless device1318or slip connector for further processing, filtering, and tracking on a computer or other user devices. FIG.14shows an alternate exemplary embodiment of a front view of a force sensor module configuration. Slip bearing interface1405is formed between coil stator1409and magnet rotor ring1403with load wedge1414affixed to magnet rotor ring1403. This view further illustrates an alternate embodiment of adding a second forward load cell1415and second reverse load cell1416in a full Wheatstone bridge electrical schema at an alternate mount1421position, thus further improving measurement accuracy. The resulting forces experienced at load cell ring1413when counter forces are applied in the tangent force direction1411are sensed as compression forces by the load cells. As previously described, these forces are converted by load cell amp1417and communicated by wireless device1418or slip connector interface to a computer or other user devices, for further processing, filtering, and tracking. Further accuracy can be realized with the force sensor module by utilizing positional data from the optional multi-axis sensors within wireless device1418and/or a supplemental hall effect positional sensor1420. The positional data supplements the tangent force1411tracking, forming a weighted force vector and calculating the length of cable or strap released or retracted. When combined with a clock timer, these data sources provide rate, position and force, which provide complete sensing and closed loop control of dynamic forces in real-time. FIG.15shows an isometric view illustrating the inner workings of a DMRM utilizing the force sensor module configuration. 
When the force sensor module incorporates wireless device1518, a separate rotating low-power supply1519may be used to power load cell amp1517and wireless device1518. Alternatively, the devices can be powered through a slip connector, packaged within slip bearing interface1505. In both cases, the commercially available devices are very low power, with sleep and chip-enable capabilities for long-life usage prior to needing replacement or charging, should batteries be used as the power supply. The higher-demand electromagnetic motor functions described previously can be powered from a power supply, such as a battery pack, fuel cells, rechargeable power, or a fixed power supply, in stator power supply area1522. The stator power supply is packaged and affixed within a closed hub or open hub stator1504and coil stator1509. FIG.15further illustrates the force sensor module system functionality. As demonstrated in this view, load wedge1514is attached to magnet rotor ring1503. When rotating, load wedge1514applies a compression force to the levered section of forward load cell1515or reverse load cell1516depending on the tangent force1511experienced. As described previously, load cell ring1513is captured by the front and back rotor cover1502. Load cell ring1513may also include a cable or strap collection channel1523feature for cable or strap management during extension and retraction. Further accuracy of force measurement can be incorporated by adding a second load cell configuration forming a full Wheatstone bridge in the alternate mount1521section of magnet rotor ring1503. FIG.16illustrates components of the force sensor module, including load cell amp1617and embedded wireless device1618with wireless communication1624method, local display1625, user device1626and processing element1627, for example, for further analysis, filtering and tracking on a computer, microprocessor, or other devices, as system node options. In the foregoing specification, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
23,156
11857844
Referring to the figures, an aquatic activity fin1according to embodiments of the invention, comprises a foot support portion2configured to be worn on a person's foot, and a fin portion3fixed to and extending from the foot support portion2. The fin portion3extends generally along a base plane P that is generally parallel to the sole of a wearer's foot, similar to a general configuration of a conventional swimming fin. The foot support portion2is illustrated very schematically and may have any per se known configurations for attaching a swimming fin to a person's foot, including for instance a shoe portion or a strap to hold a person's foot to the fin. The fin portion3comprises a support frame4and a plurality of hydrofoil blades5mounted to the support frame4via pivot couplings6. In the illustrated embodiment, the support frame4comprises a pair of side bars8on opposed lateral sides of the fin portion and receiving therebetween the hydrofoil blades5that are pivotally coupled to the support frame bars. The side bars8may for instance comprise substantially flat and linear bars, however various profiled rods, bars and other structural elements may be provided to form the support frame. In the illustrated embodiment, the side bars8of the support frame4are arranged in a substantially parallel manner such that the gap therebetween defining the width of the hydrofoil blades is substantially constant. However, within the scope of the invention, the side bars8may be non-parallel and may either diverge away from the foot portion or converge from the foot portion towards the extremities of the support frame and such that different hydrofoil blades5have different widths spanning between the side bars8. The diverging, respectively converging shapes for increasing, respectively decreasing hydrofoil blade widths may be configured to adjust the thrust produced by the hydrofoil blades as a function of the distance of the blade from the foot portion2. The side bars8may further have non-linear shapes and may further curve upwards or downwards away from the generally base plane P aligned with the wearer's foot sole, depending on the hydrodynamic properties that are desired from the fin portion3. The desired hydrodynamic properties of the fin portion may for instance depend on the type of activity intended for the fin, for instance mainly stationary, slow moving, “walking” or “running” exercises in water, or for generating maximum thrust for swimming. The return displacement of the fin with respect to the water flow direction V is illustrated inFIGS.1a,1band3a, and the thrust position with respect to the water flow direction V is illustrated inFIGS.2a,2band3b. Each hydrofoil blade5comprises a hydrofoil surface9extending from a leading edge9ato a trailing edge9b. The hydrofoil surfaces may have a symmetric or substantially symmetric profile with respect to the chord line C which may thus correspond to a mean camber line of the hydrodynamic hydrofoil profile. Within the scope of the invention it is however possible to have a non-symmetric hydrofoil profile for adjusting the relationships between the drag force Fdrag, thrust force Fthrust, and lift force Fliftof the fin moving in the return, respectively thrust directions. Such adjustment may depend on the intended application, in order to optimize a particular hydrodynamic behavior of the fin. 
Moreover, the blade leading edge maximum thickness D2, the blade chord length L, and the hydrofoil profile9(depending inter alia on the mean camber line) may all be varied to adjust hydrodynamic properties of the blades. The aforementioned properties may be adjusted using conventional simulation methods used for hydrofoils. In the present invention, for aquatic activities including swimming, treading in water and advancing in water using "walking" or "running" movements, advantageous dimensions of the hydrofoil blades for adults are in the following ranges:
D2 preferably is in a range from 2 to 4 mm,
L preferably is in a range from 40 to 80 mm,
L/R preferably is in a range from 2 to 20.
The preferred width of the hydrofoil blades ranges from 150 to 250 mm. The pivot coupling6may comprise a pivot bearing rod or pin6aon one of the support frame and hydrofoil blade, and a complementary pivot bearing orifice6bon the other of the support frame and hydrofoil blade. The pivot coupling allows the hydrofoil blade to rotate around a pivot axis A that is arranged substantially orthogonal to the direction of the support frame4extending from the foot support portion2. In the illustrated embodiment, the pivot axis A is positioned substantially at a position where the blade has its maximum thickness D2, which is situated proximal to the leading edge9a. In the illustrated embodiment, the hydrofoil blade is provided with a pin6athat is rotatably received within a pivot bearing orifice6bin the side bars8; however, as indicated above, the pin may be provided on the side bars and engage in a corresponding orifice within the hydrofoil blade, or in another variant, a separate pin may be inserted into orifices in the support frame side bars and hydrofoil blade. In another variant, the pivot coupling6may be an elastically deformable coupling interconnecting the hydrofoil blade and the support frame. The fin portion3further comprises bidirectional angle stops7that define a maximum rotation angle β of the blade. In preferred embodiments, the maximum blade pivot angle β is in a range of 60 to 120 degrees, preferably in a range of 70 to 120 degrees, more preferably in a range of 80 to 115 degrees. The bidirectional angle stops are configured such that, in the thrust position as illustrated inFIGS.2a,2band3b, the hydrofoil blade5trailing edge9bis positioned above the base plane P, whereas in the return position as illustrated inFIGS.1a,1band3a, the blade trailing edge9bis positioned below the base plane P. The bidirectional angle stops7may advantageously be arranged to stop the hydrofoil blade5in:
a thrust position such that an angle γ of the chord line C of the hydrofoil blade with respect to the base plane P is in a range of 3 to 30 degrees, preferably in a range of 5 to 20 degrees;
a return position such that the angle γ of the chord line C of the hydrofoil blade with respect to the base plane P is in a range of 70 to 90 degrees, preferably in a range of 75 to 85 degrees.
The plurality of blades5, arranged in a juxtaposed manner substantially along the base plane P, are spaced apart at a distance D greater than the chord length L of the blade (D>L) such that the blades can pivot across the base plane P without interfering with each other. 
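The dimensional preferences recited above lend themselves to a quick consistency check. The sketch below is an illustrative helper only, under assumed example values that are not taken from this disclosure; it simply tests a candidate blade geometry against the stated ranges for D2, L, L/R, the blade spacing D relative to L, and the maximum pivot angle β.

```python
# Illustrative helper only: check a candidate blade geometry against the preferred
# ranges stated above. The example numbers are assumptions for demonstration.

def within(value: float, low: float, high: float) -> bool:
    return low <= value <= high

def check_blade(d2_mm: float, l_mm: float, r_mm: float,
                spacing_d_mm: float, pivot_beta_deg: float) -> dict:
    """Return a pass/fail map for the stated preferred ranges."""
    return {
        "D2 in 2-4 mm": within(d2_mm, 2.0, 4.0),
        "L in 40-80 mm": within(l_mm, 40.0, 80.0),
        "L/R in 2-20": within(l_mm / r_mm, 2.0, 20.0),
        "D > L (blades pivot without interfering)": spacing_d_mm > l_mm,
        "pivot angle beta in 60-120 degrees": within(pivot_beta_deg, 60.0, 120.0),
    }

if __name__ == "__main__":
    results = check_blade(d2_mm=3.0, l_mm=60.0, r_mm=10.0,
                          spacing_d_mm=70.0, pivot_beta_deg=100.0)
    for rule, ok in results.items():
        print(f"{rule}: {'ok' if ok else 'out of range'}")
```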
An important advantage of this characteristic is that, in the maximum thrust position as illustrated inFIGS.2a,2band3b, the water flow over the hydrofoil blade5produces not only a thrust force component Fthrustbut also a lift force component Fliftthat allows a swimmer to efficiently remain in a stationary or in a slowly advancing or retreating movement in water by treading water or performing a walking or running movement in water in an efficient manner. Typically, with respect to the surface of the water (i.e. a horizontal plane), the base plane of the fin during a treading or walking movement may be at an angle for example of between 0 and 90 degrees, whereby the resultant force from the thrust force Fthrustand lift force Fliftacts upon the wearer in a direction that may be vertical, or close to the vertical depending on whether a slow advancing movement or a slow reversing movement in water is desired. This movement can thus easily and efficiently adjust a stationary treading or advancing walking movement in water with easy and natural movements of the swimmer's limbs. Moreover, on the return movement, the hydrofoil blades5are further configured to produce a lift force Fliftto lift upwards the swimmer. The fins according to embodiments of the invention can thus be used to allow the swimmer to perform a movement that is similar to a walking or running movement on land. Such walking in water movements may be used in physiotherapy and other forms of exercise training or may be used in water sports. A running or walking movement practiced in water without the present invention does not allow the swimmer to float or advance due to the symmetrical back and forth forces of oppositely moving legs. The fins according to embodiments of the invention however allow a swimmer both to advance and to rise in the water by practicing the movements of walking or running, due to the rotation of the hydrofoil blades induced by the movement of the fin, and the control of this rotation obtained with the bidirectional stops. The bidirectional angle stops7may be formed in various manners, comprising an abutment shoulder on one of the hydrofoil blade and support frame, and a corresponding guide slot on the other of the hydrofoil blade and support frame. The guide slot provides abutment shoulders at its ends to define the maximum angle of rotation β of the blade5. For instance, in the illustrated example, the bidirectional angle stops7comprise a slot10formed within the side bars8of the support frame4, and a pin11received within the slot10, the pin movable between a first end10aof the slot and a second end10bof the slot, defining the maximum angle of rotation β. The skilled person would however appreciate that various other configurations defining the stops and maximum angle of rotation of the blade with respect to the support frame may be implemented. It will also be appreciated that the number of juxtaposed hydrofoil blades5may be varied, for instance having a greater number of hydrofoil blades with shorter chord length or a lower number of blades each with an increased chord length, depending also on the overall length of the fin portion3. Moreover, within the scope of the invention, it may be appreciated that there may be more than a single hydrofoil blade extending across the width of the fin portion, for instance the support frame may comprise a central support bar such that there are a pair of blades aligned along the pivot axis. 
However, for an optimal cost to performance ratio, the number of hydrofoil blades is preferably in a range of 3 to 6, for instance 4 or 5. 
LIST OF REFERENCES USED 
Aquatic activity fin 1
Foot support portion 2 (sock, strap, etc.)
Fin portion 3
support frame 4
side bars 8
hydrofoil blade 5
hydrofoil surface 9
leading edge 9a
leading edge radius R1
trailing edge 9b
pivot coupling 6
pivot bearing rod/pin 6a
pivot bearing orifice 6b
pivot axis A
bidirectional angle stops 7
slot 10
first and second ends 10a, 10b
pin 11
Water Flow Direction V
base plane P
(maximum) blade pivot angle β
angle of incidence α
blade chord length L
Axis to leading edge L1
Axis to trailing edge L2 (L1+L2=L)
Blade leading edge maximum thickness D2 (diameter D2)
distance between pivots D
pivot to stop distance R3 (radius R3)
Lift force Flift
Drag force Fdrag
Thrust force Fthrust
hydrofoil profile
chord line C
mean camber line
10,989
11857845
DETAILED DESCRIPTION OF THE INVENTION The primary advantages of functionalized aluminosilicate used in golf ball applications are as follows: increased mixing efficiency; shorter processing cycle time; better dispersion of core ingredients; increased toughness and tear resistance; and improved impact durability. The core, mantle, and cover layer(s) of the golf ball preferably have a composition comprising: aluminosilicate microspheres whose average diameter is less than 50 μm, functionalized with, but not limited to, polysulfide, vinyl, amino, epoxy, hydroxyl, carboxyl, methacryloyl, hydrocarbon, mercapto and isocyanate. The aluminosilicate microspheres preferably have an average diameter less than 50 μm and are functionalized with a polysulfide, a vinyl, an amino, an epoxy, a hydroxyl, a carboxyl, a methacryloyl, a hydrocarbon, a mercapto or an isocyanate. The functionalized aluminosilicate microspheres composition is preferably blended with 1,4 polybutadiene and is not more than 20 phr based on 100 phr of polybutadiene rubber. The blend preferably contains free radical formers such as sulfur, azo compound, organic peroxide or a combination of those, with or without the presence of a co-crosslinking agent, such as ZnO and ZD(M)A. The compositions can further contain peptizers, accelerators, inhibitors, activators, colorants, foaming agents, and organic, inorganic or metallic fillers or fibers including graphene and nanotube. The mixing methods include a two-roll mill, a Banbury mixer, or an extruder. The composition can be crosslinked by any conventional crosslinking method(s), such as by applying thermal energy, irradiation, and combinations of those. The afore-mentioned blend alternatively contains renewable and bio-based fillers having a particle size of less than 40 μm. Examples of renewable fillers include, but are not limited to, eggshells, carbon fly ash, processing tomato peels, and guayule bagasse. A more thorough description of renewables is disclosed in Jeon, U.S. patent application Ser. No. 16/717,797, filed on Dec. 17, 2019, for Renewable Fillers For Golf Ball Applications, which is hereby incorporated by reference in its entirety. A masterbatch form of the functionalized aluminosilicate microspheres composition is alternatively used for a rubber mixing process. The functionalized aluminosilicate microspheres composition and/or the blend are alternatively further blended with non- or partially neutralized copolymeric or terpolymeric ionomer(s) to form a functionalized aluminosilicate microspheres modified resin. The functionalized aluminosilicate microspheres modified resin can be further neutralized using various types of metal cations. Examples of metal cations include, but are not limited to, acetate, oxide, or hydroxide salts of lithium, calcium, zinc, sodium, potassium, magnesium, nickel, manganese, or mixtures thereof. In yet another alternative embodiment, the functionalized aluminosilicate microspheres composition and/or the functionalized aluminosilicate microspheres modified resin are blended with highly neutralized ionomer. 
In yet another alternative embodiment, the functionalized aluminosilicate microspheres composition and/or the functionalized aluminosilicate microspheres modified resin are blended with non-ionomeric polymer(s) such as, but not limited to, thermoplastic elastomer, thermoplastic polyester, polyamide, polyamide copolymer, liquid crystalline polymer, dynamically vulcanized thermoplastic elastomers, polyetherester elastomers, polyesterester elastomers, polyetheramide elastomers, propylene-butadiene copolymers, modified copolymers of ethylene and propylene, styrenic copolymers including styrenic block copolymers and randomly distributed styrenic copolymers such as styrene-isobutylene copolymers, ethylene-vinyl acetate copolymers (EVA), 1,2-polybutadiene, and styrene-butadiene copolymers, hydrogenated styrene-butadiene copolymers, and polyether or polyester thermoplastic urethanes. In yet another embodiment, any of the afore-mentioned materials can further comprise colorants, UV-stabilizers, anti-oxidants, fluorescent-whitening agents, processing aids, organic, inorganic, or metallic fillers and fibers, and mold-release agents. Any of the afore-mentioned materials can be prepared by melt mixing. Examples of melt-mixing equipment are a roll mill, internal mixer, single-screw extruder, twin-screw extruder, or any combination of those. FIGS.1,3,4and5illustrate a five piece golf ball10comprising an inner core12a, an outer core12b, an inner mantle14a, an outer mantle14b, and a cover16, wherein any of the layers comprises a functionalized aluminosilicate microspheres composition and/or a functionalized aluminosilicate microspheres modified resin. FIG.5Aillustrates a five piece golf ball10comprising an inner core12a, an intermediate core12b, an outer core12c, a mantle14, and a cover16, wherein any of the layers comprises a functionalized aluminosilicate microspheres composition and/or a functionalized aluminosilicate microspheres modified resin. FIGS.8and9illustrate a six piece golf ball10comprising an inner core12a, an intermediate core12b, an outer core12c, an inner mantle14a, an outer mantle14b, and a cover16, wherein any of the layers comprises a functionalized aluminosilicate microspheres composition and/or a functionalized aluminosilicate microspheres modified resin. FIG.10illustrates a four-piece golf ball comprising a dual core, a mantle layer and a cover, wherein any of the layers comprises a functionalized aluminosilicate microspheres composition and/or a functionalized aluminosilicate microspheres modified resin. FIG.11illustrates a three piece golf ball comprising a core, a mantle layer and a cover, wherein any of the layers comprises a functionalized aluminosilicate microspheres composition and/or a functionalized aluminosilicate microspheres modified resin. The mantle component is preferably composed of the inner mantle layer and the outer mantle layer. The mantle component preferably has a thickness ranging from 0.05 inch to 0.15 inch, and more preferably from 0.06 inch to 0.08 inch. The outer mantle layer is preferably composed of a blend of ionomers and functionalized aluminosilicate microspheres modified resin. One preferred embodiment comprises a SURLYN 9150 material, a SURLYN 8940 material, a SURLYN AD1022 material, and a masterbatch. SURLYN 8320, from DuPont, is a very-low modulus ethylene/methacrylic acid copolymer with partial neutralization of the acid groups with sodium ions. 
SURLYN 8945, also from DuPont, is a high acid ethylene/methacrylic acid copolymer with partial neutralization of the acid groups with sodium ions. SURLYN 9945, also from DuPont, is a high acid ethylene/methacrylic acid copolymer with partial neutralization of the acid groups with zinc ions. SURLYN 8940, also from DuPont, is an ethylene/methacrylic acid copolymer with partial neutralization of the acid groups with sodium ions. The inner mantle layer is preferably composed of a blend of ionomers, preferably comprising a terpolymer and at least two high acid (greater than 18 weight percent) ionomers neutralized with sodium, zinc, magnesium, or other metal ions. The material for the inner mantle layer preferably has a Shore D plaque hardness ranging preferably from 35 to 77, more preferably from 36 to 44, and most preferably approximately 40. The thickness of the outer mantle layer preferably ranges from 0.025 inch to 0.050 inch, and is more preferably approximately 0.037 inch. The mass of an insert including the dual core and the inner mantle layer preferably ranges from 32 grams to 40 grams, more preferably from 34 to 38 grams, and is most preferably approximately 36 grams. The inner mantle layer is alternatively composed of an HPF material available from DuPont. Alternatively, the inner mantle layer14ais composed of a material such as disclosed in Kennedy, III et al., U.S. Pat. No. 7,361,101 for a Golf Ball And Thermoplastic Material, which is hereby incorporated by reference in its entirety. The outer mantle layer is preferably composed of a blend of ionomers and a functionalized aluminosilicate microspheres modified resin. The blend also preferably includes a masterbatch. The material of the outer mantle layer preferably has a Shore D plaque hardness ranging preferably from 55 to 75, more preferably from 65 to 71, and most preferably approximately 67. The thickness of the outer mantle layer preferably ranges from 0.025 inch to 0.040 inch, and is more preferably approximately 0.030 inch. The mass of the entire insert including the core, the inner mantle layer and the outer mantle layer preferably ranges from 38 grams to 43 grams, more preferably from 39 to 41 grams, and is most preferably approximately 41 grams. In an alternative embodiment, the inner mantle layer is preferably composed of a blend of ionomers, preferably comprising at least two high acid (greater than 18 weight percent) ionomers neutralized with sodium, zinc, or other metal ions. The blend of ionomers also preferably includes a masterbatch. In this embodiment, the material of the inner mantle layer has a Shore D plaque hardness ranging preferably from 55 to 75, more preferably from 65 to 71, and most preferably approximately 67. The thickness of the outer mantle layer preferably ranges from 0.025 inch to 0.040 inch, and is more preferably approximately 0.030 inch. Also in this embodiment, the outer mantle layer is preferably composed of a blend of ionomers and methyl methacrylate, butadiene, and styrene (MBS), with a weight percentage of MBS ranging from 5 to 15 weight percent of the mantle layer. In this embodiment, the material for the outer mantle layer14bpreferably has a Shore D plaque hardness ranging preferably from 35 to 77, more preferably from 36 to 44, and most preferably approximately 40. The thickness of the outer mantle layer preferably ranges from 0.025 inch to 0.100 inch, and more preferably ranges from 0.070 inch to 0.090 inch. 
In yet another embodiment wherein the inner mantle layer is thicker than the outer mantle layer and the outer mantle layer is harder than the inner mantle layer, the inner mantle layer is composed of a blend of ionomers, preferably comprising a terpolymer and at least two high acid (greater than 18 weight percent) ionomers neutralized with sodium, zinc, magnesium, or other metal ions. In this embodiment, the material for the inner mantle layer has a Shore D plaque hardness ranging preferably from 30 to 77, more preferably from 30 to 50, and most preferably approximately 40. In this embodiment, the material for the outer mantle layer has a Shore D plaque hardness ranging preferably from 40 to 77, more preferably from 50 to 71, and most preferably approximately 67. In this embodiment, the thickness of the inner mantle layer preferably ranges from 0.030 inch to 0.090 inch, and the thickness of the outer mantle layer ranges from 0.025 inch to inch. Preferably the inner core has a diameter ranging from 0.75 inch to 1.20 inches, more preferably from 0.85 inch to 1.05 inch, and most preferably approximately 0.95 inch. Preferably the inner core12ahas a Shore D hardness ranging from 20 to 50, more preferably from 25 to 40, and most preferably approximately 35. Preferably the inner core has a mass ranging from 5 grams to 15 grams, more preferably from 7 grams to 10 grams, and most preferably approximately 8 grams. Preferably the outer core has a diameter ranging from 1.25 inch to 1.55 inches, more preferably from 1.40 inch to 1.5 inch, and most preferably approximately 1.5 inch. Preferably the outer core has a Shore D surface hardness ranging from 40 to 65, more preferably from 50 to 60, and most preferably approximately 56. Preferably the outer core is formed from a polybutadiene, zinc diacrylate, zinc oxide, zinc stearate, a peptizer and peroxide. Preferably the combined inner core and outer core have a mass ranging from 25 grams to 35 grams, more preferably from 30 grams to 34 grams, and most preferably approximately 32 grams. Preferably the inner core has a deflection of at least 0.230 inch under a load of 220 pounds, and the core has a deflection of at least 0.080 inch under a load of 200 pounds. As shown inFIGS.6and7, a mass50is loaded onto an inner core and a core. As shown inFIGS.6and7, the mass is 100 kilograms, approximately 220 pounds. Under a load of 100 kilograms, the inner core preferably has a deflection from 0.230 inch to inch. Under a load of 100 kilograms, preferably the core has a deflection of 0.08 inch to 0.150 inch. Alternatively, the load is 200 pounds (approximately 90 kilograms), and the deflection of the core12is at least 0.080 inch. Further, a compressive deformation from a beginning load of 10 kilograms to an ending load of 130 kilograms for the inner core ranges from 4 millimeters to 7 millimeters and more preferably from 5 millimeters to 6.5 millimeters. The dual core deflection differential allows for low spin off the tee to provide greater distance, and high spin on approach shots. In an alternative embodiment of the golf ball shown inFIG.5A, the golf ball10comprises an inner core12a, an intermediate core12b, an outer core12c, a mantle14and a cover16. The golf ball10preferably has a diameter of at least 1.68 inches, a mass ranging from 45 grams to 47 grams, a COR of at least 0.79, and a deformation under a 100 kilogram loading of at least 0.07 mm. In one embodiment, the golf ball comprises a core, a mantle layer and a cover layer. 
The core comprises an inner core sphere, an intermediate core layer and an outer core layer. The inner core sphere has a diameter ranging from 0.875 inch to 1.4 inches. The intermediate core layer is composed of a highly neutralized ionomer and has a Shore D hardness less than 40. The outer core layer is composed of a highly neutralized ionomer and has a Shore D hardness less than 45. A thickness of the intermediate core layer is greater than a thickness of the outer core layer. The mantle layer is disposed over the core, comprises an ionomer material and has a Shore D hardness greater than 55. The cover layer is disposed over the mantle layer, comprises a thermoplastic polyurethane material and has a Shore A hardness less than 100. The golf ball has a diameter of at least 1.68 inches. The mantle layer is harder than the outer core layer, the outer core layer is harder than the intermediate core layer, the intermediate core layer is harder than the inner core sphere, and the cover layer is softer than the mantle layer. In another embodiment, shown inFIGS.8and9, the golf ball10has a multi-layer core and multi-layer mantle. The golf ball includes a core, a mantle component and a cover layer. The core comprises an inner core sphere, an intermediate core layer and an outer core layer. The intermediate core layer is composed of a highly neutralized ionomer and has a Shore D hardness less than 40. The outer core layer is composed of a highly neutralized ionomer and has a Shore D hardness less than 45. A thickness of the intermediate core layer is greater than a thickness of the outer core layer12c. The inner mantle layer is disposed over the core, comprises an ionomer material and has a Shore D hardness greater than 55. The outer mantle layer is disposed over the inner mantle layer, comprises an ionomer material and has a Shore D hardness greater than 60. The cover layer is disposed over the mantle component, comprises a thermoplastic polyurethane material and has a Shore A hardness less than 100. The golf ball has a diameter of at least 1.68 inches. The outer mantle layer is harder than the inner mantle layer, the inner mantle layer is harder than the outer core layer, the outer core layer is harder than the intermediate core layer, the intermediate core layer is harder than the inner core sphere, and the cover layer is softer than the outer mantle layer. EXAMPLES Polybutadiene based cores were made using the following materials. Corresponding levels (by wt %) are mentioned next to each material: Polybutadiene with more than 60% 1,4-cis structure-(40-90%); Polyisoprene-(1-30%); Zinc diacrylate-(10-50%); Zinc oxide-(1-30%); Zinc stearate-(1-20%); Peroxide initiator-(0.1-10%); Zinc pentachlorothiophenol-(0-10%); Color-(0-10%); Barium sulfate-(0-20%). Dual cores were made by compression molding two outer core halves around an already molded inner core having a diameter of approximately 0.940″ and a soft compression of approximately 0.200 inches of deflection under a 200 lb load. Curing of the outer core was done at temperatures ranging between 150-400° F. for times ranging from 1-30 minutes. After molding, the dual cores were spherically ground to approximately 1.554″ prior to testing. Tables 1 and 2 give details of the recipes of the inner and outer cores. Components from these recipes were mixed in an internal mixer. Optionally, additional mixing was done using a two roll mill. 
Compression of the outer core is measured by first making a full size core separately, measuring its compression, and then molding two halves around the inner core to complete the dual core. Compression differential describes the difference between the outer core compression (as molded independently) and the inner core compression. A higher compression differential is more susceptible to cracking upon impact.
TABLE ONE - Inner Core Formula
Components                                % wt
Polybutadiene rubber                      69.2
Polyisoprene rubber                        0.0
Zinc diacrylate                           14.8
Zinc oxide                                12.2
Zinc stearate                              2.1
Peroxide initiator                         1.0
Zinc pentachlorothiophenol                 0.6
Color                                      0.1
Barium sulfate                             0.0
Properties
Compression                               0.222
TABLE TWO - Outer Core Formula
Components                                % wt
Polybutadiene rubber                      62.6
Polyisoprene rubber                        0.0
Zinc diacrylate                           19.9
Zinc oxide                                 6.3
Zinc stearate                              3.8
Peroxide initiator                         0.5
Zinc pentachlorothiophenol                 0.6
Color                                      0.3
Barium sulfate                             6.4
Properties of outer core
Compression
COR (coefficient of restitution)          0.800
Properties of dual core built from inner and outer core
Compression                               47.7
COR (coefficient of restitution @125 fps) 0.789
Compression is measured by applying a 200 pound load to the core and measuring its deflection, in inches. Compression=180−(deflection*1000). Mantles were molded on top of the dual cores using an injection molding process. Mantles were made of polyethylene ionomers sold under the trade name Surlyn by DuPont. MBS modified Surlyn can be made by physically blending a mixture of MBS and Surlyn or extruding a mixture of MBS and Surlyn. A twin-screw extruder can be used for the extrusion process. Thickness of the mantle can vary from 0.010 to 0.050 inches.
TABLE THREE - Mantle Layer Formula
Description                                Control mantle   Formula 1
Group no                                   P50813           P50814
Surlyn 1 (%)                               9                8.1
Surlyn 2 (%)                               45.5             40.95
Surlyn 3 (%)                               45.5             40.95
MBS (%)                                    0                10
Compression                                64               64
COR (coefficient of restitution @175 fps)  0.805            0.803
Durability score or mean time to fail
MTTF (number of shots after which ball
starts to crack/fail)                      22.8             42.4
Compression is measured by applying a 200 pound load to the core and measuring its deflection, in inches. Compression=180−(deflection*1000). Durability testing of the mantle layers: mantles were shot at 175 fps in a pneumatic testing machine (PTM). For each formula mentioned in Table 3, 12 mantles were tested. The number of shots after which each core cracked was recorded, and the cracked core was removed from the remainder of the test. The data was reported using a Weibull plot, and the mean time to failure was reported as shown in Table 3. As seen inFIG.12, MBS modified mantles endured more shots before failure compared to mantles with no MBS. It is reasonable to assume that the durability of a golf ball having a dual core of this design will also experience a dramatic increase in crack durability based on this improvement. A thermoplastic polyurethane (TPU) cover was injection molded on top of the mantles. Balls with the TPU cover were then painted using polyurethane coatings. The polyurethane coating was heat cured at high temperature for a few minutes. Thickness of the cover can vary from 0.005 to 0.050 inches. In a particularly preferred embodiment of the invention, the golf ball preferably has an aerodynamic pattern such as disclosed in Simonds et al., U.S. Pat. No. 7,419,443 for a Low Volume Cover For A Golf Ball, which is hereby incorporated by reference in its entirety. Alternatively, the golf ball has an aerodynamic pattern such as disclosed in Simonds et al., U.S. Pat. No. 
7,338,392 for An Aerodynamic Surface Geometry For A Golf Ball, which is hereby incorporated by reference in its entirety. Various aspects of the golf balls of the present invention have been described in terms of certain tests or measuring procedures. These are described in greater detail as follows. As used herein, "Shore D hardness" of the golf ball layers is measured generally in accordance with ASTM D-2240 type D, except the measurements may be made on the curved surface of a component of the golf ball, rather than on a plaque. If measured on the ball, the measurement will indicate that the measurement was made on the ball. In referring to a hardness of a material of a layer of the golf ball, the measurement will be made on a plaque in accordance with ASTM D-2240. Furthermore, the Shore D hardness of the cover is measured while the cover remains over the mantles and cores. When a hardness measurement is made on the golf ball, the Shore D hardness is preferably measured at a land area of the cover. As used herein, "Shore A hardness" of a cover is measured generally in accordance with ASTM D-2240 type A, except the measurements may be made on the curved surface of a component of the golf ball, rather than on a plaque. If measured on the ball, the measurement will indicate that the measurement was made on the ball. In referring to a hardness of a material of a layer of the golf ball, the measurement will be made on a plaque in accordance with ASTM D-2240. Furthermore, the Shore A hardness of the cover is measured while the cover remains over the mantles and cores. When a hardness measurement is made on the golf ball, Shore A hardness is preferably measured at a land area of the cover. The resilience or coefficient of restitution (COR) of a golf ball is the constant "e," which is the ratio of the relative velocity of an elastic sphere after direct impact to that before impact. As a result, the COR ("e") can vary from 0 to 1, with 1 being equivalent to a perfectly or completely elastic collision and 0 being equivalent to a perfectly or completely inelastic collision. COR, along with additional factors such as club head speed, club head mass, ball weight, ball size and density, spin rate, angle of trajectory and surface configuration as well as environmental conditions (e.g. temperature, moisture, atmospheric pressure, wind, etc.) generally determine the distance a ball will travel when hit. Along this line, the distance a golf ball will travel under controlled environmental conditions is a function of the speed and mass of the club and size, density and resilience (COR) of the ball and other factors. The initial velocity of the club, the mass of the club and the angle of the ball's departure are essentially provided by the golfer upon striking. Since club head speed, club head mass, the angle of trajectory and environmental conditions are not determinants controllable by golf ball producers and the ball size and weight are set by the U.S.G.A., these are not factors of concern among golf ball manufacturers. The factors or determinants of interest with respect to improved distance are generally the COR and the surface configuration of the ball. The coefficient of restitution is the ratio of the outgoing velocity to the incoming velocity. 
In the examples of this application, the coefficient of restitution of a golf ball was measured by propelling a ball horizontally at a speed of 125+/−5 feet per second (fps) and corrected to 125 fps against a generally vertical, hard, flat steel plate and measuring the ball's incoming and outgoing velocity electronically. Speeds were measured with a pair of ballistic screens, which provide a timing pulse when an object passes through them. The screens were separated by 36 inches and are located 25.25 inches and 61.25 inches from the rebound wall. The ball speed was measured by timing the pulses from screen 1 to screen 2 on the way into the rebound wall (as the average speed of the ball over 36 inches), and then the exit speed was timed from screen 2 to screen 1 over the same distance. The rebound wall was tilted 2 degrees from a vertical plane to allow the ball to rebound slightly downward in order to miss the edge of the cannon that fired it. The rebound wall is solid steel. As indicated above, the incoming speed should be 125±5 fps but corrected to 125 fps. The correlation between COR and forward or incoming speed has been studied and a correction has been made over the ±5 fps range so that the COR is reported as if the ball had an incoming speed of exactly 125.0 fps. The measurements for deflection, compression, hardness, and the like are preferably performed on a finished golf ball as opposed to performing the measurement on each layer during manufacturing. Preferably, in a five layer golf ball comprising an inner core, an outer core, an inner mantle layer, an outer mantle layer and a cover, the hardness/compression of layers involve an inner core with the greatest deflection (lowest hardness), an outer core (combined with the inner core) with a deflection less than the inner core, an inner mantle layer with a hardness less than the hardness of the combined outer core and inner core, an outer mantle layer that is the hardest layer of the golf ball, and a cover with a hardness less than the hardness of the outer mantle layer. These measurements are preferably made on a finished golf ball that has been torn down for the measurements. Preferably the inner mantle layer is thicker than the outer mantle layer or the cover layer. The dual core and dual mantle golf ball creates an optimized velocity-initial velocity ratio (Vi/IV), and allows for spin manipulation. The dual core provides for increased core compression differential resulting in a high spin for short game shots and a low spin for driver shots. A discussion of the USGA initial velocity test is disclosed in Yagley et al., U.S. Pat. No. 6,595,872 for a Golf Ball With High Coefficient Of Restitution, which is hereby incorporated by reference in its entirety. Another example is Bartels et al., U.S. Pat. No. 6,648,775 for a Golf Ball With High Coefficient Of Restitution, which is hereby incorporated by reference in its entirety. 
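The two measurements relied on in the examples can be summarized with a short sketch: the coefficient of restitution as the ratio of rebound speed to incoming speed, both derived from the 36 inch ballistic-screen timing, and the compression value from the stated deflection relationship (Compression = 180 − deflection × 1000). The timing figures below are illustrative assumptions, and the empirical correction to exactly 125 fps described above is not modeled.

```python
# Illustrative only: COR from ballistic-screen timing pulses and compression from
# measured deflection, using the relationships stated in this description. The
# example timing values are assumed; the correction to exactly 125 fps is omitted.

SCREEN_SEPARATION_FT = 3.0   # the screens are 36 inches (3 feet) apart

def speed_fps(pulse_interval_s: float) -> float:
    """Average speed over the 36 inch screen spacing."""
    return SCREEN_SEPARATION_FT / pulse_interval_s

def coefficient_of_restitution(inbound_interval_s: float, outbound_interval_s: float) -> float:
    """COR = outgoing speed / incoming speed."""
    return speed_fps(outbound_interval_s) / speed_fps(inbound_interval_s)

def compression(deflection_in: float) -> float:
    """Compression = 180 - (deflection in inches * 1000), per the examples above."""
    return 180.0 - deflection_in * 1000.0

if __name__ == "__main__":
    # example: inbound 3 ft in 0.0240 s (~125 fps), outbound 3 ft in 0.0304 s (~98.7 fps)
    cor = coefficient_of_restitution(0.0240, 0.0304)
    print(f"COR ~ {cor:.3f}")   # ~0.789, similar to the dual core value reported above
    print(f"compression for 0.1323 in deflection: {compression(0.1323):.1f}")  # 47.7
```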
Alternatively, the cover16is composed of a thermoplastic polyurethane/polyurea material. One example is disclosed in U.S. Pat. No. 7,367,903 for a Golf Ball, which is hereby incorporated by reference in its entirety. Another example is Melanson, U.S. Pat. No. 7,641,841, which is hereby incorporated by reference in its entirety. Another example is Melanson et al, U.S. Pat. No. 7,842,211, which is hereby incorporated by reference in its entirety. Another example is Matroni et al., U.S. Pat. No. 7,867,111, which is hereby incorporated by reference in its entirety. Another example is Dewanjee et al., U.S. Pat. No. 7,785,522, which is hereby incorporated by reference in its entirety. Bartels, U.S. Pat. No. 9,278,260, for a Low Compression Three-Piece Golf Ball With An Aerodynamic Drag Rise At High Speeds, is hereby incorporated by reference in its entirety. Chavan et al, U.S. Pat. No. 9,789,366, for a Graphene Core For A Golf Ball, is hereby incorporated by reference in its entirety. Chavan et al, U.S. patent application Ser. No. 15/705,011, filed on Sep. 14, 2017, for a Graphene Core For A Golf Ball, is hereby incorporated by reference in its entirety. Chavan et al, U.S. patent application Ser. No. 15/729,231, filed on Oct. 10, 2017, for a Graphene And Nanotube Reinforced Golf Ball, is hereby incorporated by reference in its entirety. From the foregoing it is believed that those skilled in the pertinent art will recognize the meritorious advancement of this invention and will readily understand that while the present invention has been described in association with a preferred embodiment thereof, and other embodiments illustrated in the accompanying drawings, numerous changes, modifications and substitutions of equivalents may be made therein without departing from the spirit and scope of this invention which is intended to be unlimited by the foregoing except as may appear in the following appended claims. Therefore, the embodiments of the invention in which an exclusive property or privilege is claimed are defined in the following appended claims.
29,525
11857846
DESCRIPTION OF THE INVENTION In the following detailed descriptions of the training ball, reference is made to the accompanying drawings that form a part hereof, and in which are shown, by way of illustration, specific embodiments in which the invention may be practiced. It is to be understood that other embodiments may be utilized and structural changes may be made or other method steps and sequence thereof may be used without departing from the scope of the present invention. The training ball is herein described as used in sporting, baseball and softball training environments. The ball can have uses in other environments when recreational activity in confined areas is desired. Referring toFIGS.1to5, there is shown a stitched leather baseball-like ball indicated generally at10useable for baseball training and playing modified baseball games in confined areas with minimal players and equipment. Ball10has an outer skin, shell or cover11of synthetic leather to provide a leather-like finish, appearance and feel. Cover11can also be made of cowhide leather and other artificial leathers as desired. Ball10has a diameter between 2⅜ and 3 inches and a circumference between 9 and 9¼ inches in conformance with current Major League Baseball rule requirements for ball size. The weight of ball10is substantially less than the Major League Baseball ball weight requirement of 5 ounces to 5¼ ounces. The relatively light weight of ball10allows ball10to be easily thrown and curved, and to leave a bat with less velocity and fly shorter distances when hit, suitable for playing modified versions of the game of baseball, backyard baseball, and practicing or training for baseball. Preferably, ball10has a weight between 1¼ ounces and 1% ounces. Ball10can have other sizes and weights as desired, such as the size of an 11-inch or 12-inch regulation softball and a weight substantially less than the weight of a standard softball of 6¼ ounces to 7 ounces. Cover11is formed from two figure-8 shaped sections12and13. Outer edges14and16of cover sections12and13have a plurality of holes17and18for receiving stitches19. Ball10has a plurality of hand-stitched double stitches19, such as ninety-six double stitches, in cover11. Preferably, cover11is a synthetic polyurethane leather covering coated with white coloring and includes a textured outer finish to simulate a standard baseball. Cover11can also be made to have a covering coated with yellow coloring to simulate a standard softball. The synthetic polyurethane leather covering material of cover11increases durability of ball10, and resists dirt and water allowing cover11to maintain its color and shape, and is not cost prohibitive as genuine leather covering materials tend to be. Figure-8 shaped sections12and13of cover11are hand-stitched together with stitches19to form raised seams21. Stitches19are preferably red stitching made of 100% cotton. Other materials with various colors can be used to make stitches19. For example, stitches19can be linen string stitches and can have black, blue or monochrome coloring to match the color of cover11. Raised seams21are elevated off the surface of ball10. As such, seams21grip air currents as ball10flies, causing ball10to swerve to the right side, to the left side, downward, upward, or knuckle, or a combination thereof. Whether ball10moves sharply or gradually depends on the direction and speed ball10is thrown and how raised seams21have been made to spin by a pitcher. 
The height of seams21also affects the type and amount of movement of ball10pitched by the pitcher to a batter. Seams21of ball10can be raised higher for use by beginner pitchers, recreational play and instructional purposes to facilitate exaggerated movement of ball10when pitched. As shown inFIGS.6to10, ball10has a lightweight resilient plastic round inner core22having a hollow generally round center cavity23. Core22is a polyethylene hollow spherical shaped member that absorbs almost no moisture. Core22has a spherical shell26adapted to absorb energy when it is compressed and deformed elastically and release compression energy upon unloading. Shell26has an outer spherical surface27having a middle seam29. A hole31extends through seam29and is in communication with center cavity23to allow air to move into and out of center cavity23when shell26is compressed, such as when ball10is batted with a bat. Shell26has a rounded concave curved inner surface28opposite outer spherical surface27that surrounds center cavity23. Core22is preferably formed by using a blow molding process whereby the thickness of shell26is varied and tapered, such as between 1 millimeter and 4 millimeters, to allow compression energy to be selectively released and thereby control the flight of ball10and the distance ball10travels when hit or thrown. As seen inFIGS.9and10, the thickness of shell26tapers and is greater along seam29than the thickness of shell26outwardly from seam29. Blow molding core22minimizes soft spots in shell26and ensures a uniform spherical outer surface of shell26. Core22can also be made whereby shell26has a uniform thickness, such as by using an injection molding process to make shell26have a thickness that is uniform. Center cavity23of core22and the polyethylene plastic material of core shell26provide ball10with its relatively light weight and resiliency. Synthetic leather cover11is stitched onto core22to enclose core22whereby ball10has a look and feel similar to an official Major League Baseball ball or a standard softball. Preferably, a finished ball10weighs 1¼ to 1% ounces and measures not less than 9 inches and not more than 9¼ inches in circumference so as to simulate a standard baseball but having a noticeably lighter weight than a standard baseball. The hollow centered core22reduces the overall weight of ball10whereby ball10is relatively lightweight and easily thrown and curved, and when hit travels less distance. Ball10can have other weights and measures as desired. For example, ball10can be made to measure in circumference substantially similar to the softball ball sizes mandated by the United States Specialty Sport Association and other softball organizations and have a weight substantially less than that of a standard softball. A thin layer of glue or adhesive24is applied to and coats the outer spherical surface27of shell26before sewing cover sections12and13together to enclose core22. Adhesive24is a commercial grade adhesive that adheres the two stitched figure-8 shaped sections12and13of cover11to the outer spherical surface27of shell26. A thin layer of latex25similar to a balloon can also be placed or wrapped around core22whereby core22is covered with a layer of latex25prior to stitching sections12and13together to enclose core22. In use, center cavity23and the varied thickness of shell26of core22impact the performance of ball10, causing ball10to travel less when hit. 
Raised seams21and the light weight of ball10affect movement of ball10as seams21grip air currents, causing ball10to swerve to the right side, to the left side, downward, upward, or knuckle, or a combination thereof as ball10travels from a pitcher to a batter. Ball10moves sharply or gradually depending on the direction and speed ball10is thrown and how raised seams21have been made to spin by the pitcher. The hand-stitched synthetic polyurethane leather covering material of cover11provides ball10with the look and feel of an official baseball, increases durability of ball10, and resists dirt and water allowing cover11to maintain its color and shape. The baseball-like training balls illustrated and described include several embodiments of the invention. Variations and modifications of the ball and ball materials can be made by a person skilled in the art without departing from the invention.
7,796
11857847
DETAILED DESCRIPTION OF THE INVENTION Golf balls often include printed markings at various locations on the surface. There are several printing methods for applying the markings, including pad printing and laser jet printing, for example. In pad printing, ink is deposited onto a plate and arranged in a pattern corresponding to the markings to be made on the golf ball. A pad contacts the plate and thereby receives the ink on the pad surface. The ink is then transferred from the pad to the golf ball by pressing the inked pad onto the golf ball to produce a stamp. A “stamp” or “marking,” as used herein, refers to the printed area produced by application of an ink-carrying pad to a surface of an item, such as a golf ball. A “single stamp” or “single marking” refers to a printed area produced by only one application of an ink-carrying pad onto the item. Pad printing is an indirect intaglio process. Depressions are created in a flat block called “the plate” or pad printing cliche. The depressions are filled with ink and a smooth, resilient stamp block of silicone rubber takes up ink from the plate and transfers it to the golf ball. An “etching pattern,” as used herein, refers to the wells and/or depressions in a printing plate arranged in a pattern corresponding to a desired marking to be ultimately printed on an item. In some embodiments, a pad printing process begins by spreading ink across the surface of a plate using a spatula. The ink is then scraped back into the ink reservoir using a doctor blade, which leaves ink in the depressions on the plate. Thinner evaporates from the ink lying in these depressions and the ink surface becomes tacky. As the pad passes over the depressions, ink will stick to the pad. As the pad lifts, it takes with it not only the tacky, adhering film, but also some of the more fluid ink underneath. This film of ink is carried to the target area on the dimpled golf ball surface. On the way, more of the thinner evaporates from the exposed surface of the ink on the silicone pad, and the ink surface facing away from the pad becomes tacky. As the pad is applied to the golf ball, the film of ink sticks to the ball surface, and separates from the pad as it is raised. FIG.1is a diagram of an exemplary pad printing process. The pad printing process includes a pad10, a printing plate12, and a golf ball14. The pad printing process generally includes an etching pattern16formed in the printing plate12. The etching pattern16may correspond to a stamp or marking18to be ultimately printed on the golf ball14. The etching pattern16may include depressions or wells formed in a surface of the printing plate12and a selected ink may fill the wells. The depressions or wells may have an etch depth, which may vary throughout the etching pattern16. In a first step, the pad10may be arranged above the etching pattern16on the printing plate12. The process continues with the pad10contacting the printing plate12such that the ink arranged in the etching pattern16is transferred to the surface of the pad10when the pad10is removed from the printing plate12. The golf ball14is then positioned beneath the pad10. The golf ball14may be aligned such that the ink on the pad10is directly above the portion of the surface of the ball to be stamped. The pad10is then moved into contact with the golf ball14to transfer the ink from the pad10to the surface of the golf ball14. The resulting stamped golf ball14includes a marking18that corresponds to the etching pattern16on the printing plate12.
The process may be repeated to print additional markings on the golf ball14, including markings at other locations by rotating the golf ball14before printing again. Disclosed embodiments may use any type of ink suitable for printing on a golf ball. There are numerous types of inks available within the printing industry, such as solvent evaporating inks, oxidation curing inks, reactive (catalyst curing or dual-component) inks, baking inks, UV curable inks, sublimation inks, and ceramic and glass inks. Solvent-based inks are predominant in the pad-printing industry, as they dry very rapidly through solvent evaporation alone. They are very versatile inks, as they are available in both gloss and matte finishes and perform very well with many thermoplastic substrates. Oxidative curing inks have limited uses in pad-printing applications due to their slow drying speed. They do, however, produce very tough, flexible, weather-resistant ink films and are very useful for printing onto metal and glass surfaces. It is possible to use 1-component inks because their long shelf life can make them easier to work with and more economical. Some 1-component inks are highly resistant to abrasion and solvents. Curing can take place physically or by oxidation. Dual-component inks are also used extensively in pad-printing and contain resins capable of polymerization. These inks cure very rapidly, especially when heated, and are generally good for printing on substrates such as metals, some plastics, and glass, and have very good chemical and abrasion resistance. The inks, though, do have a restricted shelf life once the polymerization catalyst has been added. With 2-component inks, curing typically takes place over about a 5-day period at a temperature of about 20° C., or over about a 10-minute period at a temperature of about 100° C. Ceramic and glass (thermo diffusion) inks are also used in the pad-printing industry. These inks are solid at room temperature and must be heated in the ink reservoir to a temperature greater than about 80° C. Unlike solvent evaporating inks, pad wetting occurs due to the cooling effect the pad has on the heated ink rather than because of the evaporation of solvent. Ink transfer occurs because the outer surface of the ink becomes tacky when exposed to air. The ink transfer is aided by the cooler surface of the substrate to be printed on. Ultraviolet ink can also be used in the present invention. UV inks are typically cured by means of UV light having wavelengths from about 180 nm to 380 nm. The advantages of using a UV ink are that they are fast and cure thoroughly, they are easy to use and are not affected by small changes in ambient conditions, they retain constant viscosity (i.e., they do not dry up quickly), and they use smaller amounts of combustible organic solvent, such that little or no solvent fumes escape into the working environment, making them environmentally safer. Small amounts of solvent may be added to the UV inks for certain applications to enable the ink to transfer in a conventional manner. The inks may optionally contain additives such as binders, reactive prepolymers, thinners, low-viscosity mono and poly-functional monomers, photoinitiators to stimulate polymerization, stabilizing additives, flow control agents, wetting agents, pigments, extenders, or combinations thereof. The film of ink is transferred to the predetermined three-dimensional surface. In a preferred embodiment, the surface is the dimpled surface of a golf ball.
In an alternative embodiment, other three-dimensional surfaces, such as golf clubs and golf shoes, are possible. The color logo or image may be printed over or under a clearcoat. Preferably, the color indicia is printed under the clearcoat. After the printing process is complete, the three-dimensional objects may be removed to a dry room to finally cure the ink used for the logo. The dry room is maintained at an elevated temperature to aid in drying the logo ink. The thickness of the ink film transferred to a golf ball can be any thickness that is sufficient to provide a clear image of the logo and can vary with the ink type and color. The thickness of the ink film is also influenced by the viscosity of the ink, the pad material, the depth of etching in the plate, and environmental factors, such as temperature, humidity, and so on. This thickness can be between about 5 μm and 75 μm, but is not limited thereto. While many stamp designs can be printed with a single pad hit onto the golf ball, there are some designs that cover a larger surface area of the golf ball and cannot be produced as one stamp. For example, a stamp design that extends more than approximately 60° around a great circle of a golf ball likely requires more than one pad hit to produce the entire marking. For example, a first stamp may cover 30-90° while a second stamp may cover an additional 30-90° in the same circumferential direction along a great circle of the golf ball to produce a stamp covering 60-180° of the great circle. In other embodiments, more than two stamps covering at least 30° each may be used to produce a linear marking extending up to 360° around a perimeter (e.g., a great circle or other continuous line) of the golf ball.FIG.2is an example of such a linear marking100in the form of a linear arrow having a first end section105, a middle section110, and a second end section115. As shown inFIGS.3A-3C, the marking100may be designed to extend approximately 180° around a centerline of a golf ball120. InFIG.3A, the first end section105terminates at a first location125of the golf ball120and transitions into the middle section110to continue around a great circle of the golf ball120, as shown inFIG.3B. The middle section110transitions into the second end section115, which terminates at a second location130of the golf ball120. In an exemplary embodiment, the first location125and the second location130are connected by an axis135of the golf ball120and thus are located 180° from each other around a great circle of the golf ball120. However, the first location125and the second location130may be any two points on the golf ball120. With the marking100extending to opposite sides of the golf ball120, it cannot be printed onto the golf ball120as a single stamp. Instead, multiple stamps applied at different sites on the golf ball120are necessary to create the marking100. FIGS.4A-4Cillustrate three separate markings140,145,150, respectively, that can be applied to a golf ball155to create the linear marking100.FIG.4Ais a front view of the golf ball155,FIG.4Bis a side view of the golf ball155, rotated 90° from the front view, andFIG.4Cis a rear view of the golf ball155, rotated 90° from the side view. The marking140is a first end section stamp, the marking145is a middle section stamp, and the marking150is a second end section stamp. When combined, the single markings140,145,150appear as the combined marking100shown inFIGS.3A-3C. 
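Because each pad hit reliably covers only a limited arc of a great circle, the number of stamps needed for a long linear marking such as the one of FIGS. 2-4C follows from simple arithmetic. The Python sketch below is illustrative only; the per-stamp arc and overlap values are assumptions for the example, not limits taken from this disclosure.

    import math

    # Hedged sketch (not from the patent): a single pad hit spans only a limited arc,
    # so a longer linear marking is split into several stamps whose transition areas
    # are re-printed at the same location for alignment.
    def stamps_needed(total_arc_deg, per_stamp_arc_deg=60.0, overlap_deg=0.0):
        """Minimum pad hits to span total_arc_deg when each stamp spans per_stamp_arc_deg
        and consecutive stamps re-print overlap_deg of arc at the same location."""
        if total_arc_deg <= per_stamp_arc_deg:
            return 1
        net_per_extra_stamp = per_stamp_arc_deg - overlap_deg
        return 1 + math.ceil((total_arc_deg - per_stamp_arc_deg) / net_per_extra_stamp)

    print(stamps_needed(45))                                        # 1: fits in a single hit
    print(stamps_needed(180))                                       # 3: cf. the three stamps of FIGS. 4A-4C
    print(stamps_needed(360, per_stamp_arc_deg=75, overlap_deg=5))  # 6: full great-circle marking with overlaps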
The markings140,145,150may be applied to the golf ball155separately using a pad printing process at three sites on the golf ball155. For example, a first pad may apply the marking140to a first site, a second pad may apply the marking145to a second site, and a third pad may apply the marking150to a third site. The golf ball155may be rotated between printing on the various sites or may remain stationary for pads printing from different angles. The three pads used for the markings140,145,150may be the same or different in various embodiments. In order to produce a continuous marking that does not appear to be made up of different stamps, certain overlap and/or knitting features may be included with the separate stamps to ensure the linear marking is consistent in its appearance and direction around the golf ball. Disclosed embodiments include designs for stamps and printing plates having overlap sections that aid in alignment of the single stamps relative to each other and produce combined markings that do not show evidence of being composed of multiple, separate stamps. For example, the disclosed embodiments include features that match an ink density between main printed areas, composed of a single stamp, and overlap printed areas, composed of multiple stamps, thereby rendering the main printed areas and overlap printed areas visually identical to a desired standard. InFIGS.4A-4C, markings140and145include transition printed areas160,165, respectively. The transition printed area160is positioned at an end of the marking140and is configured to overlap a first end170of the marking145. The transition printed area165is positioned at an end of the marking145and is configured to overlap a first end175of the marking150. The transition printed areas160,165provide a guide for aligning and connecting the markings140,145,150to form one continuous line. In addition, the overlap sections160,165include a “screened” appearance to inhibit excessive darkening of the area on the golf ball155where the overlap sections160,165are printed. While the first ends170,175are shown as color-matching the remainder of the markings140,145, respectively, these printed areas can also be considered transition printed areas because they overlap the transition printed areas160,165when all of the markings140,145,150are printed on the golf ball155. FIG.5Aillustrates a first linear marking200and a second linear marking205. The first linear marking200and the second linear marking205may be printed markings on a golf ball that create a single continuous linear marking, such as a visual alignment aid.FIG.5Bis an example of a linear marking208that may be produced by printing the first linear marking200and the second linear marking205. InFIGS.5A and5B, the vertical dotted lines are shown only as boundaries between stamp sections and do not represent printed markings. The first linear marking200includes a main printed area210and a transition printed area215. The transition printed area215is positioned at an end of the first linear marking200(the right end as shown inFIG.5A). The second linear marking205includes a main printed area220and a transition printed area225. The transition printed area225is positioned at an end of the second linear marking205adjacent to the transition printed area215of the first linear marking200(the left end as shown inFIG.5B). The transition printed areas215,225are configured to be printed at the same location on the golf ball to ensure alignment of the main printed areas210,220when both markings are printed.
InFIG.5B, the transition printed areas215,225are printed in the same location and the resulting appearance of an overlap printed area230matches an appearance of the main printed areas210,220to form the continuous linear marking208. In at least some embodiments, the transition printed areas215,225are configured such that the overlap printed area230does not appear darker than the main printed areas210,220. As a result, the combined linear marking208appears to be one continuous stamp on the golf ball. There are a variety of methods to quantify the appearance of printed ink. In an exemplary embodiment, the appearance of a marking is quantified using ink density, which is generally a measure of printed ink thickness for solid markings. Ink density can be expressed in units of microns. For example, a finished marking may include an ink density of approximately 5-75 μm. Ink density may be measured using a densitometer. Densitometer measurements (i.e., ink density measurements) are generally representative of a lightness or darkness of a solid marking and do not necessarily identify color. For example, a marking may have ink density measurements associated with each of the CMYK colors. As used herein, comparisons of ink density assume the same color is being measured for an even comparison. A spectrophotometer is another tool that can be used to quantify an appearance of printed markings. Spectrophotometers are configured to measure various quantifiable properties of a printed marking, including ink density, as well as reflective values, RGB color values, saturation values, etc. Consistent with disclosed embodiments, color standards based on spectrophotometer measurements may be established for determining whether two markings are sufficiently similar such that they have the same appearance. In one example, a spectrophotometer may be configured to output a delta E value, which is a measure of the difference in appearance between two printed markings. A delta E value of 1.0 may be established as a threshold for two markings being sufficiently similar such that an observer cannot identify a difference with a naked eye. Anything lower than 1.0 would be even more similar and thus also within the range of imperceptible difference. Delta E values greater than 1.0 indicate that two markings have differences in appearance (e.g., in color, intensity, darkness, etc.) that are perceptible to the naked eye of an observer. In some embodiments, the transition printed areas215,225include equally-sized printed areas (e.g., measured in square inches) so that one can be printed over another without changing a perimeter of the printed area. In other embodiments, the transition printed areas215,225may have interlocking shapes, such as a male/female connector design. The transition printed areas215,225may include a different printed appearance (e.g., coloring, shading, etc.) than one or more of the main printed areas210,220and/or the other transition printed area215,225. In some embodiments, one of the transition printed areas215,225may match the respectively adjacent main printed area210,220such that only one of the transition printed areas215,225has a different appearance. FIG.6is a flowchart of an exemplary process for printing a marking that is made up of more than one stamp and in which the stamps are applied at different sites on a golf ball. As used herein, a “site” on a golf ball is a surface region coverable by a single pad printing application.
Two “sites” may be considered different even if portions of the surface regions overlap, if at least some portions of the surface regions differ. For example, a pad may print on a first site of the golf ball, the golf ball may be rotated 45-90°, and the next stamp may be applied to a newly aligned second site of the golf ball. In step610, the golf ball is positioned for printing at a first site. In step620, a pad receives ink from a printing plate and applies the ink to the first site of the golf ball, thereby producing a first printed area on the golf ball at the first site. For example, the first linear marking200may be printed at the first site on the golf ball. In step630, the golf ball is positioned for printing at a second site. In one example, the golf ball is rotated for printing at the second site. For instance, the golf ball may be rotated 45-90°. In another example, the printing pad is rotated to print at the second site. In yet another example, a second printing pad is arranged to print at the second site, with or without rotating the golf ball. In step640, a pad receives ink from a printing plate and applies the ink to the second site of the golf ball, thereby producing a second printed area on the golf ball. For example, the second linear marking205may be printed at the second site on the golf ball, with the transition printed area225overlapping the transition printed area215to produce the combined linear marking208. The linear marking208as printed on the golf ball thus includes the main printed area210of the first linear marking200, the main printed area220of the second linear marking205, and the overlap printed area230. In step650, the golf ball and/or pads may be positioned again, and the printing process repeated as necessary. For example, the golf ball may be rotated an additional 45-90° for printing at a third site on the golf ball, such as to produce the linear marking100made up of three single markings and having two overlap printed areas. Further, while a linear marking is described, other combined markings may be produced using this process. For example, multiple colored stamps may be applied to a first site and a second site, with at least two of the different stamps producing overlap printed areas. As described herein, the disclosed embodiments contemplate overlap printed areas that have the same appearance as adjacent main printed areas of single markings such that an observer cannot easily identify an area where stamps are overlapped. In step660of the process600, a system may perform one or more quality control measurements to confirm that main printed areas match overlap printed areas. For example, a densitometer may measure an ink density of a main printed area of a first printed area and an ink density of an overlap printed area. The two values may be compared to determine whether the printed areas are sufficiently similar, such as whether the measured values fall within a specified tolerance. In another example, a spectrophotometer may be used to compare the printed areas. In one example, the spectrophotometer may measure a delta E value between a main printed area of a first printed area and an overlap printed area to determine whether sufficient similarity exists. In one example, a delta E value of 1.0 or less may be determined to be acceptable. The process may be repeated to compare additional printed areas on a golf ball.
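The acceptance logic of step660 reduces to two numeric comparisons: ink densities within a tolerance, and a delta E at or below 1.0. The Python sketch below is illustrative only; the 10% density tolerance and the helper names are assumptions, and only the delta E threshold of 1.0 comes from the description above.

    # Hedged sketch of the step 660 quality-control comparison. The 10% density
    # tolerance and the helper names are illustrative assumptions; only the
    # delta E <= 1.0 threshold is taken from the description above.
    DELTA_E_THRESHOLD = 1.0  # differences at or below this are treated as imperceptible

    def densities_match(main_density_um, overlap_density_um, rel_tolerance=0.10):
        """Compare densitometer readings (microns) for a main and an overlap printed area."""
        return abs(main_density_um - overlap_density_um) <= rel_tolerance * main_density_um

    def delta_e_acceptable(delta_e):
        """Spectrophotometer check: delta E of 1.0 or less is treated as a match."""
        return delta_e <= DELTA_E_THRESHOLD

    # Example readings for one main printed area and one overlap printed area.
    print(densities_match(main_density_um=16.0, overlap_density_um=17.2))  # True (about 7.5% apart)
    print(delta_e_acceptable(delta_e=0.8))                                 # True
    print(delta_e_acceptable(delta_e=1.4))                                 # False (visible difference)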
For example, the overlap printed area may be additionally compared to a main printed area of a second printed area to ensure consistency across an entire linear marking. In another example, multiple main printed areas of different stamps and/or multiple overlap printed areas may be compared to each other to determine whether sufficient similarity exists following a disclosed printing process. FIGS.7A-7Dinclude examples of pairs of printed areas240,250,260, and270, respectively. Each of the individual printed areas in the pairs240,250,260,270may be portions of a single stamp on a golf ball or an entirety of a single stamp. Any of the pairs240,250,260, and270may be the transition printed areas215,225ofFIG.5Aand printed at the same location on a golf ball to produce an overlap printed area (e.g., the overlap printed area230). In the embodiments ofFIGS.7A-7D, each of the printed areas in the pairs240,250,260,270covers equally-sized areas (e.g., identical rectangular shapes). InFIG.7A, the pair240includes a first printed area242and a second printed area244. The first printed area242includes a “screened” or “light” appearance due to a relatively low ink density compared to printing a solid color. For example, the first printed area242may include an ink density that is approximately half of a desired ink density of a main printed area. In other embodiments, the first printed area242may include an ink density that is approximately 5-85% of an ink density of a main printed area. The second printed area244may also include a screened appearance. For example, the second printed area244may also include an ink density that is approximately half of the desired ink density of the main printed area. Described another way, each of the first printed area242and second printed area244may include an ink density that is approximately half of an ink density of an adjacent main printed area (referring toFIG.5A, the transition printed areas215,225may include an ink density approximately half of an ink density of the adjacent main printed areas210,220). As a result, a combination of the first printed area242and the second printed area244may produce an overlap printed area having a desired ink density that is sufficiently similar to the adjacent main printed areas of the markings while also aiding in alignment of the main printed areas relative to one another. While transition printed areas that are each roughly 50% of a desired ink density (also referred to herein as a finished ink density) may combine to produce 100% of a desired ink density across an entire overlap printed area, it is contemplated that other combinations may be used and/or necessary to produce a desired appearance. For example, a transition printed area with an ink density less than 50% of a finished ink density may be combined with a transition printed area having more than 50% of the finished ink density (e.g., 30% and 70%). In an exemplary embodiment, a transition printed area may have an ink density that is approximately 5-85% of an adjacent main printed area. In another example, a combination of two overlapping printed areas may not spread evenly across a surface such that the combination does not produce 100% of a finished ink density (and thus the overlap printed area may not match the adjacent main printed areas of the individual markings). In this way, different combinations of ink densities that theoretically add up to be more than 100% of a finished ink density may be utilized.
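The ink-density bookkeeping behind these pairings can be expressed numerically. The Python sketch below assumes a simple additive model in which overlapping transition areas combine linearly toward the finished ink density; as noted above, real ink may not spread evenly, so the model, the 16 μm finished density, and the tolerance are illustrative assumptions rather than disclosed values.

    # Hedged sketch of the overlap ink-density arithmetic described above, under an
    # idealized additive model. All numeric values here are illustrative assumptions.
    def overlap_density(first_fraction, second_fraction, finished_density_um=16.0):
        """Predicted overlap-area density (microns) from two transition-area fractions
        of a finished ink density (e.g., 0.5 and 0.5, or 0.3 and 0.7)."""
        return (first_fraction + second_fraction) * finished_density_um

    def matches_finished(first_fraction, second_fraction, rel_tolerance=0.10):
        """True if the idealized overlap density lands within a tolerance of 100%."""
        return abs((first_fraction + second_fraction) - 1.0) <= rel_tolerance

    for first, second in [(0.5, 0.5), (0.3, 0.7), (0.6, 0.6)]:
        print(first, second, round(overlap_density(first, second), 1), matches_finished(first, second))
    # The 50/50 and 30/70 splits sum to the finished density; the 60/60 split
    # intentionally overshoots, reflecting the note that sums above 100% may still
    # be used when ink does not spread evenly.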
For example, two similar markings having 50-75% of a finished ink density may be combined and still produce an overlap printed area having a desired ink density that matches the appearance of the adjacent main printed areas. The pair250inFIG.7Bprovides an alternative example in which a screened first printed area252is combined with a darker-appearance second printed area254. For example, the first printed area252may include an ink density between 5-50% of a finished ink density and the second printed area254may include an ink density between 50-100% of a finished ink density. The relative ink densities that are used to produce a desired finished ink density may depend on factors including the type of ink, the surface of the item receiving the ink, the size of the stamp, the color of the ink, the type of printing pad, etc. In other embodiments, transition printed areas for producing overlap printed areas may include a gradient configuration, as shown in the pairs260,270ofFIGS.7C and7D. The pair260includes a first printed area262and a second printed area264. The first printed area262includes a gradient progressing in a first direction and the second printed area264includes a gradient progressing in a second, opposite direction. The gradients of the markings262,264may include discrete sections (e.g., four discrete sections) having progressively different ink densities producing the gradient appearance. The directions of the gradients are configured such that a lightest end of each marking262,264overlaps a darkest end of the opposing marking262,264to produce a consistent appearance of the corresponding overlap printed area. The pair270includes a first printed area272and a second printed area274. The first and second printed areas272,274have a similar gradient appearance to markings262,264but differ by having a continuous gradient (e.g., pixelated gradient) instead of the discrete gradients ofFIG.7C. The first and second printed areas272,274include gradients in opposing directions to produce a consistent appearance having a desired ink density when overlapped. FIGS.8A-8Cinclude additional examples of transition printed area pairs280,280A, and290. The pairs280,280A,290include printed areas having interlocking features. The pair280includes a first printed area282and a second printed area284. The pair280A includes a first printed area282A and a second printed area284A. The pair290includes a first printed area292and a second printed area294that are similar to the markings282,284. FIGS.8A-8Cfurther show a portion of an adjacent main printed area299for each marking282,284,282A,284A,292,294in the pairs280,280A,290. The main printed areas299are each delineated by a dotted line which forms no part of an actual marking and is shown only as a boundary. The main printed areas299include an ink density that is approximately the same as the ink density of a printed area created by printing any of the pairs280,280A,290at the same location. As shown, the printed areas282,282A,292include a main printed area299to the left of the transition printed area and printed areas284,284A,294include a main printed area299to the right of the transition printed area. In the pair280, the first printed area282includes a bracketed appearance forming a cavity286and the second printed area284includes a projection287from the adjacent main printed area299configured to fit into the cavity286. The projection287is bounded by a pair of cavities288that form a discontinuity with the adjacent main printed area299.
The printed areas282,284may be printed to “overlap” in that they interlock with each other to produce a continuous marking. The ink density of the markings282,284may be the same as the ink density of adjacent main printed areas299. The interlocking feature may help to ensure alignment of the adjacent markings (e.g., to produce a continuous linear marking in combination with no deviation in direction). The pair280A may be similar to the pair280, with the cavity286A instead being a screened printed area and the printed area284A including a projection287A also being a screened printed area such that a combination of the printed areas286A,287A produces an overlap printed area having an ink density that is approximately the same as the printed areas282A and299. In the pair290, the first printed area292also includes a bracketed appearance and the second printed area294includes a projection297from the adjacent main printed area299. However, instead of blank spaces, the markings292,294include screened sections296,298to complete a rectangle. In this way, the markings292,294overlap by interlocking and overlaying in certain portions. In both examples, a resulting overlap printed area may include a consistent appearance that matches an appearance of the adjacent main printed areas that are composed of a single stamp (e.g., a matching ink density). In the pairs280A and290, the printed areas include main sections that interlock and overlap sections that overlay each other (with some sections performing both functions). For example, inFIG.8B, the printed areas282A and287A may be main sections that interlock and the printed areas286A and287A may be overlap sections that overlay each other. InFIG.8C, the printed areas292and297may be main sections that interlock and all of the printed areas292,296,297,298may be overlap sections that overlay each other. The combination of these features may assist in alignment of the two stamps relative to each other while producing a desired appearance. The disclosed embodiments include linear markings that require more than a single stamp to produce the length or size of marking desired. For example, disclosed embodiments can produce linear markings that extend from 60-360° around the golf ball. The disclosed markings having transition printed areas enable the combination of two or more stamps to overlap and produce an overlap printed area that matches an appearance of adjacent main printed areas that are composed of a single stamp, as well as providing features to aid in alignment of the stamps relative to each other. The linear markings can have a consistent one-color appearance or may be multi-colored. Markings produced by disclosed embodiments do not need to be a consistent shape. Markings also do not need to be continuous in appearance (e.g., printed areas can include spaces of blank or non-printed areas therebetween). Combined markings may include letters, numbers, characters, symbols, arrows, etc., that are arranged in a linear direction. The disclosed features may be applied to these and other marking designs to produce a consistent appearance in which the overlap of two stamps is not identifiable to the naked eye of an observer. FIGS.9A-9B,10A-10B,11A-11B, and12A-12Binclude additional examples of applications of the disclosed embodiments. InFIGS.9A-9B, a first printed area310and a second printed area320may be separately stamped on a golf ball (not shown) to produce a combined marking330.
The combined marking330may be a logo or other indicia that is more complex than the linear markings described above. For example, the combined marking330may include multiple shapes or pictures that extend across enough of a surface of the golf ball such that more than one stamp at different sites is necessary to produce the combined marking330. The first printed area310may include a main printed area312and a transition printed area314. The second printed area320may include a main printed area322and a transition printed area324. The transition printed areas314,324may include one or more of the transition printed area features described herein, such as a screened or gradient appearance and/or interlocking features such that overlapped printing of the transition printed areas314,324produces an overlap printed area332having an appearance that matches at least a portion of one or more of the main printed areas312,322. For example, the overlap printed area332may include an ink density that is approximately the same as immediately adjacent portions of the main printed areas312,322(i.e., the adjacent portions of the shape that includes the transition printed areas314,324). InFIG.9B, the main printed areas312,322are each delineated from the overlap printed area332by a dotted line which forms no part of an actual marking and is shown only as a boundary. FIGS.10A-10Bdepict another embodiment and include a first stamp340and a second stamp350. Each of the first stamp340and the second stamp350includes spaced printed shapes (i.e., printed areas that are separated by blank or non-printed areas). In the embodiment ofFIGS.10A-10B, the stamps340,350include spaced arrows pointing in a common direction. The stamp340includes a main printed area342and a transition printed area344. The stamp350includes a main printed area352and a transition printed area354. The stamps340,350may be printed on a golf ball (not shown) such that the transition printed areas344,354are printed at the same location and thus overlap one another. The resulting combined marking360may include an overlap printed area362at the location of the printing of the transition printed areas344,354. In an exemplary embodiment, the main printed areas342,352each include at least one of the spaced printed shapes (e.g., one or more of the arrows). The transition printed areas344,354each include at least one of the spaced printed shapes (e.g., one or more of the arrows). The printed shapes in the main printed areas342,352may be printed in a finished ink density while the printed shapes in the transition printed areas344,354may be printed with an ink density that is the same as or similar to any of the other transition printed areas described herein. For example, the transition printed areas344,354may each be printed with a screened appearance or gradient comprised of less than 100% of the finished ink density of the main printed areas342,352. As a result, the overlap printed area362may be a spaced printed shape that includes an appearance matching the spaced printed shapes in the main printed areas342,352. For example, the overlap printed area362may be an arrow that matches the color and appearance of the other arrows in the combined marking360. FIGS.11A-11Bdepict another embodiment and include a first linear marking370, a second linear marking380, and a third linear marking385which may be printed to produce a combined linear marking390.
The combined linear marking390may be similar to the combined linear marking208, such as a single continuous linear marking that serves as a visual alignment aid. InFIGS.11A and11B, the vertical dotted lines are shown only as boundaries between stamp sections and do not represent printed markings. The first linear marking370includes a main printed area372and a transition printed area374. The transition printed area374is positioned at an end of the first linear marking370(the right end as shown inFIG.11A). The second linear marking380includes a main printed area382and a transition printed area384. The transition printed area384is positioned at an end of the second linear marking380adjacent to the transition printed area374of the first linear marking370(the left end as shown inFIG.11B). Unlike the embodiment ofFIGS.5A-5B, the transition printed areas374,384may not be printed at the same location on the golf ball (i.e., the transition printed areas374,384do not overlap each other). Instead, the transition printed area374may be printed to overlap a first portion of the third linear marking385and the transition printed area384may be printed to overlap a second portion of the third linear marking385. InFIG.11B, the transition printed areas374,384are printed to overlap the third linear marking385, which serves as a supplemental transition printed area, and the resulting appearance of an overlap printed area392matches an appearance of the main printed areas372,382to form the continuous linear marking390. In this embodiment, the transition printed areas374,384, and the third linear marking385, as the supplemental transition printed area, individually include ink densities less than an ink density of the main printed areas372,382. However, the separate but adjacent combinations of the transition printed areas374,384and the third linear marking385produce the ink density of the main printed areas372,382. As a result, the combined linear marking390appears to be one continuous stamp on the golf ball. FIGS.12A-12Bdepict an alternative embodiment related to designs that include multiple colored and/or otherwise distinct sections. Transition printed areas as described in at least some of the above embodiments include features to produce an overlap printed area that matches an appearance of an adjacent main printed area (i.e., sufficient similarity based on ink density). In other embodiments, two markings may include transition printed areas that combine to produce a distinct component of an overall indicia design. InFIGS.12A-12B, a first stamp410and a second stamp420may be printed by separate single pad hits on a golf ball (not shown) that combine to produce a combined marking430. The first stamp410may include a main printed area412and a transition printed area414. The main printed area412and the transition printed area414may be printed in a first color. The second stamp420may include a main printed area422and a transition printed area424. The main printed area422and the transition printed area424may be printed in a second color, which may or may not be the same as the first color. In an exemplary embodiment, the transition printed areas414,424may be configured to be printed at the same location on the golf ball to produce an overlap printed area432forming a distinct section of the overall stamp design. The overlap printed area432may be a combination of the first color and the second color to produce a third color.
In one example, the first color is red, the second color is yellow, and the third color is orange (a combination of red and yellow ink being printed on top of one another). In another example, the first color and the second color are the same color (e.g., blue) and the third color is a different, darker version of that color as a result of having a greater ink density at the overlap printed area432. In another embodiment, a gradient or ombre appearance of colors may be produced by overlapping colored transition zones. In the embodiment ofFIGS.12A-12B, the overlapping of two transition printed areas may be utilized to add to an overall stamp design, such as to introduce a new color into the design, while additionally helping to align adjacent separate stamps during printing to ensure proper positioning on the golf ball. The disclosed embodiments describe stamp designs that may be printed to produce combined markings on golf balls or other items. The single stamps include features, such as transition printed markings, that overlap with portions of other stamps to help align the stamps relative to each other and produce a desired combined appearance, such as a shading, lightness/darkness, color, etc., that matches the adjacent single stamp printed areas. The disclosed stamp designs may be pad printed using printing plates configured to produce the desired printed areas that make up the stamps. FIG.13Ais a top view of an embodiment of a printing plate500that may be used in a disclosed process, such as a pad printing process.FIG.13Bis a cross-sectional view of the printing plate500, taken at line A-A ofFIG.13A. The printing plate500includes an etching pattern510. The etching pattern510may be one or more depressions or wells formed in a surface of the printing plate500. The etching pattern510may be configured to receive ink for pad printing on a golf ball to produce a marking. The etching pattern510includes different sections having varying etch depths (ED) that correspond to different portions of the marking to be printed on the golf ball, such as a main printed area and a transition printed area. As described herein, a transition printed area may include a lesser ink density than a main printed area. The printing plate500may include the variation in etch depth in order to achieve the variation in ink density in the marking. In a first embodiment, the etching pattern510includes a first etch section512and a second etch section514. The first etch section512includes a first etch depth ED1and the second etch section514includes a second etch depth ED2. According to an exemplary embodiment, the first etch depth ED1may be approximately 10-22 μm. In another embodiment, the first etch depth ED1may be approximately 15-17 μm. The second etch depth ED2is less than the first etch depth ED1such that the first etch section512corresponds to a portion of a marking that is a main printed area and the second etch section514corresponds to a portion of a marking that is a transition printed area. For example, the second etch depth ED2may be approximately 5-85% of the first etch depth ED1. For instance, in one embodiment, the second etch depth ED2may be approximately 0.5-18.7 μm. In another embodiment, the second etch depth ED2may be approximately 0.75-14.5 μm. As a result, the printing plate500may be used to produce a marking having a main printed area with a finished ink density and a transition printed area with an ink density less than the finished ink density.
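The numeric ranges above follow directly from applying the 5-85% fraction to the first etch depth. A minimal Python sketch of that relationship is below; the function name is illustrative and not part of the disclosure.

    # Hedged sketch of the etch-depth relationship described above: the transition
    # (second) etch section is cut to roughly 5-85% of the main (first) etch depth.
    def transition_etch_depth_range(ed1_min_um, ed1_max_um, frac_min=0.05, frac_max=0.85):
        """Return the (min, max) second etch depth ED2 in microns for a first etch
        depth ED1 range, assuming ED2 is a fixed fraction of ED1."""
        return (round(frac_min * ed1_min_um, 2), round(frac_max * ed1_max_um, 2))

    print(transition_etch_depth_range(10.0, 22.0))  # (0.5, 18.7)   -> matches the 0.5-18.7 um figure
    print(transition_etch_depth_range(15.0, 17.0))  # (0.75, 14.45) -> matches the ~0.75-14.5 um figure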
The printing plate500may be used in combination with another printing plate for producing a second stamp also having a transition printed area to overlap the transition printed area produced using the printing plate500. For example, another printing plate may include an etching pattern that is a mirror image of the etching pattern510(e.g., the second etch section on the opposite end of the first etch section). Other combinations of printing plates having varying etch depths may also be used to produce a desired stamp design. In the embodiment ofFIG.13B, the first etch section512and the second etch section514have a step configuration to produce two sections having constant etch depths. However, other embodiments may have other configurations.FIG.13Cincludes an alternative cross-sectional design for the printing plate500, including a first etch section512A and a second etch section514A. The first etch section512A includes a constant etch depth ED3while the second etch section514A gradually decreases in depth from the etch depth ED3to a shallower terminal etch depth ED4. The etch depth ED3may be the same as ED1in some embodiments (e.g., 15-17 μm). In some embodiments, the etch depth ED4may be a small fraction (e.g., 5-10%) of the etch depth ED3(e.g., 0.75-1.5 μm). In other embodiments, the etch depth ED4may be zero such that the second etch section514A gradually transitions into the surface of the printing plate500. Additional embodiments may include other configurations for the first and/or second etch section, such as a curved or arcuate configuration for the second etch section. In some embodiments, an etch depth may remain constant between different sections while an etch volume is varied. For example,FIG.13Dincludes another alternative cross-sectional design for the etching pattern510, including a first etch section522and a second etch section524. The first etch section522and the second etch section524include the same etch depth ED5; however, the second etch section524includes protrusions526to lessen the volume of the second etch section524and thereby produce a stamp section with a lesser ink density compared to a stamp section produced by the first etch section522. While it is apparent that the illustrative embodiments of the invention disclosed herein fulfill the objectives stated above, it is appreciated that numerous modifications and other embodiments may be devised by those skilled in the art. Therefore, it will be understood that the appended claims are intended to cover all such modifications and embodiments, which would come within the spirit and scope of the present invention.
44,465
11857848
DETAILED DESCRIPTION OF THE DRAWINGS The present disclosure is directed to golf club heads that are produced using an additive manufacturing process (i.e., printed layer by layer). In particular, a golf club head of the present disclosure includes a club head body that is manufactured using an additive manufacturing process and may be fabricated from a metal material or a metal alloy. In some embodiments, the club head body may include a segmented or lattice portion that is created during the additive manufacturing process and, therefore, is formed integrally with the club head body (i.e., the lattice portion and the club head body are a unitary component). In general, the incorporation of a segmented or lattice portion enables various material and/or performance characteristics of a golf club head to be selectively manipulated to achieve, for example, desired CG locations, MOIs, mass properties, face flex, distance variability, launch conditions, and aesthetics, among other things. The terms "segmented portion," "lattice portion," and "lattice structure," as used herein, refer to portions of a golf club head that are formed by a plurality of interconnected segments, interconnected shapes, or connected surfaces. In some embodiments, the plurality of interconnected segments, interconnected shapes, or connected surfaces may be formed integrally with a club head body by an additive manufacturing process. In some embodiments, the lattice portion may define at least one cutout, or absence of material, that is formed within a unit cell (e.g., a repeated pattern defined by the lattice structure). The use of a lattice portion within a golf club head may allow various manufacturing and performance characteristics to be modified or customized. For example, a lattice portion may define a substantially reduced weight or density when compared to a solid material. As such, the placement of a lattice portion within a golf club head may be varied using an additive manufacturing process to selectively locate the CG of a golf club head in a desired location. In addition, the incorporation of a lattice portion into a golf club head may reduce the overall volume of material needed to manufacture the golf club head. The golf club heads disclosed herein may be manufactured using one or more of a variety of additive manufacturing processes. For example, a golf club head according to the present disclosure may be at least partially fabricated using a metal powder bed fusion additive manufacturing process that fuses, melts, or bonds metal powder particles layer by layer along a build plane. In some embodiments, the metal powder particles may be melted or fused by a laser that forms cross-sections of a golf club head layer by layer along a build plane. In some embodiments, the metal powder particles may be melted or fused by an electron beam or ultrasonic energy to form cross-sections of a golf club head layer by layer along a build plane. In some embodiments, the metal powder particles may be bonded to form cross-sections of a golf club head layer by layer along a build plane via the deposit (e.g., printing) of a binder.
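To make the CG point above concrete, the Python sketch below uses a two-region toy model in which a rear region is printed either as solid material or as a lattice at 30% relative density; every number in it is illustrative and none is taken from this disclosure.

    # Hedged sketch (illustrative values only): replacing solid material with a
    # lattice of lower relative density reduces that region's mass, so the
    # mass-weighted CG shifts toward the remaining solid regions.
    def cg_1d(masses_g, positions_mm):
        """Mass-weighted average position along one axis (e.g., front-to-back)."""
        total = sum(masses_g)
        return sum(m * x for m, x in zip(masses_g, positions_mm)) / total

    # Two-region toy model: a front solid region and a rear region that is either
    # solid or printed as a 30%-relative-density lattice.
    front_mass, front_pos = 200.0, 10.0          # grams, mm behind the face
    rear_solid_mass, rear_pos = 150.0, 60.0
    rear_lattice_mass = 0.30 * rear_solid_mass   # lattice at 30% of solid density

    print(cg_1d([front_mass, rear_solid_mass], [front_pos, rear_pos]))    # ~31.4 mm
    print(cg_1d([front_mass, rear_lattice_mass], [front_pos, rear_pos]))  # ~19.2 mm (CG moves toward the face)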
The various methods of additive manufacturing used to manufacture golf club heads according to the present disclosure may include binder jetting, direct energy deposition, selective laser melting (SLM), direct metal laser sintering (DMLS), fused deposition modeling (FDM), electron beam melting, laser powder bed fusion (LPBF), ultrasonic additive manufacturing, material extrusion, material jetting, Joule printing, electrochemical deposition, cold spray metal printing, DLP metal printing, Ultrasonic Consolidation or Ultrasonic Additive Manufacturing (UAM), LENS laser-based printing, electron beam freeform fabrication (EBF3), laser metal deposition, or carbon fiber additive manufacturing. Referring now toFIGS.1-8, a putter-type club head40is shown in accordance with the present disclosure that may be formed through an additive manufacturing process. The club head40defines a body42and a face insert44, which may be coupled to one another after machining of the body42, as will be discussed in greater detail below. The body42defines a toe side46, a heel side48, a front side50, a top side or crown52, a bottom side or sole54, and a rear side56. Referring toFIG.1, the body42of the club head40is formed from metallic and/or non-metallic materials. For example, the body42may be formed from any one of or a combination of aluminum, bronze, brass, copper, stainless steel, carbon steel, titanium, zinc, polymeric materials, and/or any other suitable material. The body42includes a front portion60and a rear portion62, the front portion60defining a face insert cavity64(seeFIGS.9and11)that is configured to receive the face insert44. The face insert44defines a striking surface68. The striking surface68comprises an entirety of the front surface of the face insert44, and is configured for contacting a golf ball. A peripheral edge70of the face insert44aligns with an inset edge72of the face cavity64(seeFIG.9) of the body42. The striking surface68further defines a first surface74, a second surface76, a third surface78, and a fourth surface80that define various angles with respect to a plane normal to the ground when the club head40is at address. The first surface74of the striking surface68may define an angle of about 1 degree, the second surface76may define an angle of about 2 degrees, the third surface78may define an angle of about 3 degrees, and the fourth surface80may define an angle of about 4 degrees. However, in some embodiments the surfaces74,76,78,80may define different angles, or may define the same angle. To that end, the striking surface68may comprise only a single, planar surface that defines a constant angle. Referring toFIG.3, the body42defines a toe portion or region84, a medial portion or region86, and a heel portion or region88. The heel region88of the body42includes a hosel90that extends upward therefrom. In some embodiments, the heel region88defines an aperture (not shown), disposed within the heel region88, that is configured to receive and secure a shaft (not shown) of the golf club (not shown). Referring specifically toFIGS.3and4, the heel side48of the body42is rounded and extends from a lower heel-side inflection point96to an upper heel-side inflection point98. The sole54of the body42intersects with the heel side48at the lower heel-side inflection point96, while the crown52of the body42intersects with the heel side48at the upper heel-side inflection point98. The sole54defines a heel segment100, a medial segment102, and a toe segment104.
The heel segment100and the toe segment104are generally angled and planar when viewed in elevation, while the medial segment102connects the toe segment104with the heel segment100and is generally planar. Further, portions of the medial segment102are parallel with respect to the ground (not shown) when the head40is at address. A portion of the toe segment104curves upward to a lower toe-side inflection point108where the toe segment104of the sole54intersects with the toe side46. A portion of the toe side46curves upward and inward, in a direction of the hosel90, and defines a generally straight portion of the toe side46that extends to an upper toe-side inflection point110. The top side52intersects with the toe side46at the upper toe-side inflection point110. When viewed from the front, the top side52extends laterally from the upper toe-side inflection point110to the upper heel-side inflection point98, and is interrupted by the hosel90. Referring toFIG.3, the toe region84, the medial region86, and the heel region88are defined by vertical lines or planes P1and P2that extend through intersections of the heel segment100and the medial segment102, and the toe segment104and the medial segment102, respectively. The hosel90is located within the heel region88, and extends vertically from the top side52. In some embodiments, the hosel90may be at least partially disposed within the medial region86. The hosel90includes a plurality of cutouts112defined within a hosel arm114, which are generally in the shape of alternating triangles. The cutouts112may extend entirely through a width of the hosel90, or the cutouts112may not extend entirely through the hosel90, i.e., in the present embodiment, the hosel90does not include apertures that extend completely through the hosel arm114. In some embodiments, the cutouts112may align on a front and rear of the hosel (seeFIGS.3and4). In alternative embodiments, only the front side of the hosel90may include the cutouts112or only the rear side of the hosel90may include the cutouts112. A shaft bore116extends from the hosel90, the shaft bore116being sized and shaped to receive a shaft (not shown), or an element that may be coupled with the shaft. Referring again toFIG.1, a surface defining the front region60of the top side52is generally planar, while surfaces defining the rear region62of the top side52comprise a plurality of depressions, recesses, and other features. The front region60and the rear region62of the top side52are separated by a seam or groove120that extends from the heel side48to the toe side46. However, in embodiments that do not include the seam or groove120, the front region60and the rear region62are delineated by a plane that extends vertically through the location where the seam120would otherwise be. A shaft cavity122is further shown inFIG.1, the shaft cavity122defining a cylindrical cavity within the shaft bore116into which the shaft (not shown) may be inserted. The shaft cavity122may be modified or formed to achieve any number of putter shaft positions, including heel, centered, and hosel offset. Still referring toFIG.1, the face insert44is attached to or press fit within the insert cavity64of the body42. In some embodiments, the face insert44is secured and anchored via an interlocking structure (not shown). As provided in the cross-sectional views below, a bonding agent or adhesive126(seeFIGS.17and23) may be used to help secure the face insert44into the face cavity64. Regardless of the type of retention mechanism used, the face insert44is fixed securely within the face cavity64of the body42.
Referring now toFIG.2, a rear view of the club head40is shown. A head cavity130is visible from the rear view, which houses a first weight132, a second weight134, and an internal lattice structure136. In some embodiments, the club head40may not include the first weight132and the second weight134. For example, the club head40may include solid material, the internal lattice structure136, or a cavity (i.e., no material) in place of the first weight132and the second weight134. In the illustrated embodiment, the lattice structure136is unitary with the body42, i.e., the lattice structure136comprises the same material and is manufactured at the same time as the body42. The first weight132and the second weight134are separate components, which may comprise tungsten or another type of metal. The lattice structure136is preferably 3D printed with the rest of the body42. However, in certain embodiments, the lattice structure136may comprise a separate cartridge that is insertable into the cavity130. The first weight132is located within the heel region88, the second weight134is located within the toe region84, and the lattice structure136extends across the heel region88, the toe region84, and the medial region86. Still referring toFIG.2, the front portion60and the rear portion62of the club head40are shown separated by the groove120. As noted above, outer sides defining the front portion60are generally planar, while the rear portion62defines a rear upper side140and an inset region142. The inset region142defines a first or upper inset region144and a second or lower inset region146. The upper inset region144is defined by a first inset side148, which is a beveled edge that extends downward from the rear upper side140toward the sole54. A first inset platform150extends from portions of the first inset side148, the first inset platform150being generally parallel with respect to the rear upper side140. The first inset side148is generally U-shaped, and defines a periphery of the first inset region144. The second inset region146is also shown inFIG.2, the second inset region146being defined by second inset sides152that are disposed on opposing sides of an alignment platform154. The alignment platform154includes a plurality of alignment notches or features156. The plurality of alignment features156may comprise any number and any type of designs that are sufficient to aid a golfer in aligning the putter-type golf club head40with a cup. In the present embodiment, the alignment features156are notches that are three-dimensional features; however, in alternative embodiments, the alignment features156may be planar features, and may be painted along the alignment platform. A central alignment feature158is disposed centrally along the alignment platform154, and is configured to allow a golfer to align the putter head40with the cup before striking a golf ball (not shown). A window160is disposed within the second inset region146, between the second inset sides152, the window160being an opening that allows for air to enter the cavity130above the alignment platform154. As will be discussed in greater detail below, it is preferable to include blow-through apertures along varying portions of a 3D printed putter head to allow excess material to be removed from the putter head40during the manufacture thereof, i.e., de-caking. It is for at least this reason that various apertures may be included along portions of the club40during at least some stages of the manufacturing process.
Any commercial blower or air moving device may be used to blow excess material from within the putter head40. In some embodiments, a vacuum may be used to suck excess material from within the putter head40. In other embodiments, one or more tools including brushes, chisels, picks, or other implements are used to manually remove powder from within the putter head40. During post-printing processing, excess powder may be vacuumed or blown off of a build box that may include one or more of the putter heads40. After initially vacuuming or blowing, manual material removal is done to remove excess material from the putter head40. At this stage, remaining excess powder may be removed with one or more of the above-noted tools. Still referring toFIG.2, the profiles of the alignment features156may define a variety of shapes or cross sections that are sufficient to delineate the size and shape of the alignment features156. The alignment features156may define shallow grooves in the alignment platform154, the depths of which may be selected to sufficiently enable application and retention of a paint fill. In some embodiments, the alignment features156are filled with a paint or other organic coating that may be distinguished in appearance from its surrounding environment. In some embodiments, the grooves are partially or entirely filled with a material distinguished in appearance from its surrounding environment, e.g., a colored opaque or translucent polymer. Referring now toFIG.4, the first and second weights132,134, the window160, the alignment platform154, the first inset side148, and the lattice structure136are shown in greater detail. The cutouts112along the hosel90are also visible in the rear view ofFIG.4. The first weight132and the second weight134are shown snugly disposed between an upper retention feature166and a lower retention feature168. The upper and lower retention features166,168generally define cylindrical portions having voids therebetween that allow the first and second weights132,134to be inserted therein, such that the first and second weights132,134fit snugly between the upper and lower retention features166,168. In some embodiments, a lock and key feature (not shown) within the cavity130retains the first and second weights132,134in place, so as to prevent undesired rotation of the first and second weights132,134. While the first and second weights132,134are shown having a particular diameter, varying types and sizes of weights are contemplated. In some embodiments, the weights132,134are removable, and may be removed and replaced by a user or a technician. As shown in the figures, the first and second weights132,134define an outer diameter D1that is identical, and that is larger than an outer diameter D2of the upper and lower retention features166,168. Further, while the first and second weights132,134are shown being disposed entirely within the heel region88and the toe region, respectively, it is contemplated that the first and second weights132,134may extend across one or more of the regions100,102,104. Still referring toFIG.4, the lattice structure136is shown in greater detail. The lattice structure136is defined by a plurality of angled segments172, a plurality of horizontal segments174, and a plurality of vertical segments176, which combine to form a plurality of triangles or triangular portions. Air spaces are formed between the plurality of segments172,174,176, which may be filled with a filler material in some embodiments, as discussed in greater detail below. 
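The triangular construction of the lattice structure136can be pictured with a short parametric sketch. The sketch below is illustrative only: the pitch and height values are hypothetical, the row is drawn as a flat 2-D arrangement of struts, and the helper name lattice_row is not taken from the disclosure.

```python
# Hypothetical parametric sketch of one lattice row built from vertical, horizontal,
# and angled segments that together form a run of right triangles. All dimensions
# are example values; each strut is a pair of (x, y) endpoints in millimeters.

def lattice_row(num_triangles: int = 4, pitch_mm: float = 10.0,
                height_mm: float = 8.0, y0_mm: float = 0.0):
    """Return a list of ((x1, y1), (x2, y2)) strut segments for one row."""
    struts = []
    for i in range(num_triangles):
        x = i * pitch_mm
        struts.append(((x, y0_mm), (x, y0_mm + height_mm)))             # vertical segment
        struts.append(((x, y0_mm), (x + pitch_mm, y0_mm)))              # horizontal segment
        struts.append(((x, y0_mm + height_mm), (x + pitch_mm, y0_mm)))  # angled segment (hypotenuse)
    x_end = num_triangles * pitch_mm
    struts.append(((x_end, y0_mm), (x_end, y0_mm + height_mm)))         # closing vertical segment
    return struts

row = lattice_row()
print(f"{len(row)} struts form a 4-triangle row")   # 13 struts for four right triangles
```

In the printed part, the strut intersections would additionally be filleted rather than left as sharp corners, for the reasons discussed below.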
An outermost or rearmost row178(seeFIG.23)of the lattice structure136defines four separate right triangles, each of the right triangles being partially defined by one of the angled segments172and one of the vertical segments176. An innermost row180is also shown inFIG.23. Curved rounds are defined at intersection points182of the segments172,174,176. The intersection points182are rounded (e.g., define a curvature, or a radius of curvature, and are not formed by the intersection of one or more straight lines) rather than cornered for manufacturing purposes. For example, it has been found that the overall strength of the lattice structure136is increased with the inclusion of curved rounds at the intersection points182. Through testing, it has been determined that when the intersection points182define sharp corners, the lattice structure is more likely to crack or break. Adding radii to sharp edges within geometry that is formed through 3D printing solves several issues, including: helping with de-caking (helps against green part destruction when blowing air against the lattice structure136), reducing sintering drag, and avoiding stress concentrations at the edges of the lattice structure136. In some embodiments, the club head40may be 3D printed using binder jetting, which is a cost-effective way to produce components in low-volume batches with geometries that cannot be efficiently manufactured using conventional manufacturing methods. Metal binder jetting builds components by depositing (e.g., printing) a binding agent onto a layer of powder through one or more nozzles. The club head40is 3D printed, layer by layer, along a first or build plane, as discussed in greater detail herein. The printing occurs at room temperature, or slightly above room temperature, which means that thermal effects are typically not present in the final printed components. However, printing may occur at higher or lower temperatures. Metal binder jetting is a two-stage process, and involves a printing step and an essential post-processing step (sintering). Binder jetting involves spreading a thin layer of metal powder over a build platform, selectively depositing droplets of a binding agent that bonds the metal powder particles, and repeating the process until the build is complete. Once the build process is complete, the printed part may be excavated from the powder in the build platform and subsequently removed from the build platform. The result of the printing process is a part that is in the so-called "green" state, which is moved to a post-processing step to remove the binding agent and create the metal part. After the club head40has been printed, additional intermediate steps may be required before the club head40enters into a sintering step. In some embodiments, the part may need to go through a curing stage to allow the binder to set properly. Still further, in some embodiments before sintering, a de-binding step may be required to drive out any remaining binder. However, in some embodiments the curing step and the de-binding step may not be needed. There are two variations for the post-processing step. When using infiltration, the green part is first washed to remove the binding agent, creating a "brown" part with significant internal porosity, e.g., 70%. The brown part is then heated in an oven in the presence of a low-melting-point metal, such as bronze. The internal voids are filled, resulting in a bi-metallic part. When using sintering, the green part is placed in an industrial furnace. 
There, the binder is first burned off and the remaining metal particles are sintered together. The result is a fully metal component having dimensions that are approximately 20% smaller than the original green part. To compensate for shrinkage, the parts are printed larger, i.e., about 10%, or about 15%, or about 20%, or about 25%, or about 30% larger than final club head40. In some embodiments, the parts are printed between about 10% and about 30% larger, or between about 15% to about 25% larger, or between about 16% and about 20% larger. In some embodiments, the larger dimensions defined by the printed part (pre-sintering) may leave enough material to enable a printed club head to meet factory finish standards. In some embodiments, the golf club head may be machined (e.g., via milling or turning) post-sintering to obtain, for example, the loft, lie, weight, dimensions, volume, shape, etc., defined by the factory finish. In some embodiments, the club head40may be 3D printed using DMLS, or another one of the above-listed additive manufacturing techniques. In embodiments where the club head40is created using DMLS, a high powered laser is used to bond metal particles together, layer by layer, to create the club head40. While the process of DMLS involves fusing material particles to one another on a molecular level, many different metal alloys are compatible with this type of additive manufacturing technique. After printing, i.e., after a laser has selectively bonded the metal particles to one another, the club head40is cooled and loose powder is extracted. Post-processing steps may involve stress relief via thermal cycling, machining, heat treatment, or polishing. Various other post-processing steps may also be involved through printing of the club head40using DMLS or any of the above techniques. For example, in some additive manufacturing processes (e.g., DMLS) one or more supports (not shown) may be included on the club head40during printing to prevent the part from warping. Further, in DMLS, because the printed club head40is bonded to a build plate, a method of cutting may be required to cut the printed parts from the build plate. Electrical discharge machining (EDM) may be used to cut the printed parts from the build plate. Cutting or removing the parts may be required when using DMLS to build the parts, but may also be required when using other forms of additive manufacturing such as directed energy deposition DED or material extrusion. Referring now toFIGS.5and6, side profiles of the club head40are shown in detail. More specifically, the toe side46is shown inFIG.5, while the heel side48is shown inFIG.6. The sole54or underside of the club head40is visible in the figures, and a plurality of design elements are visible spanning the front portion60and the rear portion62of the sole54. Fastener apertures190are also visible, the fastener apertures190being sized and shaped to allow fasteners192(seeFIG.8) to be inserted into the fastener apertures190, to thereafter retain the first and second weights132,134in position. The fastener apertures190are formed after the 3D printing process has occurred, i.e., in a post-printing state, as will be discussed in greater detail hereinafter below. Still referring toFIGS.5and6, the hosel90is shown in greater detail, the hosel90being disposed at an angle offset from a plane that is normal with respect to the ground. 
The front face of the body42is also shown disposed at an offset angle with respect to a plane that is normal with respect to the ground when the club head40is at address. The front face50and the hosel90are angled in opposing directions with respect to the plane that is normal with respect to the ground when the club head40is at address. Referring now toFIGS.7and8, top and bottom views of the club head40are shown in detail. Referring specifically toFIG.7, the front portion60and the rear portion62are clearly shown being separated by the groove120. However, as noted above, in embodiments that do not include the groove120, the front portion60and the rear portion62are separated by a plane that extends through the groove120. The shaft bore116and shaft cavity122are also shown in greater detail. The shaft cavity122is disposed at an offset angle with respect to an axis normal to the ground when the club head40is at address. The planar portions along the front region60of the body42are also shown clearly inFIG.7. Further, the first and second inset regions144,146are depicted, and the plurality of alignment features156are shown surrounding the central alignment feature158. As illustrated inFIGS.7and8, a cutout region196is visible, the cutout region196following a profile of the second inset sides152and an outer edge198of the alignment platform154when viewed in the plan views ofFIGS.7and8. While the term "cutout" is used herein, it should be appreciated that the manufacturing techniques utilized to create the club head40may or may not require the physical removal or grinding down of some portions, while certain portions do have to be removed or otherwise grinded down, as discussed with respect toFIGS.9-13below. As such, a "cutout" may refer to a portion that is devoid of material, not necessarily a portion that has had material physically removed therefrom. Referring toFIG.8, a bottom view of the club head40is shown. Various design features200are shown spanning the front portion60and the rear portion62of the sole54, and two fasteners192are shown along the rear portion62of the sole54, the fasteners192being aligned with the weights132,134. The cutout region196is visible inFIG.8, which is shown defining various curved and straight surfaces. The fasteners192are shown disposed on opposing sides of the cutout region196. In some embodiments, the fasteners192are configured to be removed. However, in some embodiments, the fasteners192are permanently affixed to the club head40via an adhesive or another type of retention mechanism. The particular location of the fastener apertures190may be adjusted depending on a desired weight or center of gravity (CG) of the club head40. Still further, additional weights (not shown) may be added along the club head40. During manufacturing of the club head40, the weights132,134are inserted into the head cavity130and secured to the club head40via one or more fasteners, an adhesive, or another type of securement mechanism. Referring now toFIGS.9-13, a golf club head post-printed component204is shown. The post-printed component204depicts the club head40in a post-printed, pre-grinded state. Further, the post-printed component204is shown without the face insert44applied to the body42; thus, the face insert cavity64is visible, the face insert cavity64being at least partially defined by an insert wall206. The post-printed component204is preferably formed using binder jetting, as described above. 
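Because the post-printed component204is printed oversized and then relies on sintering shrinkage to reach its final dimensions, the underlying scaling arithmetic can be illustrated with a brief sketch. This is a minimal illustration only, assuming uniform, isotropic linear shrinkage; the dimension used is hypothetical, and the function name is not taken from the disclosure.

```python
# Minimal sketch of the pre-sintering oversizing arithmetic (illustrative only).
# Assumes uniform, isotropic linear shrinkage; real binder-jet shrinkage can be
# anisotropic and material-dependent, and the inputs below are example values.

def oversize_factor(linear_shrinkage: float) -> float:
    """Scale factor applied to the factory-finish geometry before printing.

    If the part shrinks by `linear_shrinkage` (e.g., 0.20 for 20%) during
    sintering, the green part is printed at 1 / (1 - shrinkage) of the final
    size so that final = printed * (1 - shrinkage).
    """
    return 1.0 / (1.0 - linear_shrinkage)

target_blade_length_mm = 110.0            # hypothetical factory-finish dimension
factor = oversize_factor(0.20)            # ~1.25 for a 20% linear shrinkage
printed_blade_length_mm = target_blade_length_mm * factor

print(f"oversize factor: {factor:.3f}")
print(f"printed (green) length: {printed_blade_length_mm:.1f} mm")
```

Under this assumption, the roughly 20% shrinkage noted above corresponds to printing about 25% oversize, which sits within the approximately 10% to 30% oversizing range described for the club head40.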
The post-printed component204may be printed at an angle that is offset by about 30 degrees with respect to the orientation shown inFIG.12, i.e., 30 degrees counterclockwise. In some embodiments, the post-printed component may be printed at an angle of between about 5 degrees and about 60 degrees offset, or between about 10 degrees and about degrees offset, or between about 20 degrees and about 40 degrees offset from the orientation shown inFIG.12, i.e., from when the component204is at address. When manufacturing a golf club head via an additive manufacturing process, it is beneficial to ensure that the layer lines created during the additive manufacturing process avoid sharp surface interfaces (e.g., corners, edges, etc.) that fall along layer line edges. For example, in a binder jetting process, if a golf club head is printed such that the front face or striking surface is arranged parallel to the build plane (e.g., the front face is printed flat), the printed club head may show visible layer lines at shallow elevation changes, which may produce sharp corners that fall directly on a layer line edge and create cracks. The rotational offset that the post-printed component204is printed at, described above, may aid in preventing the printing of visible layer lines with sharp corners that fall on the layer line edge. In addition, printing at the rotational offset may prevent cracking of the green part during the print or sintering stages. Further, the rotational offset that the post-printed component204is printed at may also aid with Z-height limitations in, for example, a binder jetting process. For example, a thickness in the Z-direction (i.e., a height defined by a layer perpendicular to the build plane) may be reduced as the layers increase in Z-height during a binder jetting process. That is, the lower layers may define an increased thickness relative to the upper layers due to the weight of the overall structure weighing down on the lower layers. By printing the post-printed component at a rotational offset, the total Z-height defined by the component during the build is reduced, when compared to printing the component in the orientation ofFIG.12. In some embodiments, the club head40may be printed in multiple components. For example, the hosel90and the body42may be printed, via binder jetting, as separate components. In this way, for example, the Z-height defined by the components being printed may be further reduced and the build efficiency (i.e., the number of components printed during a build job) may be increased. Referring specifically toFIG.9, the face insert cavity64is shown in greater detail. The face insert cavity64is defined by the peripheral edge70that generally corresponds with an outer profile of the face insert44(seeFIG.1) and the insert wall206. A material deposit208is centrally disposed within the face insert cavity64and extends outward from the insert wall206, the material deposit208defining a planar surface210and an outwardly extending platform. The material deposit208is intended to be machined off of the club head40. However, in some embodiments, only a portion of the material deposit208may be removed from the post-printed component204. The centrally disposed material deposit208may be provided or printed along what may be considered the "sweet spot" of the club head40. As a result, the machining of the centrally disposed material deposit208may allow material to be removed in a manner that enhances or otherwise modifies the sweet spot. 
The location and size of any remaining portion of the material deposit208within the insert cavity64may affect the characteristic time ("CT") of the club head40. Still referring toFIG.9, a first or toe-side aperture214and a second or heel-side notch216are shown within the insert cavity64. As noted below, the heel-side notch216becomes the heel-side aperture216after processing of the post-printed component204. The toe-side aperture214is sized and shaped to allow air to flow through the post-printed component204during the manufacturing process to allow certain post-production material to be removed from the post-printed component204. The toe-side aperture214may also be sized and shaped to receive one or more portions of the face insert44, for example, in a lock-and-key fashion, so as to retain the face insert in place within the insert cavity64. During the post-processing step of manufacturing, the heel-side notch216is machined to become a heel-side aperture216, similar to the toe-side aperture. In some embodiments, there may be one or more additional apertures that are provided along the insert wall206. Since the particular face insert44described herein has regions defining different lofts, the face insert cavity64may be sized and shaped differently to receive alternatively shaped inserts. Referring now toFIG.10, the sole54of the post-printed component204is shown in greater detail. A rear side of the hosel90is also shown in greater detail. As shown in this particular view, the post-printed component204includes several locations with additional material deposits208that are ultimately removed during a post-processing step. However, since the post-printed component204is depicted in a form after having been 3D printed, various portions of the post-printed component204include the material deposits208, which are machined off or are otherwise removed during a post-printing process. For example, and still referring toFIG.10, the material deposits208along the sole54that are cylindrical in nature are formed where the fastener apertures190are disposed in the final form of the club head40. Still further, one of the material deposits208is shown extending outwardly from the hosel90. The material deposits208may be formed in varying locations along the post-printed component204, which may exist after 3D printing because of one or more factors associated with 3D printing. For example, certain material deposits may be formed to enhance certain structural features of the post-printed component204during post-processing steps. Still further, material deposits may be formed or printed because of the technique that is utilized for manufacturing the post-printed component204, or to aid in verifying specifications, machining, polishing (as guides), or fixturing the post-printed component. In some embodiments, the one or more material deposits208may be provided so as to act as a reference circle to indicate a center of a desired bored or tapped hole. For example, the material deposits208located along the sole54are concentric circles that indicate where the holes should be drilled to reach the locations in which the weights are located. The material deposits208may be formed at a specified height so as to more easily machine portions of the post-printed component204. Referring toFIG.11, the material deposits208along the sole of the post-printed component204are shown more clearly. 
The toe-side aperture214is also shown in greater detail, and the upper and lower retention features166,168for the first weight132are visible through the toe-side aperture214. Guide holes218are shown disposed along the centrally raised planar surface210. A centered hole219is also shown inFIG.11, which, in combination with the guide holes218, is used to center the post-printed component204for various post-printing processes. For example, the centered hole219is located in the geometric center of the post-printed component204, and may be used as a machining "chuck" to elevate the post-printed component204and allow for machining of the various surfaces of the post-printed component. By placing the hole219centrally along the surface210, various efficiencies are achieved, since elevating the post-printed component204in this manner allows the various surfaces to be machined to tighter tolerances. Referring toFIG.12, the material deposit208that extends from the shaft bore116is shown in greater detail. Further, the shaft bore116is entirely filled in, i.e., there is no shaft cavity122until the material disposed within the shaft bore116has been machined out to create the shaft cavity122. A rear view of the post-printed component204is shown inFIG.13, where the material deposits208are shown in greater detail. The heel-side aperture216is also visible throughFIG.13, the heel-side aperture216being aligned with the upper and lower retention features166,168within the heel region88. The lattice structure136is visible inFIG.13, which is generally in the same configuration as it is within the club head40. While the foregoing description relating to the post-printed component204includes various aspects that are not shown or included within the club head40, alternative variations of the post-printed component204are contemplated that can achieve various aspects of the club head40. Once the post-printed component204is ready for sintering, the post-printed component is placed into a sintering furnace face down, i.e., with the face cavity64facing downward. In some embodiments, the post-printed component204may be sintered in an orientation other than face down. For example, the post-printed component204may be sintered sole down (i.e., with the sole54facing downward). Alternatively, a sintering support (seeFIGS.67-70) may be used to support the post-printed component204in a desired rotational orientation relative to gravity. Now turning to the views ofFIGS.14-17, cross-sectional views of the club head40are shown to illustrate the internal structure within the club head40. Referring specifically toFIG.14, some of the angled segments172of the lattice structure136are shown extending from an inner surface220of the insert wall206. The heel-side aperture216is also shown, and a back side222of the face insert44is visible. The angled segments172of the lattice structure136extend from upper and lower ends of the insert wall206, toward the rear portion62of the club head40. A hosel bar224is also shown, the hosel bar224being generally aligned with the hosel90, but being disposed entirely within the cavity130. The hosel bar224extends vertically between the top side52and the sole54of the body42. One of the design elements200is further shown inFIG.14with a layer of the adhesive or bonding agent126disposed intermediate the design element200and the body42. Still further, circular protrusions226are shown extending outward from the inner surface220of the insert wall206. 
The circular protrusions226may be disposed along the inner surface220to aid with acoustics, altering the CT of the club head40, or for another reason. Referring now toFIG.15, another cross-sectional view of the club head40is shown. The angled segments172of the lattice structure136that extend from the inner surface220of the insert wall206are shown intersecting with other angled segments172. Referring specifically to the centrally located angled segments172, these angled segments172are offset from one another, such that the angled segments172are not disposed entirely within the same plane. Various other segments172,174,176are also offset from one another, such that intersecting segments172,174,176are not disposed within the same plane as one another. Still referring toFIG.15, the upper inset region144and the lower inset region146are partially shown, along with the various alignment features156along the alignment platform154. A portion of the central alignment feature158is also shown inFIG.15. The window160is further shown, with portions of the angled segments172being visible through the window160. Referring now toFIG.16, a cross-sectional view taken through the fasteners192and the weights132,134is shown. Various segments172,174,176are shown, which intersect at varying locations. Many of the intersection points182of the segments172,174,176are defined by the rounds, which may define acute, obtuse, or right angles. The fasteners192are further shown being disposed within the fastener apertures190, and retaining the weights132,134between the upper and lower retention features166,168. While the window160is shown being see-through, it is contemplated that an insert or another feature may be positioned within the window160to prevent debris from entering into the cavity130during use of the club head40. Still further, it is contemplated that a polymer or another type of filler material (not shown) may be disposed within the head cavity130such that the material is disposed within the lattice structure136. The material may be included to add weight or modify certain characteristics of the club head40. In some embodiments, the material may be added within the head cavity130to prevent materials such as dirt or other foreign matter from becoming engaged within the head cavity130. Referring toFIG.17, a cross-sectional view taken through the central alignment feature158is shown. The design feature200along the sole54of the club head40is shown with a layer of the adhesive126disposed between the design feature200and the body42. The face insert44is also shown with a layer of the adhesive126disposed between the face insert44and the insert wall206. One of the circular protrusions226is also shown extending from the inner surface220of the insert wall206. Varying other segments172,174,176of the lattice structure136are also shown extending across varying portions of the club head40. The first weight132is visible within the background ofFIG.17. The upper inset region144and the lower inset region146are further shown, along with the upper inset edge148and the lower inset edge152. Now referring toFIGS.18and19, cross-sectional views of the club head40and the post-printed component204are shown, respectively, to illustrate contrasts between the club heads after and before post printing processing, respectively. The various material deposits208are visible inFIG.19, while the material deposits are shown having been removed, i.e., grinded down, drilled out, or otherwise machined inFIG.18. 
Further, the heel-side notch216inFIG.19has become the heel-side aperture216inFIG.18, which is achieved through drilling, grinding, or another type of machining process. The shaft cavity122is also shown having been drilled out or otherwise machined inFIG.18, while the shaft cavity122is shown filled-in inFIG.19. Various other differences are visible between the pre- and post-processing versions of the club head, which may be achieved through a number of manufacturing techniques known to those skilled in the art. For example, certain surfaces and corners are grinded down or otherwise machined to achieve the club head40shown inFIG.18. Referring toFIG.20, a cross-sectional view of the club head40is shown that is taken through a center of the hosel90. The hosel notches112, which do not extend all the way through the hosel90, are shown, the hosel notches112taking various different forms along the hosel90. To that end, the hosel arm114is shown extending centrally through the hosel90, the hosel arm114defining the various hosel notches112that are cut out from the hosel90. The hosel notches112, in some embodiments, may be disposed in a direction that is orthogonal with respect to the orientation shown inFIG.20. The groove120that separates the front region60and the rear region62of the body42is further shown inFIG.20, the groove120being generally v-shaped in cross-section. Referring now toFIGS.21and22, the face insert44is shown in greater detail. As noted above, the face insert44defines the striking surface68, which includes the first surface74, the second surface76, the third surface78, and the fourth surface80. In this particular embodiment, Descending Loft Technology™ is utilized, which comprises four flat surfaces that are milled into the face insert44. In a preferred embodiment, each of the surfaces74,76,78,80descends in loft by 1° from a top of the face insert44to a bottom of the face insert44. As a result of this configuration, when a player's shaft is pressed forward at impact, the ball contact will be higher on the face insert44. The face insert44therefore delivers consistent launch angles from putt to putt, which can lead to more consistent and predictable rolls. Referring now toFIG.23, a horizontal cross-sectional view of the club head40is shown. In this view, the hosel bar224, the first weight132, and the second weight134are shown in cross section. The fasteners192are also shown in cross section, along with the lattice structure136. The rear lattice row178and the front lattice row180are also shown. The front lattice row180and the rear lattice row178define a plurality of the segments172,174,176, which extend in a wide range of directions. In some embodiments, the disposition of the one or more segments172,174,176may be modified to change one or more characteristics of the club head40, such as the CG, CT, weight distribution, or another characteristic. Still further, in some embodiments, additional lattice rows may be added, and the segments172,174,176may be disposed in alternative configurations. As provided inFIG.23, the lattice structure136is generally limited to the medial region86of the club head40, with edge portions slightly crossing over into the heel region88and the toe region84. In some embodiments, the lattice structure136may extend entirely across one or more of the toe region84, the medial region86, and the heel region88. The lattice structure136may also extend only in a region defined between the first weight132and the second weight134. 
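Before turning from the putter-type club head40to other club head types, the descending-loft arrangement of the face insert44described with reference toFIGS.21and22lends itself to a short numerical illustration. In the sketch below, only the 1° step between adjacent surfaces comes from the description above; the absolute loft of the uppermost surface, the band heights, and the function name are hypothetical.

```python
# Illustrative sketch of a descending-loft face: four flat surfaces, each 1 degree
# lower in loft than the surface above it. Absolute lofts and band heights are
# hypothetical; only the 1-degree step per surface comes from the description above.

TOP_SURFACE_LOFT_DEG = 4.0     # hypothetical loft of the uppermost surface
BAND_HEIGHT_MM = 6.0           # hypothetical height of each flat band
NUM_SURFACES = 4

# Lofts of the four surfaces, listed from the bottom band up to the top band.
surface_lofts = [TOP_SURFACE_LOFT_DEG - (NUM_SURFACES - 1 - i) for i in range(NUM_SURFACES)]
# -> [1.0, 2.0, 3.0, 4.0] with the hypothetical 4-degree top surface

def loft_at_contact(height_mm: float) -> float:
    """Loft of the band struck at height_mm above the lower edge of the insert."""
    band = int(height_mm // BAND_HEIGHT_MM)
    band = max(0, min(band, NUM_SURFACES - 1))   # clamp to the lowest/highest band
    return surface_lofts[band]

# A forward press that raises the contact point moves the ball onto a band with
# more loft, offsetting the loft removed by the press.
for h in (3.0, 9.0, 15.0, 21.0):
    print(f"contact {h:4.1f} mm -> band loft {loft_at_contact(h):.1f} deg")
```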
In general, the additive manufacturing principles and advantages of the putter-type club head40and the corresponding post-printed component204may be applied to other types of golf club heads. For example, an iron-type golf club head may be manufactured using an additive manufacturing technique and, in some embodiments, designed to include an internal or an external lattice structure or portion. The incorporation of a lattice structure into an iron-type golf club head via additive manufacturing may provide several manufacturing and performance advantages, in addition to enabling the design of an iron-type golf club head to leverage performance benefits from various iron club head designs. For example, conventional iron-type golf club heads may generally be designed with a muscle back design, a cavity back design, or a hollow construction. Typically, these conventional iron designs are limited in CG movement due to their volume and manufacturing method (e.g., forging, casting, metal injection molding, machined, etc.). Certain players may benefit from playing a mid or large volume club head design that performs like a low volume club head. For example, hollow constructions are typically designed with a club face insert that may only be supported around a periphery of the face insert (e.g., the face insert is generally unsupported over the surface area that contacts a golf ball). Unsupported face inserts may provide inconsistent launch conditions and greater distance variability when compared to an iron design with a supported face (e.g., a muscle back design), but may provide greater distance and forgiveness. Additive manufacturing may allow for the design of a larger volume club head, which defines a higher MOI, with a supported face (e.g., similar to a low volume iron design) and the ability to adjust a CG location by adjusting mass and lattice structure locations. Referring now toFIGS.24-27, an iron-type golf club head300is shown in accordance with the present disclosure that may be formed through an additive manufacturing process. The iron-type golf club head300includes a body302that defines an external skin or shell304that encloses an internal cavity306(seeFIGS.26and27). The external shell304may be formed around an external boundary of the body302(e.g., a boundary that is externally visible). In general, the iron-type golf club head300may be formed by an additive manufacturing process to define the appearance of a hollow construction iron design (e.g., a larger volume when compared to a muscle back design), which creates extra volume (i.e., the internal cavity306) within the external shell304to manipulate club head properties and/or performance. For example, the internal cavity306may be manipulated by adding solid material, a lattice structure, a weight, leaving it hollow, or any combination thereof to create unique CG locations and mass properties to influence face flex and performance. In general, the external shell304may form a thin border around a substantial portion or an entirety of the body302to give the appearance that the iron-type golf club head300is solid when viewed externally. The internal cavity306may be formed by a boundary defined by an inner periphery of the external shell304. The iron-type golf club head300defines a toe side308, a heel side310, a front side312, a top side314, a bottom side316, and a rear side318. The body302includes a toe region320, a medial region322, and a heel region324. 
Referring specifically toFIGS.24and25, the toe region320, the medial region322, and the heel region324are defined by lines or planes P1and P2that extend through the iron-type golf club head300in a sole-topline direction326(e.g., a vertical direction from the perspective ofFIGS.24and25). The toe region320and the heel region324are arranged at laterally-opposing ends of the body302, and the medial region322is arranged laterally between the toe region320and the heel region324. The front side312of the body302may define a front face327that extends along the front side312of the body302from the toe region320, through the medial region322, and into at least a portion of the heel region324. In some embodiments, the front face327may define an entire front surface of the body302that extends laterally from the toe region320, through the medial region322, and into the heel region324to a junction between the front surface and a hosel344extending from the heel region324. In some embodiments, a portion of the front face327defined along the medial region322defines a striking face, which may include a plurality of laterally-extending grooves that are spaced from one another in the sole-topline direction326(seeFIG.39). The iron-type golf club head300defines a topline328extending laterally in a heel-toe direction330(e.g., a horizontal direction from the perspective ofFIGS.24and25) along the top side314, and a sole332extending laterally in the heel-toe direction330along the bottom side316. The toe region320includes a toe portion334of the body302that is defined by a portion of the body302between a distal end of the toe side308and the plane P1. In some embodiments, the plane P1may be defined along a lateral edge of the grooves (not shown) formed in the front side312that is adjacent to the toe side308. In some embodiments, the plane P1may intersect the top side314of the toe portion334at a toe-topline intersection point336along the topline328where the slope of a line tangent to the topline328is approximately zero (e.g., a point where a line tangent to the periphery of the top side314is approximately parallel to the ground at address). In these embodiments, the plane P1may extend through the toe portion334in the sole-topline direction326to a toe-sole intersection point337. The heel region324includes a heel portion338of the body302that is defined by a portion of the body302between a distal end of the heel side310and the plane P2. In some embodiments, the plane P2may be defined along a lateral edge of the grooves (not shown) formed in the front side312that is adjacent to the heel side310. In some embodiments, the plane P2may intersect the top side314at a heel-topline inflection point340(e.g., a point where the periphery of the top side314transitions from concave down to concave up). In these embodiments, the plane P2may extend through the heel portion338in the sole-topline direction326to a heel-sole intersection point342. The heel portion338includes the hosel344that extends from the heel portion338at an angle (e.g., a lie angle formed between a plane parallel to the ground on which the club head rests at address and a center axis defined through the hosel344) in a direction away from the toe portion334. The hosel344defines a hosel cavity346(seeFIG.26) within which a shaft (not shown) may be inserted for coupling to the iron-type golf club head300. In some embodiments, a ferrule (not shown) may abut or be at least partially inserted into the hosel344. 
In some embodiments the hosel cavity346may extend through at least a portion of the hosel344. The topline328may extend along an outer periphery of the top side314of the body302from the heel-topline inflection point340, along the medial region322, to the toe-topline intersection point336. The sole332may extend along a periphery of the bottom side316of the body302from the toe-sole intersection point337, along the medial region322, to the heel-sole intersection point342. With reference toFIGS.26-31, the internal cavity306of the body302includes a lattice structure348arranged within at least a portion of the internal cavity306. For example, in some embodiments, the lattice structure348may extend in the sole-topline direction326along the entire internal cavity306(seeFIGS.26and27). In some embodiments, the lattice structure348may extend in the sole-topline direction326along a portion of the internal cavity306. For example, the lattice structure348may extend from an end of the internal cavity306adjacent to the topline328to a location between the topline328and the sole332(seeFIGS.28and29). Alternatively, the lattice structure348may extend from an end of the internal cavity306adjacent to the sole332to a location between the sole332and the topline328(seeFIGS.30and31). In some embodiments, the lattice structure348may extend laterally in the heel-toe direction330along substantially the entire internal cavity306. For example, the lattice structure348may extend laterally in the heel-toe direction330from the toe region320, through the medial region322, and into at least a portion of the heel region324. In some embodiments, the lattice structure348may extend laterally in the heel-toe direction330a distance defined by a lateral extension of the front face327(e.g., the lattice structure348may extend the same lateral distance as the front face327). In general, the incorporation of the lattice structure348into the internal cavity306defines a lower density relative to a solid material (e.g., solid metal) filling within the internal cavity306of the same volume. Since the lattice structure348defines a lower density compared to a solid material (e.g., solid metal) filling of the same volume, a CG volume ratio defined as a ratio between a volume VLthat the lattice structure348occupies in the internal cavity306to a volume VSthat a solid portion349occupies within the internal cavity306may be altered to move the CG location in the sole-topline direction326. In other words, an orientation of the lattice structure348between the topline328and the sole332(e.g., a distance that the lattice structure348extends over the internal cavity306in the sole-topline direction326) and the volume ratio may define a CG defined by the body302. The orientation of the lattice structure348and the volume ratio may be altered to define a desired CG location for the iron-type golf club head300. With specific reference toFIGS.28-31, the arrangement, dimensions, and volume of the lattice structure348within the internal cavity306may be customized to define a high CG (e.g., a CG arranged closer to the topline328) or a low CG (e.g., a CG arranged closer to the sole332). For example, the iron-type golf club head300illustrated inFIGS.28and29may define a CG point350that is higher (e.g., closer to the topline328) when compared to a CG point352defined by the iron-type golf club head300illustrated inFIGS.30and31. This is due to the differences in the arrangement, dimensions, and volume of the lattice structure348illustrated inFIGS.28-31. 
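Before looking at the specific arrangements ofFIGS.28-31, the effect of the CG volume ratio on CG height can be illustrated with a simple two-region estimate. The sketch below rests on assumptions that are not part of the disclosure: the internal cavity306is treated as a prismatic volume split along the sole-topline direction326, the lattice structure348is modeled as a uniform effective density (its fill fraction times the alloy density), and every numeric input is hypothetical.

```python
# Minimal 1-D sketch: estimated CG height of the internal cavity contents when a
# lattice region and a solid region are stacked along the sole-topline direction.
# All values are hypothetical; the lattice is modeled as a uniform effective
# density equal to (fill fraction) x (alloy density).

ALLOY_DENSITY_G_MM3 = 7.8e-3    # hypothetical steel-like alloy density
CAVITY_HEIGHT_MM = 40.0         # hypothetical cavity extent, sole (0) to topline (40)
CAVITY_AREA_MM2 = 900.0         # hypothetical constant cross-sectional area

def cavity_cg_height(lattice_height_fraction: float,
                     lattice_at_topline: bool,
                     lattice_fill: float = 0.25) -> float:
    """CG height (mm above the sole) of the cavity contents.

    lattice_height_fraction: portion of the cavity height occupied by the lattice.
    lattice_at_topline: True -> lattice adjacent to the topline with solid material
        near the sole (the FIGS. 28-29 style arrangement); False -> the reverse.
    lattice_fill: fraction of the lattice volume that is actually material.
    """
    h_lat = CAVITY_HEIGHT_MM * lattice_height_fraction
    h_sol = CAVITY_HEIGHT_MM - h_lat
    if lattice_at_topline:
        sol_centroid, lat_centroid = h_sol / 2.0, h_sol + h_lat / 2.0
    else:
        lat_centroid, sol_centroid = h_lat / 2.0, h_lat + h_sol / 2.0
    m_lat = ALLOY_DENSITY_G_MM3 * lattice_fill * CAVITY_AREA_MM2 * h_lat
    m_sol = ALLOY_DENSITY_G_MM3 * CAVITY_AREA_MM2 * h_sol
    return (m_lat * lat_centroid + m_sol * sol_centroid) / (m_lat + m_sol)

print("lattice near topline:", round(cavity_cg_height(0.5, True), 1), "mm")   # lower CG
print("lattice near sole:   ", round(cavity_cg_height(0.5, False), 1), "mm")  # higher CG
```

With these hypothetical inputs, placing the lattice toward the topline pulls the CG of the cavity contents toward the sole, which is the behavior described next forFIGS.28and29.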
For example, arranging the lattice structure348adjacent to the topline328(seeFIGS.28and29)and filling a remainder of the internal cavity306adjacent to the sole332with the solid portion349(e.g., solid metal material that is formed layer by layer) provides more high-density material adjacent to the sole332and, thereby, lowers the CG of the iron-type golf club head300. Conversely, arranging the lattice structure348adjacent to the sole332(seeFIGS.30and31)and filling a remainder of the internal cavity306adjacent to the topline328with the solid portion349provides more high-density material adjacent to the topline328and, thereby, raises the CG of the iron-type golf club head300. The incorporation of the lattice structure348in the internal cavity306of the iron-type golf club head300enables the CG location to be manipulated to any location between a CG defined by a completely solid body (e.g., the internal cavity306is completely filled with solid material) and a CG defined by a completely hollow body (e.g., the internal cavity306is completely hollow or devoid of material). It should be appreciated that the volumes defined by the lattice structure348(VL) and the solid portion349(VS) of the internal cavity306do not need to be discretely defined along the sole-topline direction326. That is, in some embodiments, the lattice structure348may include one or more solid portions349arranged on vertically-opposing sides thereof. For example, the lattice structure348may not originate from an internal side of the external shell304adjacent to the top side314or an internal side of the external shell304adjacent to the bottom side316. Rather, the internal cavity306may include solid portions349that extend from the top and bottom internal sides of the external shell304that form the internal cavity306and the lattice structure348may be arranged between the solid portions349. Alternatively, the internal cavity306may include one or more lattice structures348that are separated in the sole-topline direction326with the solid portion349arranged therebetween. In some embodiments, the variability and control over the CG location provided by the incorporation of the lattice structure348into the iron-type golf club head300may be leveraged when designing and manufacturing a set of iron-type golf club heads. For example, a set of irons may include long irons (e.g., 1-iron through 5-iron), mid irons (e.g., 6-iron through 9-iron), and short irons (e.g., pitching wedge through lob wedge), and it may be desirable to define varying CG locations for each iron within a set. In some embodiments, the various types of irons within a set may define varying CG locations (e.g., long irons define a low CG, mid irons define a middle CG, and short irons define a high CG, or another configuration). In any case, a set of iron-type golf club heads according to the present disclosure may include at least two iron-type golf club heads manufactured via an additive manufacturing process with a lattice structure incorporated in both of the iron-type golf club heads at varying CG volume ratios to define different CG locations along the sole-topline direction326for each of the iron-type golf club heads produced. In some embodiments, a set of iron-type golf club heads according to the present disclosure may include a first golf club head and a second golf club head. The first golf club head may define a first orientation of a first lattice structure between a sole and a topline and a first volume ratio between a first lattice volume and a first solid volume. 
The second golf club head may define a second orientation of a second lattice structure between a sole and a topline and a second volume ratio between a second lattice volume and a second solid volume. In some embodiments, the second orientation may be different than the first orientation to define a different CG between the first golf club head and the second golf club head. In some embodiments, the second volume ratio may be different than the first volume ratio to define a different CG between the first golf club head and the second golf club head. In some embodiments, the second orientation may be different than the first orientation and the second volume ratio may be different than the first volume ratio to define a different CG between the first golf club head and the second golf club head. In addition to the ability of the lattice structure348to manipulate the CG location of the iron-type golf club head300, a stiffness defined along the external shell304in the regions occupied by the lattice structure348may be maintained, for example, similar to the stiffness support provided by the solid portion349. For example, in some embodiments, the front face327of the body302is supported by (e.g., in engagement with) one of the lattice structure348and the solid portion349along an entire surface area thereof, which prevents local areas of non-uniform or reduced stiffness. In this way, for example, the incorporation of the lattice structure348into the iron-type golf club head300enables the iron-type golf club head300to provide the advantages of various iron designs to a user. For example, the iron-type golf club head300may provide the consistent launch conditions and distance variability of a low volume (e.g., muscle back) iron design and the increased MOI of a mid or high volume iron design. In some embodiments, the iron-type golf club head300may be designed to provide enhanced distance (e.g., a utility iron) and may include a lattice structure that is attached to the body but does not support the front face or a face insert coupled to the body. For example, with reference toFIGS.32and33, the iron-type golf club head300may include a face insert354that is coupled to the front side312of the body302and attached (e.g., via welding) around a periphery of the front side312. When the face insert354is coupled to the body302, the internal cavity306may be enclosed by the external shell304and the face insert354, and the lattice structure348may be enclosed within the internal cavity306. In the illustrated embodiment, the lattice structure348extends from the internal surfaces of the external shell304. For example, the lattice structure348may be attached to or supported by the internal surfaces of the external shell304on the body302adjacent to the toe side308, the heel side310, the top side314, the bottom side316, and the rear side318. The lattice structure348may be interrupted by the solid portion349that, in the illustrated embodiment, extends along the bottom side316from the toe region320to a location between the toe region320and the heel region324(seeFIG.32). When the face insert354is attached to the body302, a gap356may be formed between a termination plane T defined by the lattice structure348(e.g., a plane generally parallel to the face insert354along which the lattice structure348terminates) and the face insert354. In other words, the lattice structure348may be set back from the face insert354leaving the face insert354unsupported by the lattice structure348. 
The design and construction of the iron-type golf club head300illustrated inFIGS.32and33provides support for the face insert354only around the periphery thereof and creates a stiffer body structure, which allows the face insert354to be thinner, thereby enhancing performance (e.g., increased distance). In some embodiments, the face insert354may be manufactured via an additive manufacturing process. As described herein, in some embodiments, an orientation of a golf club head relative to a build plane during an additive manufacturing process may improve the quality and performance of the green part of the final post-sintering product. In configurations where a face insert is not planar and includes, for example, a portion of a sole integrated with a striking surface (i.e., an L-cup face insert), it may be beneficial to orient the face insert such that the front face or striking surface is rotationally offset from the build plane. In this way, for example, the layer lines formed during the additive manufacturing process may not pass through the edge where the front face transitions to the sole (e.g., a leading edge). That is, if the face insert were manufactured with the front face oriented parallel to the build plane, a layer line may pass through the leading edge of the face insert, which may cause defects in the green part and/or the post-sintered part. This is avoided by printing the face insert with the front face rotationally offset relative to the build plane. In addition, the rotational orientation of the face insert relative to the build plane may be tailored to maximize efficiency of the additive manufacturing process (i.e., arrange as many face inserts within a given build area to manufacture as many face inserts as possible during a build). With reference toFIGS.34and35, in some embodiments, an aperture358may be formed, for example, via additive manufacturing, that extends laterally into and through the solid portion349. The aperture358may extend laterally from the toe side308of the solid portion349to a location between the toe side308and an end of the solid portion349. In some embodiments, the aperture358may be filled with a weight bar360(e.g., tungsten). The incorporation of the weight bar360may aid in lowering the CG location of the iron-type golf club head300along the sole-topline direction326. In some embodiments, the iron-type golf club head300may be manufactured to enable the weight bar360to be attached or secured within the aperture358via a sintering process. For example, the weight bar360may be manufactured with dimensions that are a predetermined percentage larger than the factory finish dimensions. In some embodiments, the predetermined percentage may be about 10%, or about 15%, or about 20%, or about 25%, or about 30% larger than the factory finish dimensions of the iron-type golf club head. In some embodiments, the predetermined percentage may be between about 10% and about 30%, or between about 15% to about 25%, or between about 16% and about 20%. In some embodiments, the weight bar360may be manufactured via an additive manufacturing process. In some embodiments, the weight bar360may be formed by a metal injection molding process. In any case, once the weight bar360is initially manufactured with dimensions that are the predetermined percentage larger than the factory finish dimensions, the weight bar360may go through a sintering process. During the sintering process, the weight bar360may shrink to at least one of the factory finish dimensions. 
For example, the weight bar360may shrink to a factory finish diameter, but may still define a length that is longer than a factory finish length to enable the weight bar360to be cut to length and conform to the outer profile of the body302during post-processing. Similar to the weight bar360, the body302of the iron-type golf club head300may be manufactured with dimensions that are a predetermined body percentage larger than the factory finish dimensions. In some embodiments, the body302may be manufactured via a binder jetting process and the predetermined body percentage may be about 10%, or about 15%, or about 20%, or about 25%, or about 30% larger than the factory finish dimensions of the iron-type golf club head. In some embodiments, the predetermined percentage may be between about 10% and about 30%, or between about 15% and about 25%, or between about 16% and about 20%. Once the body302is initially manufactured with dimensions that are larger than the factory finish dimensions, the body302may go through a sintering process. Prior to the sintering process, the post-sintered weight bar360may be inserted into the body302at the predefined location (e.g., the aperture358). The sintering process may shrink the body302to the factory finish dimensions. During the sintering process, the body302may shrink around the weight bar360and form an interference fit between the body302and the weight bar360, thereby securing the weight bar360within the body302without requiring any secondary adhesion techniques (e.g., welding, adhesive, etc.). By first sintering the weight bar360and then sintering the body302with the post-sintered weight bar360installed within the body302, the iron-type golf club head300may naturally form an interference fit between the body302and the weight bar360, which secures the weight bar360within the body302. For example, once the weight bar360is sintered, it may be substantially prevented from further shrinkage, which allows the secondary sintering of the iron-type golf club head300to shrink around the weight bar360and form a natural interference fit therebetween. In addition, this manufacturing process avoids issues that may arise due to sintering a golf club head that includes metals with different densities. For example, if the weight bar360and the body302were sintered for the first time together, the weight bar360would shrink more than the body302due to increased density relative to the body302. As such, the weight bar360may not fit within the body302post-sintering, adding inefficiencies to the manufacture of the iron-type golf club head300. Further, the staged sintering process avoids issues that arise due to different metals requiring different sintering temperatures. In general, this staged sintering process may be used to couple a weight bar to a body of a golf club head as long as the weight bar and a cavity within which the weight bar is to be arranged define a similar or the same shape. In some embodiments, rather than a weight insert, the density of a golf club head according to the present disclosure may be controlled by the additive manufacturing process. For example, in a DMLS process, a speed at which the laser translates over a component and creates a layer is inversely related to a density of the metal formed. As such, a speed at which the laser translates over selective portions when manufacturing a golf club head layer by layer may be controlled to define a desired density profile over the entire volume of the golf club head. 
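As a purely illustrative sketch of how such a density profile might be recorded, the snippet below maps named regions of a club head to relative laser scan speeds and to a toy estimate of the resulting relative density. The region names, the numeric values, and the simple inverse relationship are assumptions for the example; the disclosure states only that slowing the laser over selected portions yields locally denser material.

```python
# Purely illustrative: a region-to-scan-speed table for a DMLS-style build, with
# relative density modeled as rising when the laser is slowed. The region names,
# speeds, and the simple 1/speed model are assumptions, not values from the disclosure.

NOMINAL_SPEED_MM_S = 1000.0   # hypothetical baseline scan speed
NOMINAL_DENSITY = 0.98        # relative density assumed at the baseline speed

# Hypothetical per-region scan-speed multipliers (1.0 = baseline speed).
region_speed_factor = {
    "external_shell": 1.0,
    "lattice_struts": 1.0,
    "sole_weight_pad": 0.6,   # slowed down -> locally denser, pulling the CG toward the sole
}

def relative_density(speed_factor: float) -> float:
    """Toy model: density rises as the laser slows, capped at full density."""
    return min(1.0, NOMINAL_DENSITY / speed_factor)

for region, factor in region_speed_factor.items():
    speed = NOMINAL_SPEED_MM_S * factor
    print(f"{region:16s} {speed:7.1f} mm/s -> relative density {relative_density(factor):.2f}")
```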
In the embodiment ofFIGS.34and35, the laser may be slowed down when traversing over portions of the iron-type golf club head300within the solid portion349. In this way, for example, the solid portion349may include at least a portion thereof that defines a higher density and aids in lowering the CG of the iron-type golf club head300, similar to the weight bar360. In some embodiments, the iron-type golf club head300may be designed to incorporate a lattice structure that is attached only behind the front face and does not support a remainder of the body. With reference toFIG.36, the iron-type golf club head300may include the lattice structure348arranged on a rear surface of the front face327. A thickness (e.g., a distance that the lattice structure348extends away from the front face327along a direction parallel to a normal defined by the front face327) defined by the lattice structure348may be dimensioned such that the lattice structure348only engages the front face327and the remainder of the body302may be unsupported by the lattice structure348. In other words, a gap362may be arranged between the lattice structure348and a rear portion364of the body302(e.g., a portion of the body302arranged rearward of the front face327), which may be fabricated from solid material (e.g., solid metal that is formed layer by layer), along an entire area defined by the front face327. In this way, for example, the stiffness of the front face327may be increased (e.g., when compared to a front face/face insert without a lattice structure connected thereto). In some embodiments, for example, the lattice structure348may be arranged over a portion of the rear surface of the front face327, rather than an entirety of the rear surface. The increased stiffness provided by the lattice structure348being attached to the front face327may provide more consistent launch conditions and reduced distance variability similar to a low volume (e.g., muscle back) iron design. In addition, a shape, size, and mass distribution in the rear portion364may be easily tailored or customized via additive manufacturing to allow for variations in CG location, MOI, etc. As described herein, the size, shape, volume, and arrangement of the lattice structure348within the body302of the iron-type golf club head300may be controlled or designed to provide stiffness to selective portions of the body302, the front face327, and/or the face insert354. With the lattice structure348acting as a local stiffening structure, the location of the lattice structure348within the body302may directly impact performance of the iron-type golf club head300(e.g., sound, feel, ball speed, distance variability, launch conditions, etc.). In some embodiments, the stiffness differences in the front face327provided by the support or lack thereof by the lattice structure348may be leveraged to produce a set of iron-type golf club heads with varying face stiffness. Similar to conventional iron-type golf club sets that transition from cavity back/hollow construction to muscle back design as they transition from long irons to short irons, the design of the iron-type golf club head300may be varied using additive manufacturing to provide varying performance characteristics as the iron-type golf club heads transition from long irons to short irons. 
For example, a set of iron-type golf club heads according to the present disclosure may include at least two iron-type golf club heads that transition from a front face or face insert that is not supported by a lattice structure to a front face or face insert that is at least partially supported by a lattice structure to leverage the performance benefits of these different designs described herein in a single set of iron-type golf club heads. As described herein, there are several performance and design advantages to incorporating a lattice structure into an iron-type golf club head, or another type of golf club head, via additive manufacturing. In order to effectively manufacture the iron-type golf club head according to the present disclosure certain design aspects should be considered. For example, many additive manufacturing processes utilize a metal powder bed to produce components layer by layer, as described herein. Similar to the putter-type golf club head40, iron-type golf club heads may be required to be de-caked of residual metal powder that remains after the initial scavenging of the printed component from the powder bed. In general, an iron-type golf club head according to the present disclosure may define a flow path that extends through the body to allow a fluid (e.g., gas) to be forced through or sucked out of the body. In some embodiments, the flow path may be formed via apertures or slots formed in the body and may extend through a lattice structure. Referring toFIGS.37and38, in some embodiments, the body302may define a flow path366that extends along the internal cavity306and the hosel cavity346. Specifically, the lattice structure348may be formed by a plurality of segments that form a plurality of cutouts, or absences of material, between the plurality of segments. In this way, for example, fluid flow may occur through the lattice structure348. In some embodiments, the lattice structure348may include shapes or surfaces that define one or more cutouts, or absences of material, to enable fluid flow therethrough. The internal cavity306, including the lattice structure348formed therein, may be in fluid communication with the hosel cavity346and at least one other aperture or slot formed in the body302. For example, with specific reference toFIG.38, a slot368may be formed in the rear side318of the body302that extends laterally across the body302. In this embodiment, the flow path366may extend from the hosel cavity346, along the lattice structure348, and through the slot368to define a flow path that extends through the body302. In this way, for example, pressurized fluid (e.g., gas), a vacuum, a brush, a tool, or gravity may be applied to the flow path366to aid in removing powdered metal and excess material from the additive manufacturing process (i.e., de-caking). In some embodiments, the iron-type golf club head300may not include the slot368and, rather, may include an aperture (not shown) formed, for example, in the toe portion334. The aperture (not shown) formed in the toe portion334may extend into the internal cavity306to provide fluid communication with the lattice structure348. The aperture (not shown) may be utilized after manufacturing the body302via an additive manufacturing process to provide compressed fluid (e.g., gas) or a vacuum to the flow path366to aid in removing powdered metal and excess material. 
After leveraging the flow path366for the de-caking process, the aperture may be plugged, for example, by a screw or a plug to prevent debris from entering the internal cavity306during use. In some applications, the arrangement and number of openings that form a flow path may be varied dependent on the type of additive manufacturing process being used to form a golf club head. For example, in an additive manufacturing process where the manufactured part defines a density that is close to a solid metal part (e.g., SLM, DMLS, etc.), the number of openings in a flow path may be reduced when compared to an additive manufacturing process where the manufactured part defines a lower density and higher porosity (e.g., binder jetting). The lower density and higher porosity defined by the green part after a binder jetting process may be susceptible to damage if high-pressure fluid is used to remove excess metal powder from the part. For example, blowing the metal powder over the green part after a binder jetting process may act like a sand blaster and affect the quality of the green part. For these reasons, it may be beneficial to include at least two openings in a flow path for a golf club head manufactured using, for example, a binder jetting process. In any case, a golf club head manufactured using an additive manufacturing process may be designed to include at least one opening into a flow path from which excess material may be removed from the manufactured part. As described herein, an iron-type golf club head according to the present disclosure may be manufactured using a binder jetting process, an SLM process, a DMLS additive manufacturing process, or another direct laser metal melting process. In DMLS, for example, support structures are leveraged to attach the component being manufactured to a build plate and to protect against warping/distortion that may occur due to the high temperatures utilized during the additive manufacturing process. In some instances, when a lattice structure is created by an additive manufacturing process (e.g., DMLS), it may need support structures during printing. It is advantageous to avoid creating support structures because they are difficult to remove, especially from internal cavities and overhangs. The necessity for support structures is dependent on the additive manufacturing process, orientation of the lattice structure, and design of the lattice within the club head. In some embodiments, a golf club head manufactured using an additive manufacturing process according to the present disclosure may include a lattice structure that is self-supporting and does not require internal supports to be created. In general, print orientation (i.e., the orientation of a build plane along which the golf club head is formed layer by layer) relative to lattice structure design can ensure that the lattice structure is self-supporting. Referring toFIGS.39and40, a second plane or build plane B may be defined as a plane along which the iron-type golf club head300is printed layer by layer during the additive manufacturing process. In the illustrated embodiment, the build plane B is rotationally offset from a first plane or ground plane G defined by the body302, which is arranged parallel to the ground on which the iron-type golf club head300may be placed at address.
As described herein, when manufacturing a golf club head via an additive manufacturing process, it is beneficial to ensure that the layer lines created during the additive manufacturing process avoid sharp surface interfaces (e.g., corners, edges, etc.) that fall along layer line edges. To leverage the benefits of avoiding sharp surface interfaces that fall along layer line edges, the build plane B may be rotationally offset from the ground plane G, when viewed from the toe side308(seeFIG.39) or the heel side310, which results in the iron-type golf club head300being printed layer by layer at an angle that is offset by about 30 degrees with respect to the ground plane G (e.g., 30 degrees clockwise from the perspective ofFIG.39). In some embodiments, the iron-type golf club head300may be printed along a build plane B that is offset at an angle of between about 0 degrees and about 175 degrees, or between about 5 degrees and about 160 degrees, or between about 5 degrees and about 140 degrees, or between about 5 degrees and about 120 degrees, or between about 5 degrees and about 90 degrees, or between about 5 degrees and about 60 degrees, or between about 10 degrees and about 50 degrees, or between about 20 degrees and about 40 degrees with respect to the ground plane G. The lattice structure348may define one or more lattice build angles relative to the build plane B. Each of the lattice build angles is defined along a common plane defined by the lattice structure348. For example, the lattice structure348may be formed by a plurality of segments370that extend from an internal boundary of the internal cavity306to either another internal boundary or the solid portion349. In the illustrated embodiment, the internal cavity306may be formed by an internal sole surface372, an internal rear surface374, an internal front surface376, and an internal top surface378. The internal top surface378is formed by the interface between the solid portion349and the lattice structure348. In the illustrated embodiment, the lattice structure348defines a plurality of planes along which the plurality of segments370extend. With specific reference toFIG.39, the lattice structure348defines a lattice plane L1that forms a lattice build angle A1with respect to the build plane B, and a lattice plane L2that forms a lattice build angle A2with respect to the build plane B. Each of the lattice planes L1, L2is formed by a portion of the plurality of segments370that are aligned and oriented at the respective lattice build angle A1, A2relative to the build plane B. The lattice structure348includes a plurality of portions that align with the lattice planes L1, L2, which are spaced from one another a distance that is governed by the length and orientation of the plurality of segments370within the lattice structure348. With specific reference toFIG.40, the lattice structure348defines a lattice plane L3that forms a lattice build angle A3with respect to the build plane B, and a lattice plane L4that forms a lattice build angle A4with respect to the build plane B. Each of the lattice planes L3, L4is formed by a portion of the plurality of segments370that are aligned and oriented at the respective lattice build angle A3, A4relative to the build plane B. The lattice structure348includes a plurality of portions that align with the lattice planes L3, L4, which are spaced from one another a distance that is governed by the length and orientation of the plurality of segments370within the lattice structure348.
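Lattice build angles of the kind described above can be checked numerically during design. The following minimal Python sketch uses hypothetical segment directions and an arbitrary build plane normal; it computes the angle between a segment direction and the build plane B and compares it against a caller-supplied minimum angle, with the process-dependent threshold discussed below.

import math

def build_angle_deg(segment_dir, build_plane_normal):
    # The angle between a direction vector and a plane is 90 degrees minus its
    # angle to the plane normal, i.e. asin(|d . n| / (|d| |n|)).
    dot = sum(d * n for d, n in zip(segment_dir, build_plane_normal))
    mag_d = math.sqrt(sum(d * d for d in segment_dir))
    mag_n = math.sqrt(sum(n * n for n in build_plane_normal))
    return math.degrees(math.asin(abs(dot) / (mag_d * mag_n)))

def lattice_is_self_supporting(segment_dirs, build_plane_normal, min_angle_deg):
    # True if every segment meets the build plane at or above min_angle_deg.
    return all(build_angle_deg(d, build_plane_normal) >= min_angle_deg
               for d in segment_dirs)

# Hypothetical example: build plane taken as the y-z plane (unit normal +x),
# two segment directions at roughly 45 and 20 degrees to that plane.
normal = (1.0, 0.0, 0.0)
segments = [(1.0, 1.0, 0.0), (math.tan(math.radians(20.0)), 1.0, 0.0)]
print([round(build_angle_deg(s, normal), 1) for s in segments])  # [45.0, 20.0]
print(lattice_is_self_supporting(segments, normal, 30.0))        # False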
In general, the sequential spacing and intersection between each of the lattice planes L1, L2, L3, L4forms the geometry of the lattice structure348. Through testing, it has been determined that when the build plane B is oriented parallel to a normal extending from the front face327, the lattice structure348is self-supporting with lattice build angles A1, A2, A3, A4that are each greater than or equal to 30 degrees. That is, if each of the lattice build angles A1, A2, A3, A4is greater than or equal to 30 degrees, the lattice structure348may be additively manufactured without any additional support structures, for example, during DMLS. In this way, for example, the need to remove support structures on the lattice structure348during the post-processing stages may not be required, which significantly improves manufacturing efficiency, costs, and time. In the illustrated embodiment, each of the lattice planes L1, L2, L3, L4extend in varying directions and form a plurality of intersection points380where one or more of the plurality of segments370that form the lattice planes L1, L2, L3, L4intersect. In the illustrated embodiment, each of the intersection points380may be formed by the intersection of six of the segments370extending from the intersection point380in a different direction (seeFIG.41), except at locations where the intersection point380is interrupted by one or more of the internal sole surface372, the internal rear surface374, the internal front surface376, and the internal top surface378(or another surface in engagement with the lattice structure348), or a termination plane along which the lattice structure348terminates prior to engaging a surface (seeFIG.33). With specific reference toFIG.41, in one embodiment, the lattice structure348may define a unit cell382that is formed by a cutout, air space, or absence of material defined between interconnected intersection points380that occur along a common plane. For example, in the illustrated embodiment, the lattice structure348may define square-, rectangular-, or diamond-shaped unit cells382. This geometry defined by the unit cells382may be similar to the lattice structure348illustrated inFIGS.26-40. However, a lattice structure according to the present disclosure is not limited to this shape of unit cell and alternative geometries may be utilized. For example, as described herein, the segments172,174,176of the lattice structure136define generally triangular-shaped cutouts or air spaces. Alternatively or additionally, in some embodiments, at least a portion of the unit cells in a lattice structure according to the present disclosure may define a pentagonal shape, a hexagonal shape, or any other polygonal shape. In some embodiments, a unit cell defined by a lattice structure according to the present disclosure can be formed by interconnected shapes (e.g., ovals, circles, or another geometric shape) with varying orientation to form a repeated pattern, or unit cell. In some embodiments, a lattice structure according to the present disclosure may be formed by a differential geometry structure. For example, a lattice structure according to the present disclosure may be formed by a gyroid structure that includes a plurality of interconnected, periodic minimal surfaces. The gyroid structure may define a unit cell that is repeated in a pattern over a desired volume to form a lattice structure according to the present disclosure. 
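The gyroid mentioned above is commonly approximated by the level set sin(x)cos(y) + sin(y)cos(z) + sin(z)cos(x) = 0. The following minimal Python sketch uses that standard approximation with placeholder cell size and wall thickness values (not tied to any particular club head) to voxelize a small volume and estimate the solid fraction of a sheet-gyroid lattice.

import math

def gyroid_value(x, y, z, cell_size):
    # Standard level-set approximation of the gyroid minimal surface, scaled
    # so that one unit cell repeats every cell_size length units.
    k = 2.0 * math.pi / cell_size
    return (math.sin(k * x) * math.cos(k * y)
            + math.sin(k * y) * math.cos(k * z)
            + math.sin(k * z) * math.cos(k * x))

def is_solid(x, y, z, cell_size=5.0, band=0.6):
    # Treat points near the zero level set as solid material, which yields a
    # sheet-gyroid lattice; widening the band thickens the walls.
    return abs(gyroid_value(x, y, z, cell_size)) < band

n, size = 20, 10.0  # sample a hypothetical 10 mm cube on a 20 x 20 x 20 grid
solid = sum(is_solid(ix * size / n, iy * size / n, iz * size / n)
            for ix in range(n) for iy in range(n) for iz in range(n))
print(f"approximate solid volume fraction: {solid / n**3:.2f}")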
In general, the use of a differential geometry structure (e.g., a gyroid) may reduce stress concentrations formed along the lattice structure due to the reduction in sharp edges formed on the lattice structure, which may provide similar advantages as adding curvature, described herein with reference to the lattice structure136. In some embodiments, a lattice structure according to the present disclosure may define a tublane structure or a plate-lattice structure. Regardless of the design and properties of the lattice structure, a golf club head according to the present disclosure may be manufactured via additive manufacturing to include a lattice structure formed integrally with at least a portion of a body, a front face, and/or a face insert of the golf club head. During manufacture, when the build plane is oriented parallel to the front face normal, each portion of the lattice structure may be printed at an angle greater than or equal to 30 degrees relative to the build plane to ensure that the lattice structure is self-supporting and does not require support structures. In some embodiments, a lattice structure according to the present disclosure may define a hybrid or variable structure that varies in one or more of unit cell type, unit cell geometry, unit cell size, segment length, segment thickness, segment volume, and unit cell density at one or more locations along the lattice structure. For example, in embodiments of the iron-type golf club head300where the lattice structure348is connected to the front face327, the lattice structure348may be varied behind the front face327to improve or maximize ball speed over the front face327, more specifically where players actually impact the golf ball (e.g., lower (closer to the sole) than a face center point). For example, the lattice structure348may vary in a thickness, size, and/or shape of the segments370, a density of the unit cells382, and/or a shape or type of the unit cells382at various locations behind the front face327. As described herein, adding curvature or removing sharp edges within geometry that is formed through additive manufacturing solves several issues, including: aiding de-caking (e.g., protecting against green part destruction when blowing air against a lattice structure), reducing sintering drag, and avoiding stress concentrations in a lattice structure. In the embodiments where a lattice structure according to the present disclosure is formed via a plurality of segments, the intersection points may be curved at each intersection between the segments. For example,FIG.42illustrates an embodiment of an intersection point380taken along a cross-sectional plane. As illustrated inFIG.42, the lattice structure348may define rounded edges at each intersection between the segments370forming the intersection point380. That is, the intersecting edge formed between each of the intersecting segments370may be rounded to define a curvature or a radius of curvature, rather than culminating at a point. In addition to the intersection points380, each edge of the segments370formed in the lattice structure348may define a rounded or curved edge. In general, a lattice structure according to the present disclosure may define rounded or curved edges along, for example, edges of intersection points, edges of segments forming the lattice structure, and any other edges formed along the lattice structure to provide the manufacturing and performance benefits described herein.
In the embodiments ofFIGS.26-40, the lattice structure348is arranged internally with respect to the body302(e.g., at least partially within the internal cavity306). In some embodiments, a golf club head may be designed to include an externally accessible/visible lattice structure. For example, a golf club head according to the present disclosure may include at least one external face that is formed at least partially by a lattice structure. As described herein, removing residual metal powder may be required following the manufacture of a golf club head via an additive manufacturing process. In some embodiments, a golf club head according to the present disclosure may include apertures and/or define a flow path to enable the removal of excess metal powder. Another solution to aiding in removal of metal powder from a 3D printed golf club head may be to arrange the lattice structure such that it is externally accessible/visible. In some embodiments, a depth that an externally-facing lattice structure extends into a body of the golf club head and/or a unit cell size (e.g., volume or surface area) of the lattice structure may be limited to ensure efficient de-caking of residual metal powder present after manufacturing the golf club head via an additive manufacturing process. Referring toFIGS.43and44, an iron-type golf club head400is shown in accordance with the present disclosure that may be formed through an additive manufacturing process. The iron-type golf club head400includes a body402and an externally-facing lattice structure404formed on the body402. In general, the lattice structure404may be formed along a portion of an externally-facing face or surface of the body402in place of solid material, which reduces a weight of the iron-type golf club head400and maintains stiffness (e.g., similar to the stiffness provided by solid material). The iron-type golf club head400defines a toe side408, a heel side410, a front side412, a top side414, a bottom side416, and a rear side418. The body402includes a toe region420, a medial region422, and a heel region424. The toe region420, the medial region422, and the heel region424are defined by lines or planes P1and P2that extend through the iron-type golf club head400in a sole-topline direction426(e.g., a vertical direction from the perspective ofFIG.43). The toe region420and the heel region424are arranged at laterally-opposing ends of the body402, and the medial region422is arranged laterally between the toe region420and the heel region424. The front side412of the body402may define a front face427that extends along the front side412of the body402from the toe region420, through the medial region422, and into to at least a portion of the heel region424. In some embodiments, the front face427may define an entire front surface of the body402that extends laterally from the toe region420, through the medial region422, and into the heel region424to a junction between the front surface and a hosel444extending from the heel region424. In some embodiments, a portion of the front face427defined along the medial region422defines a striking face, which may include a plurality of laterally-extending grooves (not shown) that are spaced from one another in the sole-topline direction426. The iron-type golf club head400defines a topline428extending laterally in a heel-toe direction430(e.g., a horizontal direction from the perspective ofFIG.43) along the top side414, and a sole432extending laterally in the heel-toe direction430along the bottom side416. 
The toe region420includes a toe portion434of the body402that is defined by a portion of the body402between a distal end of the toe side408and the plane P1. In some embodiments, the plane P1may be defined along a lateral edge of the grooves (not shown) formed in the front side412that is adjacent to the toe side408. In some embodiments, the plane P1may intersect the top side414of the toe portion434at a toe-topline intersection point436along the topline428where the slope of a line tangent to the topline428is approximately zero (e.g., a point where a line tangent to the periphery of the top side414is approximately parallel to the ground at address). In these embodiments, the plane P1may extend through the toe portion434in the sole-topline direction426to a toe-sole intersection point437. The heel region424includes a heel portion438of the body402that is defined by a portion of the body402between a distal end of the heel side410and the plane P2. In some embodiments, the plane P2may be defined along a lateral edge of the grooves (not shown) formed in the front side412that is adjacent to the heel side410. In some embodiments, the plane P2may intersect the top side414at a heel-topline inflection point440(e.g., a point where the periphery of the top side414transitions from concave down to concave up). In these embodiments, the plane P2may extend through the heel portion438in the sole-topline direction426to a heel-sole intersection point442. The heel portion438includes the hosel444that extends from the heel portion438at an angle (e.g., a lie angle formed between a plane parallel to the ground on which the club head rests at address and a center axis defined through the hosel444) in a direction away from the toe portion434. The hosel444defines a hosel cavity (not shown) within which a shaft (not shown) may be inserted for coupling to the iron-type golf club head400. In some embodiments, a ferrule (not shown) may abut or be at least partially inserted into the hosel444. The topline428may extend along an outer periphery of the top side414from the heel-topline inflection point440, along the medial region422, to the toe-topline intersection point436. The sole432may extend along a periphery of the bottom side416from the toe-sole intersection point437, along the medial region422, to the heel-sole intersection point442. The lattice structure404of the iron-type golf club head400may be designed and manufactured with similar properties and characteristics as the lattice structures disclosed herein. In the illustrated embodiment, the lattice structure404may define at least a portion of a rear face446of the body402. The rear face446may extend over at least a portion of the rear side418of the iron-type golf club head400. For example, the lattice structure404may extend laterally (e.g., in the heel-toe direction430) over the medial region422and at least a portion of each of the toe region420and the heel region424. The lattice structure404may extend along the sole-topline direction426between a rear-topline edge448and a rear-sole edge450. Referring specifically toFIG.44, the lattice structure404may define an external border452of the body402along the rear face446. In some embodiments, the external border452may define an externally-facing border of the lattice structure404(e.g., a border of the lattice structure404that is externally visible/accessible). 
The lattice structure404may define a thickness454that the lattice structure404extends into the body402, for example, in a direction arranged generally normal to a rear surface455defined by the body402. In some embodiments, the thickness454may be about 5 millimeters. In some embodiments, the thickness454may be between about 4 millimeters and about 6 millimeters, or between about 3 millimeters and about 7 millimeters. In some embodiments, the thickness454may be less than or equal to about 5 millimeters. In some embodiments, the thickness454defined by the lattice structure404, in combination with the lattice structure404defining the external border452of the body402, may enable the lattice structure404to be easily de-caked after printing of the iron-type golf club head400. In the illustrated embodiment, the lattice structure404may include unit cells that define a generally triangular shape. In some embodiments, the lattice structure404may define unit cells of any shape or design according to the present disclosure. In some embodiments, a size and shape of the unit cells defined by the lattice structure404may also be customized to ensure that the de-caking process occurs efficiently. Referring toFIG.45, after the iron-type golf club head400is manufactured via an additive manufacturing process, the excess metal powder may be easily removed from the externally-accessible lattice structure404, and the lattice structure404may be filled with a filler material456. In some embodiments, the filler material456may be a lightweight (e.g., low density) epoxy or resin. In some embodiments, the filler material456may be substantially transparent or translucent. Filling the lattice structure404with the filler material456efficiently prevents debris from collecting in the lattice structure404and, in some embodiments, may maintain the external visibility of at least the external border452of the lattice structure404. As described herein, incorporating a lattice structure into a golf club head provides several manufacturing, performance, and customization advantages. In some embodiments, a lattice structure may be utilized to efficiently distribute the mass throughout a golf club head. For example, in conventional golf club heads, solid material present above a horizontal plane (e.g., a plane that extends in the heel-toe direction) defined by the CG is inefficient, since it limits movement of the CG. In some embodiments, a golf club head according to the present disclosure may replace the solid material rearward of the front face and above a CG plane defined by the golf club head with a lattice structure. In this way, the stiffness provided by the solid material may be maintained by the lattice structure, and the replacement of the solid material with the lattice structure reduces a density in the replaced areas, which allows the saved mass to be used elsewhere on the golf club head to improve performance. Referring toFIG.46, an iron-type golf club head500is shown in accordance with the present disclosure. The iron-type golf club head500may define a cavity back design and may be fabricated from solid material (e.g., solid metal). The iron-type golf club head500defines a solid CG plane C (i.e., a plane that extends in a heel-toe direction that aligns with a CG defined by a solid configuration of the iron-type golf club head500) that extends laterally across a body502of the iron-type golf club head500.
According to embodiments of the present invention, the solid material arranged rearward of a front face (e.g., a striking face) and above (e.g., upward from the perspective ofFIG.46) the solid CG plane C on the body502of the iron-type golf club head500may be replaced by a lattice structure (seeFIG.47). In some embodiments, the solid CG plane C may be defined as a plane extending parallel to the ground plane G at a location defined by the CG of the body502when the body502is fabricated from solid material. Referring now toFIGS.47-50, the iron-type golf club head500may include an externally accessible/visible lattice structure504. The iron-type golf club head500defines a toe side508, a heel side510, a front side512, a top side514, a bottom side516, and a rear side518. The body502includes a toe region520, a medial region522, and a heel region524. The toe region520, the medial region522, and the heel region524are defined by lines or planes P1and P2that extend through the iron-type golf club head500in a sole-topline direction526(e.g., a vertical direction from the perspective ofFIG.47). The toe region520and the heel region524are arranged at laterally-opposing ends of the body502, and the medial region522is arranged laterally between the toe region520and the heel region524. The front side512of the body may define a front face527that extends along the front side512of the body502from the toe region520, through the medial region522, and into to at least a portion of the heel region524. In some embodiments, the front face527may define an entire front surface of the body502that extends laterally from the toe region520, through the medial region522, and into the heel region524to a junction between the front surface and a hosel544extending from the heel region524. In some embodiments, a portion of the front face527defined along the medial region522defines a striking face, which may include a plurality of laterally-extending grooves (not shown) that are spaced from one another in the sole-topline direction526(seeFIG.50). The iron-type golf club head500defines a topline528extending laterally in a heel-toe direction530(e.g., a horizontal direction from the perspective ofFIG.47) along the top side514, and a sole532extending laterally in the heel-toe direction530along the bottom side516. The toe region520includes a toe portion534of the body502that is defined by a portion of the body502between a distal end of the toe side508and the plane P1. In some embodiments, the plane P1may be defined along a lateral edge of the grooves (not shown) formed in the front side512that is adjacent to the toe side508. In some embodiments, the plane P1may intersect the top side514of the toe portion534at a toe-topline intersection point536along the topline528where the slope of a line tangent to the topline528is approximately zero (e.g., a point where a line tangent to the periphery of the top side514is approximately parallel to the ground at address). In these embodiments, the plane P1may extend through the toe portion534in the sole-topline direction526to a toe-sole intersection point537. The heel region524includes a heel portion538of the body502that is defined by a portion of the body502between a distal end of the heel side510and the plane P2. In some embodiments, the plane P2may be defined along a lateral edge of the grooves (not shown) formed in the front side512that is adjacent to the heel side510. 
In some embodiments, the plane P2may intersect the top side514at a heel-topline inflection point540(e.g., a point where the periphery of the top side514transitions from concave down to concave up). In these embodiments, the plane P2may extend through the heel portion538in the sole-topline direction526to a heel-sole intersection point542. The heel portion538includes the hosel544that extends from the heel portion538at an angle (e.g., a lie angle formed between a plane parallel to the ground on which the club head rests at address and a center axis defined through the hosel544) in a direction away from the toe portion534. The hosel544defines a hosel cavity (not shown) within which a shaft (not shown) may be inserted for coupling to the iron-type golf club head500. In some embodiments, a ferrule (not shown) may abut or be at least partially inserted into the hosel544. The topline528may extend along an outer periphery of the top side514from the heel-topline inflection point540, along the medial region522, to the toe-topline intersection point536. The sole532may extend along a periphery of the bottom side516from the toe-sole intersection point537, along the medial region522, to the heel-sole intersection point542. With specific reference toFIGS.47-49, the lattice structure504of the iron-type golf club head500may be designed and manufactured with similar properties and characteristics as the lattice structures disclosed herein. In the illustrated embodiment, the lattice structure504may include unit cells that define a generally triangular shape. In some embodiments, the lattice structure504may define unit cells of any shape or design according to the present disclosure. The lattice structure504extends over a portion of the body502that is arranged above the solid CG plane C (e.g., in a direction from the sole532toward the topline528) and rearward (e.g., in a direction from the front side512toward the rear side518, or to the left from the perspective ofFIG.49) of the front face527. For example, the rear surface546of the front face527may extend along a plane R at an angle relative to the sole-topline direction526, which is defined by the loft of the iron-type golf club head500. The plane R along which the rear surface546extends may intersect with the solid CG plane C and the plane R and the solid CG plane C may define the boundaries of the lattice structure504. By replacing solid material with the lattice structure504, the density defined by the body502in these regions may be locally reduced and the stiffness previously provided by the solid material may be maintained. In this way, for example, the CG of the iron-type golf club head500may be lowered (e.g., moved in a direction toward the sole532) compared to a golf club head made from solid material (i.e., relative to the solid CG plane C). For example, a CG volume ratio defined as a ratio between a volume VLthat the lattice structure504occupies to a volume VSthat solid material occupies may be a factor in defining a CG location in the sole-topline direction526. In addition, the mass removed by the lattice structure504may be redistributed to other locations on the body502to improve performance, as desired. For example, if a mass of a golf club head is maintained and the solid material above a solid CG plane and rearward of the front face is replaced by a lattice structure, the reduced density provided by the lattice structure may enable mass to be redistributed to other regions of the golf club. 
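As a simple illustration of the effect described above, the following Python sketch treats the head as two lumps, replaces the solid lump above the solid CG plane C with a lattice of a given relative density, and reports the resulting CG height and the mass freed for redistribution. The masses, heights, and relative density are entirely hypothetical; this is a first-moment estimate only, not a model of the iron-type golf club head500.

def cg_after_lattice(m_below_g, z_below_mm, m_above_solid_g, z_above_mm,
                     lattice_relative_density):
    # First-moment (1-D) estimate of CG height measured from the sole.
    # m_below_g / z_below_mm: mass and CG height of material at or below plane C.
    # m_above_solid_g / z_above_mm: solid mass above plane C that is latticed.
    # lattice_relative_density: lattice mass divided by the solid mass it replaces.
    m_above = m_above_solid_g * lattice_relative_density
    new_cg = (m_below_g * z_below_mm + m_above * z_above_mm) / (m_below_g + m_above)
    mass_saved = m_above_solid_g - m_above
    return new_cg, mass_saved

# Hypothetical numbers: 180 g centered 15 mm above the sole, and 70 g of solid
# centered 30 mm above the sole replaced by a 30%-dense lattice.
cg_mm, saved_g = cg_after_lattice(180.0, 15.0, 70.0, 30.0, 0.30)
print(f"CG drops to about {cg_mm:.1f} mm above the sole; {saved_g:.0f} g freed")

Under these assumptions the CG moves from about 19.2 mm to about 16.6 mm above the sole, and the roughly 49 g of saved mass could then be redistributed toward the sole, which leads to the iterative process described next.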
In some embodiments, it may be desirable to lower a CG defined by a factory-finished golf club head, when compared to a solid-material golf club head. In this embodiment, the mass saved by incorporating the lattice structure may be redistributed toward the sole of the golf club head. Redistributing this weight may further lower the CG of the golf club head, and this process may repeat until the CG location and the redistribution of the mass saved by the lattice structure converge. That is, solid material in the golf club head design may continue to be replaced with lattice structure until the volume replaced by the lattice structure and the redistributed mass converge on a CG location. Thus, the replacement of the solid material in a golf club head may be an iterative design process, and the final finished product may be produced with a CG that balances the volume replaced by lattice structure and the redistributed mass. Referring toFIGS.51and52, in some embodiments, the lattice structure504may be externally visible and form at least a portion of the topline528. In some instances, a golfer may not wish to view the lattice structure504at address (i.e., along the topline528). Referring toFIGS.53and54, in some embodiments, the topline528may be formed by solid material, for example, by a topline protrusion548that extends laterally along the topline528. As described herein, weight distribution (e.g., CG manipulation) in a golf club head may be manipulated via additive manufacturing processes. In some embodiments according to the present disclosure, a golf club head may be manufactured layer by layer to include a cavity within a generally solid portion of a golf club head. During manufacture, the cavity may be filled with a plug or weight that is not permanently bound or attached to the internal surfaces of the cavity. As such, the plug or weight may be held in place by the surrounding metal powder in the cavity but not attached to the surfaces that form the cavity. That is, the plug or weight may be arranged free-floatingly within the cavity. In this way, for example, once the metal powder is removed from the cavity, a position of the plug or weight within the cavity may be manipulated to distribute the weight of the plug at a desired location within the cavity. For example, an orientation of the golf club head may be manipulated and gravity may be used to alter a position of the plug or weight within the cavity. The position of the plug or weight within the cavity may be secured, for example, by filling the cavity with a filler material (e.g., a plastic resin, a foam material, etc.). In general, the plug or weight arranged within the cavity may define any shape or structure that may be manipulated to alter a weight distribution within the golf club head. In some embodiments, the plug or weight may be fabricated from the same material as the surrounding solid portion of the golf club head. In some embodiments, the plug or weight may be fabricated from a material that is different than a material that is used to fabricate the surrounding solid portion of the golf club head. In some embodiments, the plug or weight may be fabricated from a material with a higher density than a material that is used to fabricate the surrounding solid portion of the golf club head. In some embodiments, the plug or weight may be fabricated from a material with a lower density than a material that is used to fabricate the surrounding solid portion of the golf club head.
FIG.55illustrates one embodiment of a portion of a golf club head600that includes a cavity602within which a plug or weight604is formed during an additive manufacturing process. In some embodiments, the portion of the golf club head600is a portion of a body that is desired to be formed of solid material. During additive manufacture of the portion of the golf club head600, the layer by layer forming of the portion of the golf club head600enables the formation of the cavity602and the plug604within the cavity602. The plug604may be manufactured such that the plug604is spaced from the internal surfaces that form the cavity602. In the illustrated embodiment, the plug604is surrounded by residual metal powder606. The metal powder606may hold the plug604in place within the cavity602, while maintaining the detachment between the plug604and the internal surfaces of the cavity602. One or more ports608may be in communication with the cavity602to enable the removal of the metal powder606after the portion of the golf club head600is manufactured. In the illustrated embodiment, the portion of the golf club head600includes two ports608arranged at opposing sides of the cavity602. In some embodiments, the portion of the golf club head600may include more or fewer than two ports608arranged in any orientation that connects to the cavity602. After the portion of the golf club head600is manufactured, pressurized fluid (e.g., gas), a vacuum, a brush, a tool, or gravity may be applied to the one or more ports608to remove the excess metal powder606surrounding the plug604. As illustrated inFIG.56, once the metal powder606is removed, the plug604may be free to move within the cavity602. In this way, for example, a position of the plug604may be manipulated to alter a weight distribution within the portion of the golf club head600. In the illustrated embodiment, the cavity602and the weight or plug604define a generally cylindrical shape. In some embodiments, the cavity602and the weight or plug604may define any shape (e.g., rectangular, polygonal, or any other 3-D shape) as required by the shape and structure defined by the portion of the golf club head600within which the cavity602is arranged. For example, in some embodiments, the cavity602and the plug or weight604may define similar shapes. In some embodiments, the weight or plug604may define a different shape than the cavity602as long as the weight or plug604is capable of displacing within the cavity602, once the excess material is removed from the cavity602. In the illustrated embodiment, the design and shape of the cavity602and the plug604enable the weight distribution to be moved in a lateral direction (e.g., left and right from the perspective ofFIG.56). In some embodiments, the design and shape of the cavity602and the plug604may be altered to enable the weight distribution within the portion of the golf club head600to be moved in any direction, as desired. For example, the cavity602and the plug604may be designed to allow for the weight distribution to be moved in a heel-toe direction, a sole-topline direction, an oblique direction, and/or between a front face and a rear face within the portion of the golf club head600. Moving the weight distribution, via movement of the plug604to a desired location, may alter the performance characteristics of a golf club head, for example, by moving the CG and/or placing more weight in a heel or a toe of the golf club head.
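A rough sense of how much the CG moves when a plug of this kind is repositioned can be obtained from a first-moment balance. The Python sketch below uses a hypothetical plug mass, travel distance, and head mass; it is illustrative only and is not tied to the specific geometry of the plug604or the cavity602.

def cg_shift_mm(plug_mass_g, plug_travel_mm, head_mass_g):
    # When only the plug moves, the whole-head CG shifts along the plug's
    # travel direction by (plug mass x travel distance) / (total head mass).
    return plug_mass_g * plug_travel_mm / head_mass_g

# e.g., sliding a 10 g plug 20 mm toward the toe of a 300 g head
print(f"CG shifts about {cg_shift_mm(10.0, 20.0, 300.0):.2f} mm toward the toe")

Under these assumptions the CG moves roughly 0.7 mm, which suggests why a denser plug material or a longer cavity may be chosen when a larger CG adjustment is desired.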
Once the plug604is positioned in a location within the cavity602according to a desired weight distribution, the cavity602may be filled with a low-density filler material to secure the position of the plug604within the cavity602. For example, the low-density filler material may be a plastic material, a resin material, and/or a foam material. In some embodiments, binder material may be selectively added around solid portions of a golf club head to form a border or shell surrounded by metal powder. Then, during the sintering post-processing stage, the metal powder enclosed within the border may solidify, forming the appropriate solid portion of a golf club head. In this way, for example, use of a binder during a binder jetting process may be reduced while printing golf club heads, thereby improving manufacturing costs and efficiency. As described herein, at least a portion of a golf club head that is manufactured using an additive manufacturing process may include a solid portion (e.g., a volume region that is intended to be solid metal in the factory finish part). In some embodiments, an additive manufacturing process according to the present disclosure may improve efficiency and quality of the manufactured part by forming a boundary that includes at least one layer around a portion of a golf club head and post-processing the golf club head to form the portion within the boundary as a solid portion. For example, as described herein, the material deposit208formed on the post-printed component204and the solid portion349on the iron-type golf club head300may be formed from solid material (e.g., solid metal). In some embodiments, these solid material portions on a golf club head may be formed using an additive manufacturing process by printing a boundary that includes at least one layer of printed material and surrounds a volume of unprinted material (e.g., metal powder). For example, the material deposit208or the solid portion349may be formed by printing a boundary that encloses a volume and is formed by at least one layer during an additive manufacturing process. The volume enclosed by the boundary may be filled with powdered metal, which is thereby constrained (i.e., cannot move) within the volume. The manufactured golf club head may then be sintered, which transitions the powdered metal enclosed within the volume to solid material (e.g., solid metal). By only requiring at least one layer of material to form a solid volume on a golf club head, the amount of time, binder material (e.g., for a binder jetting process), and/or power (e.g., for an SLM or a DMLS process) may be reduced, which may provide reduced costs and increased efficiency during the additive manufacturing process. FIGS.57-59illustrate embodiments of a cross-section of a solid volume of a golf club head that is manufactured during an additive manufacturing process. In the illustrated embodiments, a solid volume700includes a boundary702. In some embodiments, the boundary702may be formed by at least one layer that is created during an additive manufacturing process. In some embodiments, the boundary702may be formed by at least two, at least three, at least four, or five or more layers during an additive manufacturing process. The boundary702may enclose the solid volume700and the powdered metal arranged within the solid volume700.
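To give a feel for the potential savings of the boundary approach described above, the following Python sketch compares the volume of a thin printed boundary around a rectangular solid region with the volume of printing that region fully. The dimensions and wall thickness are purely hypothetical; the actual savings depend on part geometry and the number of boundary layers used.

def printed_fraction(length_mm, width_mm, height_mm, wall_mm):
    # Fraction of a rectangular solid region that must actually be printed
    # when only an enclosing boundary of thickness wall_mm is formed and the
    # powder inside is later solidified by sintering.
    outer = length_mm * width_mm * height_mm
    inner = (max(length_mm - 2 * wall_mm, 0.0)
             * max(width_mm - 2 * wall_mm, 0.0)
             * max(height_mm - 2 * wall_mm, 0.0))
    return (outer - inner) / outer

# Hypothetical 20 x 10 x 8 mm solid region with a 0.5 mm printed boundary:
# only about a quarter of the region's volume is printed directly.
print(f"printed material: about {printed_fraction(20.0, 10.0, 8.0, 0.5):.0%} of the region")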
The powdered metal enclosed by the boundary702may be maintained or supported by the boundary702(i.e., prevented from displacing after the additive manufacturing process), once the boundary702is fully formed. With the boundary702fully formed, the powdered metal enclosed therein may be formed into solid metal via a sintering process. In some embodiments, forming solid metal portions in a golf club head via sintering powdered metal enclosed by a boundary may produce higher densities when compared to solid metal portions that are formed completely layer by layer. In this way, for example, the cost and efficiency of the additive manufacturing process may be improved for creating a golf club head and the quality of the manufactured part may be improved. As illustrated inFIGS.57-59, the cross-sectional shapes of the solid volumes700may take various shapes and sizes. In the illustrated embodiments, the boundary702formed around the solid volume700may be a rectangular, a round, or an oval shape. In some embodiments, the boundary702and/or the solid volume700may take any shape or size that is required by the desired factory finish golf club head. For example, any solid portion of a golf club head may be enclosed with a boundary that takes any shape, and the golf club head may be sintered to transition the volume enclosed by the boundary into solid material. As described herein, additive manufacturing provides several design, manufacturing, and performance benefits for golf club heads. Additive manufacturing also provides several advantages to the development or prototyping of golf club heads. For example, an entire set of iron-type golf club heads may be printed within a single build platform (e.g., a powdered metal bed used in binder jetting, DMLS, SLM, etc.). As such, an entire set of iron-type golf club heads may be printed and tested in a single build job, which differs, for example, from a forging process where the golf club heads are formed one at a time. Alternatively or additionally, multiple iterations of a golf club head design may be printed and tested during a single build job. As described herein, in some embodiments according to the present disclosure, at least a portion of a golf club head may be manufactured via an additive manufacturing process. In some embodiments, a golf club head may be at least partially manufactured, or at least partially formed via a mold that is manufactured, via an additive manufacturing process. For example, a face insert that defines a striking face or a front face of a golf club head may be designed to include a 3-D structure that improves performance. In some embodiments, a rear side of a front face on a golf club head may include a lattice structure or a ribbed structure. For example, as illustrated inFIG.60, a face insert800defines a front face or striking face of a wood-type golf club head and may include a lattice structure802arranged on a rear side of the front face or striking face. In the illustrated embodiment, the lattice structure802includes generally triangularly-shaped unit cells that vary in density, surface area, or volume along the rear side of the face insert800. In some embodiments, the lattice structure802may define any size or shape according to the lattice structures described herein. 
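As a quick arithmetic check of the single-build-job point above, the Python sketch below estimates how many iron-type heads fit on one build plate using a simple grid layout. The plate size, head footprint, and spacing are hypothetical; real nesting software and real part envelopes will change the answer.

def heads_per_plate(plate_x_mm, plate_y_mm, head_x_mm, head_y_mm, gap_mm):
    # Simple grid packing estimate: parts laid out on a rectangular grid with
    # a uniform gap between neighboring parts; edge margins are ignored.
    pitch_x = head_x_mm + gap_mm
    pitch_y = head_y_mm + gap_mm
    return int(plate_x_mm // pitch_x) * int(plate_y_mm // pitch_y)

# Hypothetical 250 x 250 mm plate, 80 x 55 mm head footprint, 5 mm spacing
print(heads_per_plate(250.0, 250.0, 80.0, 55.0, 5.0))  # -> 8 heads, roughly a full iron set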
In any case, the lattice structure802may vary in one or more of unit cell type, unit cell geometry, unit cell size, segment length, segment thickness, segment volume, and unit cell density at one or more locations along the rear side of the face insert800. In some embodiments, the variability in the lattice structure802along the rear side of the face insert800may provide improved performance, when compared to a lattice structure with constant properties. In some embodiments, the incorporation of a lattice structure into a striking face on a wood-type golf club head may enable the striking face to define a reduced thickness, for example, when compared to a striking face fabricated solely from a solid material, due to the stiffness provided by the lattice structure. That is, the incorporation of a lattice structure, or a ribbed structure (seeFIG.61), on a striking face of a wood-type golf club head may provide added stiffness, which enables the solid portion of the striking face (i.e., the portion of the striking face that does not include an added 3-D structure) to define a reduced thickness when compared to a striking face fabricated solely from solid material. Turning toFIG.61, in some embodiments, a face insert810of a wood-type golf club head may include a ribbed structure812arranged on a rear side of a striking face. In the illustrated embodiment, the ribbed structure812may include a solid portion814that defines a generally solid protrusion that protrudes from the rear side of the face insert810and a plurality of ribbed segments816that extend along the rear side of the striking face (e.g., generally in a sole-topline direction or a vertical direction from the perspective ofFIG.61). The plurality of ribbed segments816may be spaced laterally (e.g., in a left-right direction from the perspective ofFIG.61) along the rear side of the face insert810. In some embodiments, the 3-D structures incorporated onto the striking faces of wood-type golf club heads may be difficult to manufacture using conventional manufacturing processes. Additive manufacturing processes may be leveraged to enable efficient and accurate manufacturing of these striking faces of wood-type golf club heads. For example,FIG.62illustrates a face insert820that is based on the face insert810and manufactured via an additive manufacturing process and may be used in a casting process. In some embodiments, the face insert820may be manufactured out of an investment casting material (e.g., wax) via an additive manufacturing process. In some embodiments, conventional, non-additive manufacturing processes may not be able to create the 3-D structure arranged on the rear side of the striking faces described herein, for example, due to the presence of a lattice structure, an undercut, or a gap. Additive manufacturing may be leveraged to efficiently and accurately manufacture the face insert820. Once the face insert820is manufactured via an additive manufacturing process, the face insert820may be used to create a casting mold, or another mold (e.g., metal injection molding mold), of a striking face of a wood-type golf club head by shelling the face insert820with a slurry to form a shell. Once the shell has formed, the investment casting material (e.g., wax) may be burned out and metal may be poured into the cavity defined by the shell to form a casting of the face insert.
As illustrated inFIG.63, the casting mold or other type of mold may be used to manufacture the striking face of a wood-type golf club head with an accurate representation of the desired 3-D structure arranged on the rear side of the striking face. The manufactured striking face illustrated inFIG.63may then be post-processed to conform to factory finish standards and may be attached to a club head body. In some embodiments, the additive manufacturing of a mold, or a structure that is used to make a mold in an investment casting process, may be used to manufacture iron-type golf club heads. For example,FIG.64illustrates a club head mold830for an iron-type golf club head that may be manufactured via an additive manufacturing process. In some embodiments, the club head mold830may be manufactured out of an investment casting material (e.g., wax). The manufacture of the club head mold830via an additive manufacturing process may enable the creation of unique undercuts and intricate geometries, for example, arranged on a rear surface or rear cavity of an iron-type golf club head, among other locations. For example,FIGS.65and66illustrate a 3-D structure that may be incorporated into the club head mold830ofFIG.64. In general, the use of a wax pattern mold that is printed via an additive manufacturing process may increase efficiency, decrease costs, and enable the creation of more complex club head geometries, when compared to conventional manufacturing processes. For example, creating a wax pattern mold via an additive manufacturing process does not require tooling when creating a design of the mold. A 3-D model of the mold may be created in 3-D printing software, whereas a conventional investment casting mold requires the creation of a wax tool. Once the part is designed in 3-D printing software, the wax pattern mold may be printed via an additive manufacturing process with casting gates, while conventional investment castings require wax to be injected into the wax tool. As described herein, in some embodiments, a golf club head may be required to be sintered after manufacture via an additive manufacturing process. In these embodiments, a support structure or fixture may be required to aid in maintaining orientation and shape of the green part during sintering.FIGS.67and68illustrate one embodiment of a sintering support900that may be used to support a golf club head during sintering. In the illustrated embodiment, the sintering support900may be used to support an iron-type golf club head during sintering. The sintering support900may include a face surface902, a hosel surface904, and a support wall906. The hosel surface904may extend from one side of the face surface902at an angle that is defined by a lie angle of the golf club head. The support wall906may extend generally perpendicularly from a side of the face surface902that is opposite to the hosel surface904. In the illustrated embodiment, the face surface902may be arranged generally parallel to a sintering plane S that the sintering support900rests on during sintering. In this way, for example, when a golf club head is arranged on the sintering support900, the face surface902orients a front face or striking face of a golf club head generally parallel to the sintering plane S and provides support to the front face or striking face. The arrangement and support of the front face or striking face provided by the sintering support900aids in reducing or preventing warping of the golf club head geometry during sintering.
In addition, the angle between the face surface902and the hosel surface904being equal to a lie angle defined by the golf club head further aids in reducing or preventing warping of the golf club head geometry during sintering. Referring toFIGS.69and70, in some embodiments, the sintering support900may angle the golf club head supported thereon, such that a hosel of the golf club head is arranged generally perpendicular to the sintering plane S or generally parallel to a direction of gravity. In this way, for example, the sintering support900may further aid in preventing warping of the club head geometry during the sintering process. In the illustrated embodiment, a bottom edge908of the support wall906may be arranged at an angle relative to the face surface902. When the bottom edge908of the support wall906is placed on the sintering plane S, the angle between the bottom edge908of the support wall906and the face surface902may arrange a hosel910of a golf club head912in a direction that is generally perpendicular to the sintering plane S or generally parallel to a direction of gravity. In this orientation, the face surface902may be angled relative to the sintering plane S. The orientation of the hosel910in a direction that is generally parallel to a direction of gravity may prevent movement of the hosel910during sintering, which maintains the lie and loft defined by the golf club head912pre-sintering. In some embodiments, the sintering support900may be fabricated via an additive manufacturing process. For example, the face surface902, the hosel surface904, and the support wall906may be formed layer by layer by an additive manufacturing process. Any of the embodiments described herein may be modified to include any of the structures or methodologies disclosed in connection with different embodiments. Further, the present disclosure is not limited to club heads of the type specifically shown. Still further, aspects of the club heads of any of the embodiments disclosed herein may be modified to work with a variety of golf clubs. As noted previously, it will be appreciated by those skilled in the art that while the disclosure has been described above in connection with particular embodiments and examples, the disclosure is not necessarily so limited, and that numerous other embodiments, examples, uses, modifications and departures from the embodiments, examples and uses are intended to be encompassed by the claims attached hereto. Various features and advantages of the invention are set forth in the following claims. INDUSTRIAL APPLICABILITY Numerous modifications to the present disclosure will be apparent to those skilled in the art in view of the foregoing description. Accordingly, this description is to be construed as illustrative only and is presented for the purpose of enabling those skilled in the art to make and use the invention. The exclusive rights to all modifications which come within the scope of the appended claims are reserved.
131,097
11857849
DETAILED DESCRIPTION OF EMBODIMENTS Shown inFIGS.1A and1Bis a golf club head100, which may be bounded by a toe102, a heel104opposite the toe102, a top line106, and a sole108opposite the top line106. The club head100may include, adjacent to the toe102, a toe region110, and adjacent to the heel104, it may further possess a heel region112. A hosel120for securing the club head100to an associated shaft (not shown) may extend from the heel region112, and the hosel120may in turn define a virtual central hosel axis122. The club head100may further include a striking face130at a front portion thereof and a rear face138opposite to the striking face130. The striking face130is the substantially planar exterior surface part of the front portion that generally conforms to a virtual striking face plane132and that is arranged to contact a golf ball at a factory-designated loft angle134taken between the striking face plane132and the central hosel axis122. The striking face130may include a face center136that is equidistant between the uppermost point137of the striking face130and the lowermost point139of the striking face130as well as equidistant between the heelward-most point of the striking face130and the toeward-most point of the striking face130. Additionally, the striking face130may be formed with surface features that increase traction between the striking face130and a struck golf ball to ensure both good contact with the ball (for example, in wet conditions) and impart a degree of spin to the ball, e.g., for stability in flight or to better control a struck golf ball once it has returned to the ground by way of backspin. Included in these surface features may be a grid of substantially parallel horizontal grooves or scorelines150as well as other surface features that form a texture pattern and will be shown and described in detail below. The golf club head100is shown inFIGS.1A and1Bas being in the “reference position.” As used herein, “reference position” denotes a position of a golf club head, e.g., the club head100, in which the sole108of the club head100contacts a virtual ground plane140such that the hosel axis122of the hosel120lies in a virtual vertical hosel plane124and the scorelines150are oriented horizontally relative to the ground plane140. Unless otherwise specified, all club head dimensions described herein are taken with the club head100in the reference position. As the golfer nears the pin, precision in golf shots provided by, e.g., improved contact with the ball or increased backspin, generally becomes more critical than other considerations such as distance. The golf club head100that includes the above-mentioned surface features that increase traction is therefore preferably of an iron or a wedge type, although it could be a putter-type club head. In particular, the loft angle134may be at least 15 degrees and preferably between 23 and 64 degrees. Even more preferably, the loft angle134may be between 40 and 62 degrees, and yet even more preferably, this loft angle134may be between 46 and 62 degrees. The golf club head100may preferably be formed of a metal, e.g., titanium, steel, stainless steel, or alloys thereof. More preferably, the main body of the club head100may be formed of 431 stainless steel or 8620 stainless steel. 
The main body of the club head100may be integrally or unitarily formed, or the main body may be formed of plural components that are welded, co-molded, brazed, or adhesively secured together or otherwise permanently associated with each other, as is understood by one of ordinary skill in the art. For example, the golf club head100may be formed of a main body of a first material and of a striking wall (including the striking face130) of a second material different from the first and welded to the main body. The mass of the club head100may preferably be between 200 g and 400 g. Even more preferably, the mass of the golf club head100may be between 250 g and 350 g, and yet even more preferably, it may be between 275 g and 325 g. FIGS.2A-2Cshow enlarged views of a portion of the golf club head100, and particularly of the striking face130. As mentioned previously, the striking face130may include as surface features a plurality of substantially horizontal scorelines150. These scorelines150are typically formed by mechanical milling, e.g., spin-milling, but they may alternatively be formed by stamping, casting, electroforming, or any other suitable known method. First and second virtual planes152and154(shown inFIG.2B), which are perpendicular to the striking face plane132and which are respectively defined by the toeward-most extent and the heelward-most extent of the scorelines150, delimit a scoreline region114of the striking face130. The scoreline region114may also be referred to herein as a central region of the striking face130. The first virtual plane152also delimits the heelward-most boundary of the toe region110, and the second virtual plane154delimits the toeward-most boundary of the heel region112. The scorelines150may be designed to be in compliance with USGA regulations. These scorelines150may therefore preferably have an average width between 0.6 mm and 0.9 mm, more preferably between 0.65 mm and 0.8 mm, and even more preferably between 0.68 mm and 0.75 mm. For all purposes herein, and as would be understood by those of ordinary skill in the art, scoreline width is determined using the “30 degree method of measurement,” as described in Appendix II of the current USGA Rules of Golf (hereinafter “Rules of Golf”). The scorelines150may have an average depth, measured according to the Rules of Golf, of no less than 0.10 mm, preferably between 0.25 mm and 0.60 mm, more preferably between 0.30 mm and 0.55 mm, and most preferably between 0.36 mm and 0.44 mm. To further comply with USGA regulations, the draft angle of the scorelines150as that term would be construed by one of ordinary skill may be between 0 and 25 degrees, more preferably between 10 and 20 degrees, and most preferably between 13 and 19 degrees. And the groove edge effective radius of the scorelines150, as outlined in the Rules of Golf, may be between 0.150 mm and 0.30 mm, more preferably between 0.150 mm and 0.25 mm, and most preferably between 0.150 mm and 0.23 mm. Ultimately, the scoreline150dimensions may be calculated such that A/(W+S)≤0.0030 in², where A is the cross-sectional area of the scorelines150, W is their width, and S is the distance between edges of adjacent scorelines, as outlined in the Rules of Golf. 
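For illustration only, the following short sketch plugs the most-preferred scoreline dimensions above into the A/(W+S) condition. The 2.8 mm edge-to-edge spacing S and the simple trapezoidal cross-section are assumptions made for this example and are not dimensions taken from the disclosure; the limit is treated as 0.0030 square inches of groove area per inch of groove pitch.

```python
import math

MM_PER_IN = 25.4

# Illustrative scoreline dimensions drawn from the preferred ranges above.
# The spacing S and the trapezoidal shape are assumptions for this sketch.
W_mm = 0.71          # scoreline width at the face
d_mm = 0.40          # scoreline depth
draft_deg = 16.0     # draft angle of the scoreline side walls
S_mm = 2.80          # assumed spacing between edges of adjacent scorelines

# Trapezoidal area: each wall leans inward by d * tan(draft), so the bottom
# width is W - 2 * d * tan(draft) and the area is d * (W - d * tan(draft)).
A_mm2 = d_mm * (W_mm - d_mm * math.tan(math.radians(draft_deg)))

ratio_in = (A_mm2 / MM_PER_IN ** 2) / ((W_mm + S_mm) / MM_PER_IN)  # in^2 per in of pitch

print(f"A = {A_mm2:.3f} mm^2, A/(W+S) = {ratio_in:.5f} in^2/in "
      f"({'within' if ratio_in <= 0.0030 else 'exceeds'} the 0.0030 limit)")
```

With these assumed values the ratio comes out near 0.0027, comfortably inside the stated limit; widening the scorelines or tightening the spacing pushes the ratio toward that limit.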
With further reference toFIGS.2A-2C, the striking face130may have formed therein additional surface features in the form of texture patterns constituted by very narrow, relatively shallow grooves, which may be called “micro-grooves.” A first plurality of these micro-grooves160, which may be formed by precision mechanical milling, e.g., CNC milling, may be located in the scoreline region114and are advantageously formed as a pattern of substantially parallel, arcuate lines intersecting the scorelines150. The texture pattern constituted by the micro-grooves160preferably covers most, i.e., the majority, if not all, of the scoreline region114of the striking face130. A second plurality of these micro-grooves170, which are also advantageously formed as a pattern of substantially parallel, arcuate lines, may be located in the toe region110. The texture pattern constituted by the micro-grooves170preferably covers most, if not all, of the toe region110of the striking face130. FIGS.3A and3Bshow a cross-section taken through the plane3A-3A shown inFIG.2A, which intersects the scoreline region114. The plane3A-3A intersects not only the scorelines150but also the first plurality of micro-grooves160. The micro-grooves160may preferably have an average depth D1 (shown inFIG.3B) taken from the striking face130of no greater than 1100 μin, more preferably between 400 μin and 1100 μin, and most preferably between 600 μin and 1100 μin. The pitch P1 of these micro-grooves160, i.e., the distance between centers of adjacent micro-grooves160taken in their direction of propagation, may preferably be between 0.01 in and 0.04 in, more preferably between 0.0175 in and 0.0325 in, and most preferably between 0.025 in and 0.03 in. As will be understood by those of ordinary skill in the art, the average depth D1 and pitch P1 of the micro-grooves160will have a significant impact on the roughness characteristics of the scoreline region114. In particular, to ensure compliance with USGA regulations, the combination of the scorelines150and the texture pattern constituted by the micro-grooves160may imbue the scoreline region114with an average surface roughness Ra1 of preferably less than or equal to 180 μin. More preferably, the average surface roughness Ra1 may be between 40 μin and 180 μin, even more preferably between 100 μin and 180 μin, and it may most preferably be between 120 μin and 180 μin. And the average maximum profile height Rz1 of the scoreline region114may preferably be less than or equal to 1000 μin. More preferably, the average maximum profile height Rz1 may be between 300 μin and 1000 μin, even more preferably between 500 μin and 800 μin, and it may most preferably be between 600 μin and 700 μin. FIGS.4A and4Bin turn show a cross-section taken through the plane4A-4A shown inFIG.2A, which intersects the toe region110. The plane4A-4A intersects the second plurality of micro-grooves170. The micro-grooves170may preferably have an average depth D2 (shown inFIG.4B) taken from the striking face130of no less than 800 μin, more preferably between 1000 μin and 2000 μin, even more preferably between 1000 μin and 1800 μin, and most preferably between 1300 μin and 1600 μin. The pitch P2 of these micro-grooves170, i.e., the distance between centers of adjacent micro-grooves170taken in their direction of propagation, may preferably be between 0.03 in and 0.06 in, more preferably between 0.035 in and 0.055 in, and most preferably between 0.04 in and 0.05 in. 
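As an aside, purely for illustration and not part of the original disclosure: the Ra and Rz figures quoted in this passage can be reproduced from a sampled trace using the usual surface-texture definitions, Ra as the mean absolute deviation from the mean line and Rz as the average peak-to-valley height over several sampling lengths. The synthetic trace below, a groove of a given depth repeated at a fixed pitch with flat land between grooves, merely stands in for a measured profile; the groove-width fraction is invented and is held fixed for both depths for simplicity.

```python
def groove_trace(depth_uin, groove_frac=0.12, samples_per_pitch=100, n_pitches=20):
    """Synthetic stand-in for a profilometer trace: flat land with a groove of
    the given depth occupying groove_frac of every pitch (an invented shape)."""
    one_pitch = [-depth_uin if i < groove_frac * samples_per_pitch else 0.0
                 for i in range(samples_per_pitch)]
    return one_pitch * n_pitches

def roughness(profile, n_segments=5):
    """Ra: mean absolute deviation from the mean line.
    Rz: average peak-to-valley height over n_segments sampling lengths."""
    mean = sum(profile) / len(profile)
    ra = sum(abs(p - mean) for p in profile) / len(profile)
    seg = len(profile) // n_segments
    rz = sum(max(profile[i:i + seg]) - min(profile[i:i + seg])
             for i in range(0, seg * n_segments, seg)) / n_segments
    return ra, rz

for label, depth in (("micro-grooves 160 (D1 ~ 800 uin)", 800.0),
                     ("micro-grooves 170 (D2 ~ 1500 uin)", 1500.0)):
    ra, rz = roughness(groove_trace(depth))
    print(f"{label}: Ra ~ {ra:.0f} uin, Rz ~ {rz:.0f} uin")
```

For this invented trace the shallower depth happens to land inside the Ra1 and Rz1 windows given above, while the deeper one lands in the higher ranges discussed next for the toe region; an actual milled texture would of course be measured rather than modeled this way.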
The depth D2 and the pitch P2 of the micro-grooves170may thus exceed the depth D1 and the pitch P1 of the micro-grooves160. Similar to the micro-grooves160, the average depth D2 and pitch P2 of the micro-grooves170will have a significant impact on the roughness characteristics of the toe region110. In particular, the texture pattern constituted by the micro-grooves170may preferably imbue most, i.e., the majority, if not all, of the toe region110with an average surface roughness Ra2 of greater than or equal to 270 μin. More preferably, the average surface roughness Ra2 may be greater than or equal to 300 μin, and even more preferably, it may be greater than or equal to 350 μin. In comparison to Ra1 of the scoreline region114, Ra2 of the toe region110may preferably be greater than or equal to 1.5×Ra1, more preferably greater than or equal to 2×Ra1, and most preferably, Ra2 may be greater than or equal to 3×Ra1. Although at least a majority of the toe region110may have the average surface roughness Ra2, more preferably 80% of the toe region110may have the average surface roughness Ra2, and even more preferably 95% of the toe region110may have the average surface roughness Ra2. The average maximum profile height Rz2 of the toe region110may preferably be greater than or equal to 1000 μin. More preferably, the average maximum profile height Rz2 may be between 1000 μin and 2000 μin, even more preferably between 1200 μin and 1800 μin, and it may most preferably be between 1400 μin and 1600 μin. FIG.2Chighlights certain portions of the striking face130by way of a virtual circular central region115, which may be within the scoreline region114, and a virtual circular periphery region111, which may be within the toe region110. Central region115may be centered at the face center136, and it may have a radius of no less than 10 mm. The central region115may also possess the average roughness Ra1, and its average surface roughness may thus be no greater than 180 μin. Periphery region111, like the central region115, may have a radius of no less than 10 mm. This periphery region111may possess the average roughness Ra2, and its average surface roughness may thus be no less than 270 μin. Referring toFIG.5, exemplary processes for forming the striking face130of the golf club head100by milling are shown.FIGS.6A through6Fillustrate the club head100after performance of certain steps of the processes shown inFIG.5. In each ofFIGS.6A through6F, the club head100is oriented such that the striking face plane132coincides with the plane of the paper. The relative order of the various steps of the processes shown inFIG.5is for purposes of illustration only. One of ordinary skill in the art would appreciate that, unless indicated otherwise, various steps of the processes may be omitted, other steps may be added, or the relative order of such steps may be altered. In a first step200, the body of the golf club head100may be formed. It may be formed by casting. Alternatively, the main body of the club head100may be formed by forging, machining, and/or any other suitable method as known in the art. Once formed, in step202, the club head body may optionally undergo a heat treatment process, whereby the club head body is case-hardened. Alternatively, or in addition, the body of the golf club head100may be cold-worked or otherwise forged to more advantageously tailor the body's material properties. 
Next, in step204, the body of the golf club head100may optionally be polished by way of sandblasting (or another media blasting process). This step204helps to remove any burrs or flashing that may have resulted from the club head formation step200. In addition, the sandblasting process provides a foundation for an aesthetically pleasing final product. Once polished, in step206, the body of the golf club head100may undergo a preliminary milling operation particularly directed at the striking face130. The preliminary milling operation may preferably be carried out using a machine bit, feed rate, and spin rate such that a resulting roughness value Ra is relatively low, e.g., an Ra value less than 40 μin. This process may be carried out so as to preferably not result in any visually discernible ridges by, e.g., operating this process at a feed rate that is sufficiently high and/or a spin rate that is sufficiently low to generate this effect. In this manner, subsequent texture-enhancing processes may effect a final striking face130having metrological properties closer to target and more consistent from sample to sample. The body of the golf club head100may be referred to at this time as an intermediate golf club head body. After the preliminary milling operation of step206, the striking face130of the intermediate golf club head body may be milled under a different set of machining parameters in a first groove milling pass to provide a milled surface having different visual and tactile characteristics. In particular, the first groove milling pass may create the extreme roughness Ra2 across at least the toe region110.FIG.6A, for example, shows the striking face130after one possible first groove milling pass208A. The micro-grooves formed by this pass208A cover the entire toe region110and even extend into the scoreline region114, thereby imbuing these milled areas with the roughness Ra2. An alternative first groove milling pass is shown inFIG.6D. The micro-grooves formed by this pass208B preferably cover the majority of the striking face130, and they thus create the extreme roughness Ra2 across more of the striking face130than the first groove milling pass208A. AlthoughFIG.6Dshows the micro-grooves formed by the milling pass208B as covering the toe region110and the scoreline region114, the extreme roughness may also be carried into the heel region112. A second groove milling pass with yet a different set of machining parameters may then be performed on the striking face130. Whereas the first groove milling pass created the extreme roughness Ra2, this second groove milling pass endeavors to lower the average roughness in at least the scoreline region114to comply with USGA regulations, thereby preferably leaving only the toe region110with the extreme roughness Ra2. The second groove milling pass may thus create the scoreline region114that is distinct from the toe region110. FIG.6Bshows the impact of a second groove milling pass210A that may be performed on the golf club head100shown inFIG.6A. This pass210A may be limited to the scoreline region114, and the heel region112in some implementations. As a result, the striking face130of this club head100is left with a toe region110with an extreme roughness Ra2 and a scoreline region114, a majority of which possesses average roughness closer to or at Ra1. Also formed within the scoreline region114, however, is an overlap region116. 
This overlap region116was subjected to both the first and second groove milling passes208A,210A, and as a result, has a visual appearance different from that of the non-overlap regions of the striking face130but preferably still possesses Ra values closer to Ra1 at least within the scoreline region114. This visual appearance difference is created by the grooves from the second milling pass210A being superimposed onto the grooves formed by the first milling pass208A. FIG.6Ein turn shows the impact of a second groove milling pass210B that may be performed on the golf club head100shown inFIG.6D. This pass210B, like the pass210A, may cover the entire scoreline region114(and possibly the heel region112), thereby reducing the average roughness of the scoreline region114from the extreme roughness Ra2 imparted by the first groove milling pass208B. Unlike the golf club head shown inFIG.6B, the golf club head100shown inFIG.6E, which is formed by the passes208B and210B, lacks the overlap region116due to the second groove milling pass210B removing the material of the grooves formed by the first groove milling pass described in step208B. As such, in some implementations, only the micro-grooves formed by the second pass210B may remain in the scoreline region114. In some implementations, the second groove milling pass210B may remove the material of the grooves formed by the first groove milling pass described in step208B as well as additional material of the club head100to form a visually discernible step between the higher grooves of the first groove milling pass and the lower grooves of the second groove milling pass. Next, the scorelines150may be formed on the striking face130, thereby creating a club head body configuration as shown inFIGS.6C and6F. The score lines150may be integrally cast into the main body as a whole. Alternatively, the scorelines150may be stamped. However, the scorelines150may preferably be formed by milling, optionally spin-milling. This method is advantageous in its precision. Although it may occur prior to these operations, the formation of the scorelines150preferably occurs subsequent to the first and second groove milling passes. In this manner, greater consistency in roughness may be achieved as the milling bit may be applied with even pressure throughout. Further, the scorelines150may be formed with greater precision and more sharply-defined edges. Optionally, after the scorelines150are formed, the golf club head100, or just the striking face130, may be plated or coated with a metallic layer, or treated chemically or thermally in a finishing step214. Such treatments are well-known, and they may enhance the aesthetic qualities of the club head and/or one or more utilitarian aspects of the club head, e.g., durability or rust-resistance. For example, the golf club head100may be nickel-plated and optionally subsequently chrome-plated. Such plating enhances the rust-resistance characteristics of the club head100. Further, such plating improves the aesthetic quality of the club head100, and it may serve as a substrate for any future laser etching process. Plating selection is also believed to have an effect on the visual and/or textural characteristics of subsequently-formed laser-etched regions superimposed thereon. Optionally, subsequent to the nickel- and chrome-plating, the striking face130may undergo a physical vapor deposition (“PVD” hereinafter) process. Preferably, the PVD operation results in a layer that comprises either a pure metal or a metal/non-metal compound. 
Preferably, the PVD-formed layer comprises a metal comprising at least one of: vanadium, chromium, zirconium, titanium, niobium, molybdenum, hafnium, tantalum, and tungsten. More preferably, the PVD-applied layer is characterized as a nitride, a carbide, an oxide, or a carbonitride. For example, a layer of any of zirconium nitride, chromium nitride, and titanium carbide may be applied, depending on the desired visual effect, e.g., color and/or material properties. Preferably, the PVD operation results in a layer of titanium carbide. This process enhances the aesthetic quality of the golf club head100, while also increasing the durability of the striking face130. Next, a laser etching step216may be performed. The laser etching operation216may preferably be carried out after the scoreline forming process212A,212B, in part so that the scorelines150provide a basis for properly and efficiently aligning the feed direction of the laser. However, the laser etching operation may alternatively be performed before or after the first and second groove milling passes. It is conceived that the second groove milling passes210A,210B may be insufficient to bring the average surface roughness Ra of the scoreline region114into a range compliant with USGA requirements, e.g., Ra1. For example, the second passes210A,210B may actually bring the average roughness of this region114to about 200 μin. The above-described finishing step214in combination with the laser etching step216may then be used to bring the average surface roughness Ra of the scoreline region114down into the permissible ranges encompassed by Ra1. Additional other steps may also be performed. For example, an additional sandblasting operation may be carried out immediately after the second groove milling passes210A and210B. Additional sandblasting may be performed for a variety of reasons, such as providing a particular aesthetic appearance, and deburring and cleaning the striking face after the milling steps are performed. Described above are thus a golf club head100and methods of its manufacture. The golf club head100with an extremely rough toe region110possesses numerous advantages over prior club heads, while nonetheless complying with USGA regulations regarding average surface roughness Ra and average maximum profile height Rz. For example, the visual perception of this increased roughness at toe region110indicates to the golfer that the remainder of the striking face130is similarly roughened and thereby capable of generating more spin on the golf ball, which inspires confidence in the golfer. Further, when in the vicinity of the green, experienced golfers often intentionally strike the golf ball on the toe of the club head as part of, e.g., open face chip shots. The extremely rough toe region110of the golf club head100enables the golfer to impart more spin on the struck golf ball during such shots. For a shot mishit off the toe region110, e.g., a “skulled shot,” that often has higher velocity and lower trajectory than desired, the increased surface roughness of the toe region110may increase the struck golf ball's back spin, thereby reducing the velocity of the mishit shot. And further still, the directionality of the micro-grooves170constituting the surface texture of the toe region110is easily noticeable at address. As a result, it is easier for the golfer to align the golf club100before a shot, and the golfer's confidence in the direction of the shot is correspondingly increased. 
Also envisioned are a golf club head300and a golf club head400, shown in the reference position inFIGS.7and10, respectively. Like the golf club head100, the club head300may include a toe302, a heel304opposite the toe302, a top line306, and a sole308opposite the top line306. The golf club head300may include, adjacent to the toe302, a toe region310, and adjacent to the heel304, it may further possess a heel region312. A hosel320for securing the golf club head300to an associated shaft (not shown) may extend from the heel region312, and the hosel320may in turn define a virtual central hosel axis322. The golf club head300may further include a striking face330at a front portion thereof and a rear face (also not shown) opposite to the striking face330. Similarly, the golf club head400may include a toe402, a heel404opposite the toe402, a top line406, and a sole408opposite the top line406. The club head400may include, adjacent to the toe402, a toe region410, and adjacent to the heel404, it may further possess a heel region412. A hosel420for securing the golf club head400to an associated shaft (not shown) may extend from the heel region412, and the hosel420may in turn define a virtual central hosel axis422. The golf club head400may further include a striking face430at a front portion thereof and a rear face (also not shown) opposite to the striking face430. The golf club heads300and400may be formed of the same materials as the golf club head100, and they may each have a similar mass. That is, the mass of each of the club heads300and400may preferably be between 200 g and 400 g. Even more preferably, the mass of each of the club heads300and400may be between 250 g and 350 g, and yet even more preferably, it may be between 275 g and 325 g. The golf club heads300and400may preferably be of an iron or a wedge type, although they could be putter-type club heads. In particular, the loft angle of each of the club heads300and400may be greater than 15 degrees and preferably be between 23 and 64 degrees. Even more preferably, the loft angle may be between 40 and 62 degrees, and yet even more preferably, this loft angle may be between 46 and 60 degrees. Scorelines350and450may be formed in the striking faces330and430, respectively. The scorelines350and450may be formed in the same manner and have the same dimensions as the scorelines150, and they may thus be designed to be in compliance with USGA regulations. More specifically, these scorelines350and450may preferably have an average width between 0.6 mm and 0.9 mm, more preferably between 0.65 mm and 0.8 mm, and even more preferably between 0.68 mm and 0.75 mm. The scorelines350and450may also have an average depth from the generally planar surface of their respective striking faces of no less than 0.10 mm, preferably between 0.25 mm and 0.60 mm, more preferably between 0.30 mm and 0.55 mm, and most preferably between 0.36 mm and 0.44 mm. The draft angle of the scorelines350and450may be between 0 and 25 degrees, more preferably between 10 and 20 degrees, and most preferably between 13 and 19 degrees. And to further comply with USGA regulations, the groove edge effective radius of the scorelines350and450may be between 0.150 mm and 0.30 mm, more preferably between 0.150 mm and 0.25 mm, and most preferably between 0.150 mm and 0.23 mm. Similar to that described with respect to the golf club head100above, the scorelines350and450are also designed to have a ratio A/(W+S) of less than 0.0030 in². 
As would be understood by one of ordinary skill, all of the above dimensions are determined in accordance with the previously-discussed Rules of Golf. Also like the golf club head100, micro-grooves360and460preferably formed by precision mechanical milling, e.g., CNC milling, may be respectively formed in the striking faces330and430as a pattern of substantially parallel arcuate lines. The micro-grooves360and460may have an average depth taken from the corresponding striking face of no greater than 1100 μin, more preferably between 400 μin and 1100 μin, and most preferably between 600 μin and 1100 μin. The pitch of these micro-grooves360and460, i.e., the distance between centers of adjacent micro-grooves taken in their direction of propagation, is discussed in detail below. As will be understood by those of ordinary skill in the art, the average depth and pitch of the micro-grooves360and460will have a significant impact on the roughness characteristics of the striking faces330and430. In particular, to ensure compliance with USGA regulations, the striking faces330and430may each possess an average surface roughness Ra of preferably less than or equal to 180 μin. More preferably, the average surface roughness Ra may be between 40 μin and 180 μin, even more preferably between 60 μin and 180 μin, and most preferably between 110 μin and 180 μin. And the average maximum profile height Rz of the striking faces330and430may preferably be less than or equal to 1000 μin. More preferably, the average maximum profile height Rz may be between 200 μin and 1000 μin, even more preferably between 400 μin and 900 μin, and most preferably between 500 μin and 800 μin. A method for forming the micro-grooves360of the golf club head300by milling is shown inFIG.8. The club head300may have been previously subjected to various casting, heat treatment, polishing, and preliminary milling operations such as those described in steps200,202,204, and206above. In a first step370, the body of the golf club head300may be placed in a milling position where the hosel axis322is perpendicular to the ground plane. The golf club head300may then be subjected to a first milling pass372, in which the milling tool follows the vertical path373(shown inFIG.7) as it moves across the striking face330from the sole308to the top line306. During this first milling pass372, the milling tool is set at an angle with respect to the plane of the striking face330sufficient to ensure that the milling tool engages the striking face330only along the top half of its circular circumference and thus misses the striking face330along the bottom half of that circumference. In this manner, the milling tool creates a rotex pattern constituted by some of the arcuate micro-grooves360shown inFIG.7. The pitch of the micro-grooves360formed by this first pass372, i.e., the distance between centers of adjacent ones of these micro-grooves360taken in their direction of propagation, may preferably be between 0.01 in and 0.04 in, more preferably between 0.0175 in and 0.0325 in, and even more preferably between 0.025 in and 0.03 in. Thereafter, the golf club head300is subjected to a second milling pass374, in which the milling tool follows the vertical path375(shown inFIG.7) as it moves across the striking face330from the sole308to the top line306. The texture pattern created by the first and second milling passes372and374creates an interference pattern on the striking face330that is composed of smaller diamond shapes. 
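For illustration only: if it is assumed that each spindle revolution of the angled cutter leaves one arc on the face, an assumption not stated in the disclosure, then the pitch quoted above is simply the feed per revolution, which ties the pattern directly to the machine settings for the passes372and374. The spindle speed below is an invented example value.

```python
# Minimal machining sketch relating the micro-groove pitch to feed settings,
# under the assumption that one arc is cut per spindle revolution.
pitch_in = 0.0275               # target pitch, middle of the 0.025-0.03 in range above
spindle_rpm = 1200.0            # assumed spindle speed

feed_per_rev_in = pitch_in                       # one arc per revolution (assumed)
feed_rate_ipm = feed_per_rev_in * spindle_rpm    # table feed along path 373 or 375

print(f"{feed_rate_ipm:.0f} in/min at {spindle_rpm:.0f} rpm gives a {pitch_in:.4f} in pitch")
```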
Relative to the vertical path375, the path373of the first milling pass372may be offset toward the toe302between 3 mm and 6 mm, more preferably between 4.5 mm and 5.5 mm, and most preferably by 5 mm. This offset may be visually evident proximate the heel region312, at which there is a noticeable break in the texture pattern of the striking face330that corresponds to the offset of the milling tool. As in the first milling pass372, the milling tool is set at a sufficient angle with respect to the plane of the striking face330during the second milling pass374, thereby creating another rotex pattern constituted by the remainder of the micro-grooves360shown inFIG.7. Also like the first milling pass, the pitch of the micro-grooves360formed by this second pass374, i.e., the distance between centers of adjacent ones of these micro-grooves360taken in their direction of propagation, may preferably be between 0.01 in and 0.04 in, more preferably between 0.0175 in and 0.0325 in, and even more preferably between 0.025 in and 0.03 in. After the first and second milling passes372and374, the golf club head300may then be subjected to various additional processes such as the scoreline formation, optional treatment, and laser etching steps previously described in connection with steps212,214, and216.FIG.9Aillustrates a magnified portion of the striking face330shown inFIG.7.FIG.9Bshows a cross-section of the finished striking face330taken along the plane9B-9B inFIG.9A. Because of the sequential first and second milling passes372and374that are offset from one another, the distance between adjacent peaks of the micro-grooves360varies along the striking face330from the top line306to the sole308. A method for forming the micro-grooves460of the golf club head400by milling is shown inFIG.11. The club head400may have been previously subjected to various casting, heat treatment, polishing, and preliminary milling operations such as those described in steps200,202,204, and206above. As with the golf club head300, in a first step470, the body of the club head400is placed in a milling position where the hosel axis422is perpendicular to the ground plane. The club head400is then subjected to a first milling pass472, in which the milling tool follows the vertical path473as it moves across the striking face430from the sole408to the top line406. During this first milling pass472, the milling tool is set at an angle with respect to the plane of the striking face430sufficient to ensure that the milling tool engages the striking face430only along the top half of its circular circumference and thus misses the striking face430along the bottom half of that circumference. In this manner, the milling tool creates a rotex pattern constituted by some of the micro-grooves460shown inFIG.10. Like the step372, the pitch of the micro-grooves460formed by this first pass472, i.e., the distance between centers of adjacent ones of these micro-grooves460taken in their direction of propagation, may preferably be between 0.01 in and 0.04 in, more preferably between 0.0175 in and 0.0325 in, and even more preferably between 0.025 in and 0.03 in. Thereafter, the club head400is subjected to a second milling pass474, in which the milling tool follows the vertical path475as it moves across the striking face430from the sole408to the top line406. The texture pattern created by the first and second milling passes472and474creates an interference pattern on the striking face430that is composed of larger diamond shapes. 
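As an aside, purely for illustration: the varying peak spacing noted above follows from superimposing two identical periodic crest patterns with a relative shift. In the one-dimensional toy model below, which is not the disclosure's own model of the cutter path, adjacent peak spacings alternate between the local shift and the pitch minus that shift; because the arcs are curved, the effective local shift between the two pass patterns changes across the face, and the spacing between adjacent peaks changes with it. The pitch and shift values are invented.

```python
pitch = 0.7                                   # micro-groove pitch in mm (~0.0275 in)
for local_shift in (0.10, 0.25, 0.35):        # assumed effective shift between the two passes, mm
    crests_first_pass = [k * pitch for k in range(6)]
    crests_second_pass = [k * pitch + local_shift for k in range(6)]
    merged = sorted(crests_first_pass + crests_second_pass)
    spacings = [round(b - a, 3) for a, b in zip(merged, merged[1:])]
    print(f"shift {local_shift:.2f} mm -> adjacent peak spacings {spacings[:4]} ...")
```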
Relative to the vertical path475, the path473of the first milling pass472may be offset toward the toe402between 1 mm and 3 mm, more preferably between 1.5 mm and 2.5 mm, and most preferably by 2 mm. This offset may be visually evident proximate the heel region412, at which there is a noticeable break in the texture pattern of the striking face430that corresponds to the offset of the milling tool. As in the first milling pass472, the milling tool is set at an angle with respect to the plane of the striking face430during the second milling pass, thereby creating another rotex pattern constituted by the remainder of the micro-grooves460shown inFIG.10. Also like the first milling pass472, the pitch of the micro-grooves460formed by this second pass474, i.e., the distance between centers of adjacent ones of these micro-grooves460taken in their direction of propagation, may preferably be between 0.01 in and 0.04 in, more preferably between 0.0175 in and 0.0325 in, and even more preferably between 0.025 in and 0.03 in. After the first and second milling passes472and474, the golf club head400may be subjected to various additional processes such as the scoreline formation, optional treatment, and laser etching steps previously described in connection with steps212,214, and216.FIG.12Aillustrates a magnified portion of the striking face430shown inFIG.10.FIG.12Bshows a cross-section of the finished striking face430taken along the plane12B-12B inFIG.10. Because of the sequential first and second milling passes472and474that are offset from one another, the distance between adjacent peaks of the micro-grooves460varies along the striking face430from the top line406to the sole408. The respective combinations of the first milling passes372,472with the second milling passes374,474thus create interference patterns on the striking faces330and430that are constituted by diamonds. The diamonds are created by the grooves from the second milling passes374,474being superimposed over the grooves from the first milling passes372,472, respectively. These interference patterns each create more consistent roughness across the corresponding striking face, including having peak roughness at locations on the face where impact is most common, e.g., along the vertical centerline of the striking face. For example, as shown inFIG.14, average maximum profile height Rz peaks for both the striking face330, i.e., 5 mm offset, and the striking face430, i.e., 2 mm offset, around the center of the striking face. The interference patterns described above also create more spin from the rough and in wet conditions, as is evidenced by the increase in average maximum profile height Rz for the striking faces330and430compared to a striking face with no offset. As mentioned previously, the interference pattern on the striking face330is constituted by smaller diamonds. When the golf club head300is in the closed, or normal position at address, the directionality of this interference pattern thus faces toward the target. This is particularly advantageous in the context of lower-lofted clubs, i.e., clubs with a loft angle of 52 degrees and below, which often face the golf ball at address with the club head in this closed, or normal position. The club head300may thus be such a lower-lofted club head. The interference pattern on the striking face430is constituted by larger diamonds, however. Higher lofted clubs, i.e., those with a loft angle of 54 degrees and greater, often face the golf ball at address with the club face in an open position. 
In prior art golf clubs, this open position, which is desired for many sand bunker shots, lob shots, and chip shots, results in the club face appearing offline, e.g., aimed to the right of the target. The directionality of the interference pattern on the striking face430, however, cures this visual issue by creating the appearance that the micro-grooves460are directed toward the target, even though the face is open. The golf club head400may thus be such a higher-lofted club head. In the foregoing discussion, the present invention has been described with reference to specific exemplary aspects thereof. However, it will be evident that various modifications and changes may be made to these exemplary aspects without departing from the broader spirit and scope of the invention. For example, althoughFIG.6Eshows an embodiment in which the micro-grooves from the first milling pass208B are removed in the scoreline region114by the second groove milling pass210B, in some implementations, the grooves from the second groove milling pass210B may be entirely superimposed onto the grooves of the first groove milling pass208B. As a result, both groove patterns may be visually discernible in the scoreline region114while still maintaining Ra1 values in the scoreline region114and Ra2 values in the toe region110, as shown inFIG.13. Accordingly, the foregoing discussion and the accompanying drawings are to be regarded as merely illustrative of the present invention rather than as limiting its scope in any manner.
39,513
11857850
For simplicity and clarity of illustration, the drawing figures illustrate the general manner of construction, and descriptions and details of well-known features and techniques may be omitted to avoid unnecessarily obscuring the drawings. Additionally, elements in the drawing figures are not necessarily drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve understanding of different embodiments. The same reference numerals in different figures denote the same elements. The terms “first,” “second,” “third,” “fourth,” and the like in the description and in the claims, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the golf club attachment mechanism and related methods described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms “include,” and “have,” and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. The terms “left,” “right,” “front,” “back,” “top,” “bottom,” “over,” “under,” and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the golf club attachment mechanism and related methods described herein are, for example, capable of operation in other orientations than those illustrated or otherwise described herein. DESCRIPTION In one embodiment, a golf club head includes: a body having a strike face with channels; and at least one insert located within at least one of the channels. In this embodiment, the channel in which the insert is located has a groove, and the insert forms at least a portion of the groove. Other examples, embodiments, and related methods are further described below. Turning now to the figures,FIG.1depicts a front view of golf club100, according to a first embodiment. Golf club100can be an iron-type golf club head, such as a 1-iron, a 2-iron, a 3-iron, a 4-iron, a 5-iron, a 6-iron, a 7-iron, an 8-iron, a 9-iron, a sand wedge, a lob wedge, a pitching wedge, an n-degree wedge (e.g., 44 degrees (°), 48°, 52°, 56°, 60°, etc.), etc. In a different embodiment, golf club100can also be a wood-type golf club, a hybrid-type golf club, or a putter-type golf club. Golf club100includes golf club head body110and shaft120coupled to golf club head body110. In the illustrated embodiment ofFIG.1, golf club head body110includes hosel114to which shaft120is coupled. In a different embodiment, golf club head body110has a hole, instead of hosel114, to which shaft120is coupled. Golf club head body110includes toe portion115and heel portion116, where hosel114is located at heel portion116. Golf club head body110also includes a perimeter121comprising sole117at a bottom portion of golf club head body110and also comprising top rail118at a top portion of golf club head body110. 
Golf club head body110can also include notch119at heel portion116. Golf club head body110further includes back face124and front face111opposite back face124. Front face111can also be referred to as a strike face. The strike face can be an integral part of golf club head body110, or the strike face can be a separate piece from, or an insert for, golf club head body110. The strike face includes one or more grooves112, including groove113. Grooves112can extend across the strike face from toe portion115of golf club head body110to heel portion116of golf club head body110. Grooves112can also be stacked vertically above one another from sole117to top rail118, as illustrated inFIG.1. As explained in more detail in the subsequent figures, the strike face includes channels and inserts located within the channels. In one embodiment, the inserts define grooves112, and in a different embodiment, the channels and inserts define different portions of grooves112. FIG.2depicts a cross-sectional view of groove113of golf club head body110ofFIG.1, taken along a section line2-2inFIG.1. As depicted inFIG.2, golf club head body110includes a channel210that is formed in strike face or front face111, and insert220is located within channel210.FIG.3depicts a cross-sectional view of channel210, andFIG.4depicts a cross-sectional view of insert220. The cross-sections ofFIGS.2-4are taken along the widths of groove113, channel210, and insert220. Insert220can extend along the entire length of channel210. Grooves112(FIG.1), including groove113(FIGS.1,2, and4) can be compliant or non-compliant with, for example, the regulations regarding grooves that were adopted by the United States Golf Association (USGA) on Aug. 5, 2008. As an example, when compliant with these regulations, grooves112, including groove113: (1) are straight and parallel with each other; (2) have a symmetrical cross-section and have sidewalls that do not converge toward the groove opening; (3) have a width, spacing, and cross-section that are consistent throughout the impact area of front face111; (4) have a width that does not exceed 0.940 millimeters (mm) using the USGA's thirty degree method of measurement, and where less than half of the widths of grooves112exceed 0.889 mm using the same measurement technique; (5) have a distance between adjacent grooves that is not less than three times the maximum width of the adjacent grooves minus 0.203 mm and that is not less than 1.854 mm, and where less than half of the distances between adjacent ones of grooves112are less than three times the maximum width of the adjacent grooves and are less than 1.905 mm; (6) have a depth that does not exceed 0.559 mm, and where less than half of the depths of grooves112exceed 0.508 mm; (7) have a cross-sectional area divided by a groove pitch (i.e., groove width plus spacing between adjacent grooves) that does not exceed 0.081 mm, and where less than half of the cross-sectional areas divided by the respective groove pitches exceed 0.076 mm; (8) have a range of widths that do not exceed 0.254 mm; and (9) have a range of depths that do not exceed 0.254 mm. Additional details regarding grooves112are explained in the subsequent figures. In the embodiment illustrated inFIG.2, insert220is located at and below front face111and is not located above front face111. In one example of this embodiment, insert220can be located substantially planar to front face111. 
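As a brief aside, purely for illustration, the numeric conditions (4) through (9) listed above can be encoded as a simple check over a set of measured grooves. The sample measurements below are invented, conditions (1) through (3) are geometric and are not checked, and each groove's own width stands in for "the maximum width of the adjacent grooves" in condition (5).

```python
# Partial, illustrative encoding of the numeric groove conditions (4)-(9); all values in mm.
grooves = [  # width W, depth D, spacing S to the next groove, cross-sectional area A
    {"W": 0.72, "D": 0.41, "S": 2.80, "A": 0.24},
    {"W": 0.71, "D": 0.40, "S": 2.82, "A": 0.23},
    {"W": 0.73, "D": 0.42, "S": 2.79, "A": 0.24},
]

def less_than_half(flags):
    return sum(flags) < len(flags) / 2.0

widths = [g["W"] for g in grooves]
depths = [g["D"] for g in grooves]
checks = {
    "(4) width":   all(w <= 0.940 for w in widths)
                   and less_than_half([w > 0.889 for w in widths]),
    "(5) spacing": all(g["S"] >= max(3 * g["W"] - 0.203, 1.854) for g in grooves)
                   and less_than_half([g["S"] < 3 * g["W"] and g["S"] < 1.905 for g in grooves]),
    "(6) depth":   all(d <= 0.559 for d in depths)
                   and less_than_half([d > 0.508 for d in depths]),
    "(7) A/pitch": all(g["A"] / (g["W"] + g["S"]) <= 0.081 for g in grooves)
                   and less_than_half([g["A"] / (g["W"] + g["S"]) > 0.076 for g in grooves]),
    "(8) width range": max(widths) - min(widths) <= 0.254,
    "(9) depth range": max(depths) - min(depths) <= 0.254,
}
for name, ok in checks.items():
    print(f"{name:16s} {'pass' if ok else 'fail'}")
```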
Also in this embodiment, insert220can form a portion of front face111, or insert220can be devoid of forming a portion of front face111. Furthermore, in this embodiment, the radius of the edge of the groove can be formed entirely by insert220, or the radius of the edge of the groove can be formed partially by front face111and partially by insert220. In another embodiment, insert220can be located above and below front face111. In a different embodiment, insert220is located only below front face111. In this different embodiment, the radius of the edge of the groove can be formed entirely by front face111, or the radius of the edge of the groove can be formed partially by front face111and partially by insert220. In one embodiment, each of grooves112(FIG.1) is formed by a separate pair of channel210and insert220. In this embodiment, each channel210has a single one of insert220and grooves112(FIG.1), and each insert220forms at least a portion of each of grooves112. In a different embodiment, golf club head body110(FIG.1) has at least one channel, similar to channel210, and at least one insert, similar to insert220, is located within the at least one channel. In this different embodiment, the at least one channel has a single groove, similar to grooves113. In another embodiment, golf club head body110(FIG.1) has channels, similar to channel210; a first insert, similar to insert220, located within a first one of the channels; and a second insert, similar to insert220, located within a second one of the channels. In this embodiment, each of the first and second ones of the channels has a single groove. Also, the first insert forms the single groove in the first one of the channels, and the second insert forms the single groove in the second one of the channels. As illustrated inFIGS.2-4, groove113is symmetrical about axis290, which is an axis that is substantially perpendicular to the widths of groove113, channel210, and insert220. In this embodiment, channel210and insert220are also symmetrical about axis290. In other embodiments, some of which are illustrated in subsequent figures, one or more of channel210and insert220are asymmetrical about axis290while groove113is symmetrical about axis290. Similarly, as illustrated inFIGS.2and4, groove113comprises a first cross section across a width of groove113, and as illustrated inFIGS.2and3, channel210comprises a second cross section across a width of channel210that is non-proportional to the first cross section. In other embodiments, some of which are illustrated in subsequent figures, the second cross section can be proportional to the first cross section. In addition to the first cross section of groove113and the second cross section of channel210, insert220comprises a third cross section across a width of insert220, as illustrated inFIGS.2and4. Each of these three cross sections comprises a cross-sectional area. In the embodiment ofFIGS.2-4, the cross-sectional area of the second cross section of channel210is approximately equal to a sum of the cross-sectional areas of the first and third cross sections of groove113and insert220, respectively. In this embodiment, insert220can be secured within channel210by a friction fit. In some embodiments, an adhesive is disposed within channel210to further secure insert220within channel210. As an example, the adhesive can comprise Loctite® adhesives (from Henkel Corporation in Gulph Mills, Pennsylvania), epoxies, and other types of adhesives. 
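As a small numerical aside, with values invented for illustration, the area relationship just described can be checked directly: in the friction-fit arrangement the channel cross section is approximately the insert cross section plus the groove cross section, while a channel cut slightly oversize, a variant described below, leaves a residual region whose area is available for adhesive.

```python
def residual_area(channel_area_mm2, insert_area_mm2, groove_area_mm2):
    """Cross-sectional area left over once the insert is seated and the groove is formed."""
    return channel_area_mm2 - (insert_area_mm2 + groove_area_mm2)

groove_area = 0.24      # mm^2, invented groove cross section
insert_area = 1.10      # mm^2, invented insert cross section

friction_fit_channel = insert_area + groove_area   # channel sized to match (no residual gap)
oversize_channel = friction_fit_channel + 0.20     # channel cut 0.20 mm^2 oversize for adhesive

for label, channel_area in (("friction fit", friction_fit_channel),
                            ("oversize (adhesive)", oversize_channel)):
    gap = residual_area(channel_area, insert_area, groove_area)
    print(f"{label}: residual region = {gap:.2f} mm^2")
```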
When an adhesive is used within channel210, insert220does not need to have a friction fit with channel210. Accordingly, in some embodiments, no adhesive is used within channel210when insert220is secured within channel210by a friction fit. In other embodiments, where insert220does not have a friction fit with channel210, an adhesive is used within channel210. In a different embodiment illustrated in a subsequent figure, the cross-sectional area of the cross section of the channel (e.g., channel1110inFIGS.11and14) is greater than a sum of the cross-sectional areas of the first and third cross sections of groove113and insert220, respectively. In this different embodiment, a greater amount of adhesive can be used within the channel than for channel210to improve the coupling between the channel and insert220. As illustrated inFIGS.2and4, insert220is a single or unitary piece. In other embodiments, some of which are illustrated in the subsequent figures, the insert can include two or more pieces. In the same or different embodiment, some of the channels in a golf club head body can have inserts while other channels in the golf club head body do not have any inserts. Also, for the channels that do have one or more inserts, some of the channels can have a different number of inserts or a different type or shape of insert than other channels, or each of the channels that have inserts can have the same insert(s). As also illustrated inFIGS.2and4, insert220can form all of groove113. In other embodiments, some of which are illustrated in the subsequent figures, the insert can form part of the groove, and the channel can form a different part of the groove. In some embodiments, insert220can be referred to as a preform because insert220is formed before being inserted into channel210. In the same or different embodiment, groove113can be formed into the preform or insert220before or after forming the preform or insert220and also before or after inserting the preform or insert220into channel210. In some embodiments, insert220can have more than one groove, and different inserts of a golf club head can have a different number of grooves. Regardless of whether the strike face of golf club head body110(FIG.1) is integral with or a separate piece from golf club head body110, the strike face can comprise a first material such as stainless steel, titanium, graphite, a composite of metallic and non-metallic materials, and the like. In some embodiments, insert220also comprises the first material, but in other embodiments, insert220comprises a second material different from the first material. As an example, the second material can be softer than, harder than, or the same hardness as the first material. In one embodiment illustrated in a subsequent figure (e.g.,FIG.27), the second material wears faster than the first material. Examples of the second material include aluminum, a resin, a plastic, titanium, a different grade of stainless steel than used for the first material, a composite, and the like. In one embodiment, the first material comprises a first grade of stainless steel, and the second material comprises a second grade of stainless steel that is more easily machined and/or extruded than the first grade of stainless steel. In another embodiment, the second material comprises a tacky or sticky material. In some embodiments, insert220is not permanent and can be replaceable so that new inserts and/or different inserts can be placed into the channels. 
In other embodiments, insert220is permanently affixed within channel210. Turning to the next illustrated embodiment,FIG.5depicts a cross-sectional view of channel510of a golf club head, andFIG.6depicts a cross-sectional view of insert620.FIG.7depicts a cross-sectional view of insert620located within channel510. This illustrated embodiment is similar to the embodiment illustrated atFIGS.2-4, but in the embodiment ofFIGS.5-7, channel510and insert620are asymmetric about axis290while groove113is symmetric about axis290. In another illustrated embodiment,FIG.8depicts a cross-sectional view of channel810of a golf club head, andFIG.9depicts a cross-sectional view of insert920.FIG.10depicts a cross-sectional view of insert920located within channel810. This embodiment is similar to the embodiment ofFIGS.2-4, but in the embodiment ofFIGS.8-10, channel810and insert920are asymmetric about axis290while groove113is symmetric about axis290. In a further illustrated embodiment,FIG.11depicts a cross-sectional view of channel1110of a golf club head, andFIG.12depicts a cross-sectional view of insert1220.FIG.13depicts a cross-sectional view of insert1220located within channel1110. This embodiment is also similar to the embodiment ofFIGS.2-4. The embodiments ofFIGS.8-10andFIGS.11-13, however, can more securely hold their respective inserts in their respective channels than the embodiment ofFIGS.2-3because of the configurations of channel810inFIG.8and channel1110inFIG.11. For example, the opening of channel810(FIG.8) at front face111is narrower than a bottom of channel810, and the opening of channel1110(FIG.11) at front face111is also narrower than a bottom of channel1110. Correspondingly, a top of insert920(FIG.9) is narrower than a bottom of insert920, and a top of insert1220(FIG.12) is narrower than a bottom of insert1220. In a different embodiment, insert1220inFIG.12can be replaced with insert3220inFIG.32. Insert3220(FIG.32) is similar to insert1220(FIG.12), except that insert3220comprises two pieces, namely, inserts3221and3222(FIG.32). The use of insert3220(FIG.32), instead of insert1220(FIG.12), in channel1110(FIG.11) can facilitate the insertion of the insert into the channel. Next,FIG.14depicts a cross-sectional view of insert220ofFIG.4located within channel1110ofFIG.11, according to another embodiment. This embodiment is also similar to the embodiment ofFIGS.2-4. In the embodiment ofFIG.14, however, the cross-sectional area of channel1110is greater than the sum of the cross-sectional areas of insert220and groove113. In particular, the cross-sectional area of channel1110is greater than the sum of the cross-sectional areas of insert220and groove113by a cross-sectional area of one or more gaps or residual regions1444. In this embodiment, an adhesive can be disposed in channel1110, and at least a portion of the adhesive can be located within residual regions1444. The presence of residual regions1444permits the use of more adhesive in this embodiment ofFIG.14than in the embodiment ofFIGS.2-4and other embodiments. The use of more adhesive can more securely hold insert220within channel1110. As an example, a portion of residual regions1444can have a width of approximately 0.1 mm to 0.3 mm. In this embodiment, the adhesive can be an epoxy. Turning to a different embodiment,FIG.15depicts a cross-sectional view of insert1520, andFIG.16depicts a cross-sectional view of insert1520located within channel210ofFIG.3. 
This embodiment is also similar to the embodiment of FIGS. 2-4, but in the embodiment of FIGS. 15-16, groove 1613 is shallower than groove 113 in FIGS. 2-4. FIG. 17 depicts a cross-sectional view of channel 1710 of a golf club head, according to a further embodiment, and FIG. 18 depicts a cross-sectional view of insert 1820 in this embodiment. FIG. 19 depicts a cross-sectional view of insert 1820 located within channel 1710. This embodiment is similar to the embodiment of FIGS. 2-4, but many differences between these two embodiments also exist. For example, in FIGS. 17-19, channel 1710 and insert 1820 are asymmetric about axis 290. Also, a portion of channel 1710 forms a portion of groove 113, and a portion of insert 1820 forms a different portion of groove 113. In one example of the illustrated embodiment of FIG. 19, an adhesive is used to couple insert 1820 to channel 1710. In this embodiment, insert 1820 can comprise a harder or softer material than strike face 111 of the golf club head body that forms channel 1710. For example, if a first edge of groove 113 that is closer to the sole of the golf club head body typically wears faster than a second edge of groove 113 that is closer to the top rail of the golf club head body, then insert 1820 can be located at the first edge of groove 113 and can comprise a harder material than strike face 111. In this example, the first and second edges of groove 113 can wear more evenly with respect to each other. In a different example, if the first edge of groove 113 typically wears slower than the second edge of groove 113, then insert 1820 can still be located at the first edge of groove 113 if insert 1820 comprises a softer material than the strike face. Other variations are also possible. Referring to the next embodiment, FIG. 20 depicts a cross-sectional view of insert 2020, and FIG. 21 depicts a cross-sectional view of insert 2020 located within channel 210 of FIG. 3. This embodiment is also similar to the embodiment of FIGS. 2-4, but in FIGS. 20-21, insert 2020 comprises two pieces or inserts 2021 and 2022. In the embodiment of FIGS. 20-21, each piece of insert 2020 forms a different portion of groove 2113. Groove 2113 can be shallower than groove 113 in FIGS. 2 and 3. In a different embodiment, insert 2020 can comprise three or more pieces, and each piece of insert 2020 can form a portion of groove 2113. In another embodiment, insert 2020 can comprise two or more pieces, and one or more of the pieces of insert 2020 forms groove 2113 while one or more other pieces of insert 2020 do not form a portion of groove 2113. As another variation, one or more portions of the channel can also form a portion of the groove. Furthermore, the different pieces of insert 2020 can comprise the same material or can comprise different materials. For example, for reasons similar to those explained with reference to the previous embodiment of FIGS. 17-19, insert 2021 can comprise a first material, and insert 2022 can comprise a second material that is harder than the first material. In one example of the illustrated embodiment in FIG. 21, an adhesive can be used to couple inserts 2021 and 2022 to channel 210. Next, FIG. 22 depicts a cross-sectional view of channel 2210 of a golf club head, according to another embodiment, and FIG. 23 depicts a cross-sectional view of insert 2320 in this embodiment. FIG. 24 depicts a cross-sectional view of insert 2320 located within channel 2210. This embodiment is also similar to the embodiment of FIGS. 2-4, but in the embodiment of FIGS. 22-24, groove 2313 comprises a cross section that can be substantially proportional to a cross section of channel 2210.
In one example of the illustrated embodiment ofFIG.24, an adhesive can be used to couple insert2320to channel2210. FIG.25depicts a cross-sectional view of insert2520according to another embodiment, andFIG.26depicts a cross-sectional view of insert2520located within channel210ofFIG.3. This embodiment is also similar to the embodiment ofFIGS.2-4, but as illustrated inFIGS.25-26, groove2513has a different shape than groove113inFIGS.2and4. Insert2520can comprise convex portions. The convex portions can be positioned opposite to one another and spaced apart from one another. The groove2513formed by the insert2520can comprise concave sides. In one embodiment ofFIGS.25-26, among other embodiments, the material used for insert2520can be softer than the material used for strike face or front face111. In this embodiment,FIG.27depicts a cross-sectional view of insert2520after being worn down such that the edges of grooves2513can become sharper as an individual uses the golf club more. In another embodiment, an upper portion of insert2520can comprise a softer material than a lower portion of insert2520to better control the amount of “wearing down” that insert2520will exhibit over time. As an example, the harder material of the lower portion of insert2520can be similar to the material used for front face111, or the harder material of the lower portion of insert2520can be a different material that is either harder or softer than the material used for front face111. Furthermore, insert2520can be divided into two or more pieces, similar to the embodiment illustrated inFIGS.20and21for reasons similar to those discussed with respect toFIGS.20-21and/orFIGS.17-19. FIG.28depicts a cross-sectional view of channel2810of a golf club head, according to another embodiment, andFIG.29depicts a cross-sectional view of insert220ofFIG.4located within channel2810. This embodiment is similar to the embodiment ofFIGS.2-4, but as illustrated inFIG.29, a gap or residual region2844exists in channel2810after insert220is inserted into channel2810, which is similar to the embodiment ofFIG.14.FIG.30depicts a cross-sectional view of insert220ofFIG.4located within channel3010, which is a different embodiment of channel2810inFIGS.28and29. One or more gaps or residual regions3044and3045exist in channel3010after insert220is inserted into channel3010. FIG.31depicts flow chart3100of a method of manufacturing a golf club according to a further embodiment. As an example, the golf club of flow chart3100can be similar to golf club100inFIG.1. Flow chart3100includes providing a body having a strike face with channels (block3110). As an example, the body of block3110can be similar to golf club head body110inFIG.1, and the strike face and channels of block3110can be similar to front face111and channel210inFIGS.2and3. The channels of block3110can also be similar to the other channels described herein. In one embodiment of block3110, the body can be cast, forged, or machined. In the same or different embodiment of block3100, the strike face of the golf club head body can have one or more channels, and at least one of the channels can be symmetrical or non-symmetrical. The other variations of the body, strike face, and channel(s) described above inFIGS.1-30can also be part of block3110. Flow chart3100also includes providing at least one preform or insert (a block3120). As an example, the at least one preform or insert of block3120can be similar to insert220ofFIGS.2and4. 
The preform(s) or insert(s) of block3120can also be similar to the other preforms and inserts described herein. The sequence of blocks3110and3120can be reversed. The preform(s) and insert(s) can be extruded, cast, forged, injected, deposited (i.e., vapor deposition, sputtering, etc.), or machined. In one embodiment, the preform(s) or insert(s) are extruded from aluminum. In this embodiment, block3120can include providing the preform(s) or insert(s) to comprise a different material from the material of the strike face. In a different embodiment, block3120can include providing the preform(s) or insert(s) to comprise the same material as the strike face. In the same or different embodiment, the preform(s) or insert(s) of block3120can have a size that is larger than a size of the channels (block3110) into which the preform(s) or insert(s) are inserted. In one embodiment, the preform(s) or insert(s) can have a width that is slightly larger than the width of the channels. For example, the channels can have a substantially constant width of approximately 0.2 mm to 0.010 mm. In this example, a bottom of the insert(s) can have approximately the same width as the width of the channels, but the width of the insert(s) can taper outwardly by approximately 0.5-2.0 degrees (in one embodiment) to slightly and/or gradually increase the width of the insert(s) from the bottom of the insert(s) to the top of the insert(s). Both sides of the insert(s) can be tapered, or only one side of the insert(s) can be tapered. In another example, the insert(s) can have a substantially constant width, and the bottom of the channels can have approximately the same width as the width of the insert(s), but the width of the channels can taper inwardly by approximately 0.5-2.0 degrees (in one embodiment) to slightly and/or gradually decrease the width of the channels from the bottom of the channels to the top of the channels. Both sides of the channels can be tapered, or only one side of the channels can be tapered. In a further example, the insert(s) can taper outwardly from bottom to top, and the channels can taper inwardly from bottom to top. In another example, one or more of the length and width of the inserts and/or the channels is tapered. In a further embodiment, one or more of the length and width of the inserts is not tapered, but is slightly larger than the corresponding length and/or width of the channels by a magnitude that is similar to what is described above for the tapered embodiment. The other variations of the insert(s) and preform(s) described above inFIGS.1-30can also be part of block3120. Flow chart3100continues with an optional heating of the body (optional block3130) and an optional cooling of the insert(s) or preform(s) (optional block3140). The sequence of blocks3130and3140, if used in flow chart3100, can be reversed. In a different embodiment, only one of blocks3130and3140or none of blocks3130or3140is used in flow chart3100. As an example where the channel length is more than three times the channel width, the body and/or strike face can be heated to a temperature of approximately thirty-five to one hundred fifty degrees Celsius above room temperature in block3130, or cooled to a temperature of approximately thirty-five to one hundred fifty degrees Celsius (° C.) below room temperature in block3140. 
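Both the taper described above and the optional heating/cooling steps work by managing a small interference between insert and channel. The sketch below estimates the width gained at the top of a tapered insert and the width change produced by heating or chilling a part; the dimensions, the 1 degree taper, and the thermal-expansion coefficients are illustrative assumptions (representative handbook values), not figures prescribed by the text:

```python
import math

# Sketch of the two interference mechanisms described above.  The 2 mm depth,
# 1 degree taper, 2 mm channel width, temperature swings, and expansion
# coefficients are all illustrative assumptions.

def taper_width_gain(depth_mm, taper_deg, sides_tapered=2):
    """Extra width at the top of an insert whose side(s) taper outward from the bottom."""
    return sides_tapered * depth_mm * math.tan(math.radians(taper_deg))

def thermal_width_change(width_mm, delta_t_c, alpha_per_c):
    """Linear-expansion estimate of the change in a channel or insert width."""
    return width_mm * alpha_per_c * delta_t_c

# A 2 mm deep insert tapered outward 1 degree on both sides is ~0.07 mm wider at its top.
print(round(taper_width_gain(2.0, 1.0), 3))

ALPHA_17_4_SS = 10.8e-6   # per deg C, representative handbook value (assumption)
ALPHA_ALUMINUM = 23.0e-6  # per deg C, representative handbook value (assumption)

# Heating a 2 mm wide stainless channel region 100 C above room temperature opens it slightly,
# and chilling a 2 mm wide aluminum insert 30 C below room temperature shrinks it; both effects
# ease insertion before the parts return to room temperature and lock together.
channel_gain = thermal_width_change(2.0, +100.0, ALPHA_17_4_SS)
insert_shrink = thermal_width_change(2.0, -30.0, ALPHA_ALUMINUM)
print(round(channel_gain - insert_shrink, 4))   # total assembly clearance in mm (a few microns)
```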
More specifically, in one embodiment where the strike face comprises 17:4 stainless steel (i.e., 17 percent (%) chromium and 4% nickel) and where the insert comprises tungsten carbide, the strike face can be heated to a temperature of approximately 100-150° C. above room temperature. In another embodiment where the strike face comprises 17:4 stainless steel and where the insert comprises aluminum, the strike face can be heated to a temperature of approximately 35-100° C. above room temperature, and the insert can be cooled to a temperature of approximately 5-50° C. below room temperature. In a further embodiment where the strike face comprises 17:4 stainless steel and where the insert comprises stainless steel, the strike face can be heated to a temperature of approximately 35-100° C. above room temperature, and the insert can be cooled to a temperature of approximately 35-100° C. below room temperature. In these examples, the width of the insert or preform can be the same as or slightly larger than the width of the channel while the length of the insert and the channel remain approximately the same. Also in these examples, the insert or preform can be a single piece or multiple pieces. One or more of blocks3130and3140can be particularly useful for the embodiments ofFIGS.8-10,FIGS.11-13, and any of the embodiments with a friction fit. Flow chart3100also includes an optional disposing of an adhesive in the channel(s) (optional block3150). As an example, the adhesive of block3150can be similar to the adhesive described with respect toFIGS.2-4. If used in flow chart3100, block3150can occur before, after, or during, one or more of blocks3130and3140. Next, flow chart3100continues with inserting the preform(s) within the channel(s) (block3160). When blocks3130and3140are used, block3160occurs during or after blocks3130and3140. When block3130is used, block3160can include inserting the insert(s) or preform(s) within the channel(s) before the body cools down to room temperature. When block3140is used, block3160can include inserting the insert(s) or preform(s) within the channel(s) before the body warms up to room temperature. The preform(s) can be secured within the channel(s) by using a friction fit regardless of whether block3150is used in flow chart3100. In some embodiments, after block3160, the channel(s) comprise a single groove, and the insert(s) or preform(s) form at least a portion of the single groove. Also, as explained above, the single groove can be symmetrical. Flow chart3100also includes coupling a shaft to the body (block3170). As an example, the shaft of block3170can be similar to shaft120ofFIG.1. Block3170can occur before, after, or during any of blocks3120,3130,3140,3150, and3160. Although golf club heads with grooves and methods of manufacture thereof have been described with reference to specific embodiments, various changes may be made without departing from the scope of the golf club head with grooves and related methods. Various examples of such changes have been given in the foregoing description. As another example, the shapes and configurations of the channels, inserts, and grooves can vary from the specific shapes and configurations disclosed herein. For instance, the configuration of the channels and inserts can be designed to keep the insert within the channel when a golf ball impacts the strike face, such as where the channel has curved sidewalls and where the insert has complementarily curved sidewalls. 
As a further example, a golf club head can have more than one shape or configuration of channels and/or inserts while having substantially constant or uniform grooves. Moreover, one or more of the features of one or more embodiments disclosed herein can be combined with some or all of the features of a different embodiment disclosed herein. Accordingly, the disclosure of embodiments is intended to be illustrative of the scope of the application and is not intended to be limiting. It is intended that the scope of this application shall be limited only to the extent required by the appended claims. Therefore, the detailed description of the drawings, and the drawings themselves, disclose at least one preferred embodiment of a golf club head with grooves and methods of manufacture thereof, and may disclose alternative embodiments of the same. Replacement of one or more claimed elements constitutes reconstruction and not repair. Additionally, benefits, other advantages, and solutions to problems have been described with regard to specific embodiments. The benefits, advantages, solutions to problems, and any element or elements that may cause any benefit, advantage, or solution to occur or become more pronounced, however, are not to be construed as critical, required, or essential features or elements of any or all of the claims. Moreover, embodiments and limitations disclosed herein are not dedicated to the public under the doctrine of dedication if the embodiments and/or limitations: (1) are not expressly claimed in the claims; and (2) are or are potentially equivalents of express elements and/or limitations in the claims under the doctrine of equivalents.
32,951
11857851
DETAILED DESCRIPTION OF THE INVENTION The present invention is directed to a golf club head, and particularly a putter head with improved structural support members20. The putter head10comprises a face16, a sole portion12extending from a lower edge18of the face16, and a top or crown portion14extending from an upper edge17of the face16. Though the embodiments herein are directed to a putter head, the novel features disclosed herein may be used in connection with other types of golf club heads, such as drivers, fairway woods, irons, and wedges. In order to attain an optimized design for the support members20, the relationship between curvature, rate of change of curvature, spline length, cross-sectional area, and cross-sectional shape of each structure must be examined. By controlling each of these geometric features, support members20can be created that are much improved over existing prior art support structures within golf club heads. The support members20of the present invention include networks of slender connected elements, and may also be referred to as rods, beams, or ligaments. Each support member20is either connected to another support member20or to the surface of another type of structure, such as a sole portion12or top or crown portion14of the putter head10. In the preferred embodiment shown inFIG.13-15, the support members connect the sole portion12to the crown portion14, but in an alternative embodiment, the support members may attach only to a single surface, such as the face16. Some support members20also have at least one connection to another support member20. At the connection to another support member20, the surfaces22of the support member20have a curvature that changes smoothly and continuously. There are no sharp corners and there are no simple fillets with constant surface curvature. As shown inFIG.9, for each support member20, the equivalent diameter DEis the diameter of a circle42with the same area A as the cross section44of the support member20. The cross section44is taken in the plane46normal to the spline40running through the center of the support member20along its length. The support member20cross section44has an area A, and the equivalent diameter DEis defined as DE=(4*A/pi){circumflex over ( )}(½). The length of the spline40is no less than three times the equivalent diameter DE. The equivalent diameter DEand the cross sectional shape44change continuously along the length of each spline40, but the equivalent diameter DEis always greater than 0.010″ and always less than 1.000″, more preferably 0.050″-0.500,″ and most preferably 0.050″-0.250″. As shown inFIGS.6-9, each spline40is curved, and as illustrated inFIGS.10-11, the curvature continuously changes along the length of the spline40, with specific ranges of curvature and rates of change of curvature. The entire network of support members20occupies a volume30that is no greater than 75% of the enveloping volume50. The enveloping volume50, which is illustrated inFIG.12, is the total volume that could be occupied by support members given the application. 
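The equivalent-diameter rule above reduces to a few arithmetic checks: DE = sqrt(4*A/pi) for the cross section taken normal to the spline, DE kept between 0.010 inch and 1.000 inch, and a spline length of at least three times DE. The following is a minimal sketch with a hypothetical cross-sectional area and spline length:

```python
import math

# Minimal sketch of the equivalent-diameter checks stated above: DE is the
# diameter of a circle with the same area as the member's cross section, DE
# stays between 0.010 and 1.000 inch, and the spline is at least 3*DE long.
# The sample cross-sectional area and spline length are made-up values.

def equivalent_diameter(cross_section_area_in2):
    """DE = sqrt(4*A/pi), with A taken in the plane normal to the member's spline."""
    return math.sqrt(4.0 * cross_section_area_in2 / math.pi)

def check_support_member(cross_section_area_in2, spline_length_in):
    de = equivalent_diameter(cross_section_area_in2)
    within_band = 0.010 < de < 1.000        # overall DE bounds, in inches
    long_enough = spline_length_in >= 3.0 * de
    return within_band and long_enough

area_in2 = 0.012
print(round(equivalent_diameter(area_in2), 4))                 # ~0.1236 in, inside the preferred 0.050-0.250 in band
print(check_support_member(area_in2, spline_length_in=0.50))   # True: 0.50 in >= 3 * 0.1236 in
```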
When compared with prior art structural members, the support members20disclosed herein (1) are less susceptible to stress concentrations during the use of the structural part or component, (2) allow for improved flow and reduced porosity in investment casting operations, (3) allow for improved flow and reduced porosity in plastic injection molding, metal injection molding, compression molding, (4) are less susceptible to local stress concentrations and cracking during sintering of metal injection molding or 3D printed parts, and (5) are less susceptible to local stress concentrations and cracking during the build process for laser-based 3D printing methods, like binder jetting. The support members20of the present invention also have a unique “organic” appearance that is not found in prior art structural golf club parts. Though the support members20disclosed herein may, in limited circumstances, be manufactured via investment casting, plastic injection molding, compression molding, forging, forming, and metal injection molding, they are preferably formed via 3D printing, and most preferably via binder jetting. A preferred binder jet process100is illustrated inFIGS.16and17, and includes a first step111of spreading layers of powder130evenly across the build plate122of a binder jet machine120; this step can be performed manually or with a re-coater or roller device125. This occurs in the build box121portion of the binder jet machine120, where a build plate122lowers as each layer of powder130is applied. In a second step112, a printer head124deposits liquid binder135on the appropriate regions for each layer of powder130, leaving unbound powder132within the build box121. In a third step113, the binder bonds adjacent powder particles together. In a fourth step114, the first and second steps111,112are repeated as many times as desired by the manufacturer to form a green (unfinished) part140with an intended geometry. In an optional fifth step115, a portion of the binder135is removed using a debinding process, which may be via a liquid bath or by heating the green part to melt or vaporize the binder. In a sixth step116, the green part140is sintered in a furnace, where, at the elevated temperature, the metal particles repack, diffuse, and flow into voids, causing a contraction of the overall part. As this sintering step116continues, adjacent particles eventually fuse together, forming a final part240, examples of which are shown inFIGS.39-44. This process causes 10-25% shrinkage of the part from the green state140to its final form240, and the final part has a void content that is less than 10% throughout. In some embodiments, the debinding and sintering steps115,116may be conducted in the same furnace. In an optional step117, before the binder jet process110begins, optimization software can be used to design a high performance club head or component in CAD. This step allows the manufacturer to use individual player measurements, club head delivery data, and impact location in combination with historical player data and machine learning, artificial intelligence, stochastic analysis, and/or gradient based optimization methods to create a superior club component or head design. Though binder jetting is a powder-based process for additive manufacturing, it differs in key respects from other directed energy powder based systems like DMLS, DMLM, and EBAM. 
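Because the sintering step shrinks the green part by roughly 10-25% on its way to the final geometry, the green part must be printed oversized. The sketch below treats the quoted range as a linear shrinkage and computes the corresponding scale factor; the 100 mm target dimension and the specific shrinkage values are illustrative assumptions:

```python
# Sketch of the oversize factor implied by the 10-25% sintering shrinkage noted
# above, treating the quoted range as a linear shrinkage.  The 100 mm target
# dimension is a made-up example.

def green_scale_factor(linear_shrinkage):
    """Scale factor applied to the final geometry to get the green (as-printed) geometry."""
    if not 0.0 < linear_shrinkage < 1.0:
        raise ValueError("shrinkage must be a fraction between 0 and 1")
    return 1.0 / (1.0 - linear_shrinkage)

target_final_length_mm = 100.0
for shrinkage in (0.10, 0.18, 0.25):
    green_length = target_final_length_mm * green_scale_factor(shrinkage)
    print(f"{shrinkage:.0%} shrinkage -> print the green part at {green_length:.1f} mm")
```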
The binder jet process110provides key efficiency and cost saving improvements over DMLM, DMLS, and EBAM that makes it uniquely suitable for use in golf club component manufacturing. For example, binder jetting is more energy efficient because it is not performed at extremely elevated temperatures and is a much less time consuming process, with speeds up to one hundred times faster than DMLS. The secondary debinding step115and sintering step116are batch processes which help keep overall cycle times low, and green parts140can be stacked in a binder jet machine120in three dimensions because the powder is generally self-supporting during the build process, obviating the requirement for supports or direct connections to a build plate. Therefore, because there is no need to remove beams, members, or ligaments because of length, aspect ratio, or overhang angle requirements, lattice structures can take any form and have a much wider range of geometries than are possible when provided by prior art printing methods. The binder jet process110also allows for printing with different powdered materials, including metals and non-metals like plastic. It works with standard metal powders common in the metal injection molding (MIM) industry, which has well-established and readily available powder supply chains in place, so the metal powder used in the binder jet process110is generally much less expensive than the powders used in the DMLS, DMLM, and EBAM directed energy modalities. The improved design freedom, lower cost and faster throughput of binder jet makes it suitable for individually customized club heads, prototypes, and larger scale mass-produced designs for the general public. Lattice Structures The binder jet process110described above allows for the creation of lattice structures, including those with beams that would otherwise violate the standard overhang angle limitation set by DMLM, DMLS, and EBAM. It can also be used to create triply periodic minimal surfaces (TPMS) and non-periodic or non-ordered collections of beams. Compressing or otherwise reducing the size of cells in a section of the lattice increases the effective density and stiffness in those regions. Conversely, expanding the size of the cells is an effective way to intentionally design in a reduction of effective density and stiffness. Effective density is defined as the density of a unit of volume in which a fully dense material may be combined with geometrically designed-in voids, which can be filled with air or another material, and/or with another or other fully dense materials. The unit volume can be defined using a geometrically functional space, such as the lattice cell shown inFIGS.37-38or a three dimensional shape fitted to a typical section, and in particular the volume of a sphere with a diameter that is three to five times the equivalent diameter of the nearest beam or collection of beams. The binder jet process allows for the creation of a structure with a uniform final material density of at least 90%, which contrasts with previous uses of DMLM, DMLS, and EBAM to change the actual material density by purposely creating unstructured porosity in parts. Examples of lattice structures160that can be created using the process10described above are shown inFIGS.18-36, and include warped, twisted, distorted, curved, and stretched lattices that can optimize the structure for any given application. 
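The effective-density definition above can be restated as a small calculation: the mass of fully dense material (plus any fill) contained in a unit volume, divided by that volume. Below is a sketch for a single lattice cell, with the cell size, beam volume fractions, and 7.8 g/cm^3 base density chosen only for illustration:

```python
# Sketch of the effective-density definition above for a single lattice cell.
# The 5 mm cell, beam volume fractions, and 7.8 g/cm^3 base density are
# illustrative assumptions (voids assumed to be filled with air).

def effective_density(solid_volume_cm3, cell_volume_cm3,
                      solid_density_g_cm3, void_fill_density_g_cm3=0.0):
    """Mass contained in the unit volume divided by the unit volume."""
    void_volume = cell_volume_cm3 - solid_volume_cm3
    mass = (solid_volume_cm3 * solid_density_g_cm3
            + void_volume * void_fill_density_g_cm3)
    return mass / cell_volume_cm3

cell_volume = 0.5 ** 3   # a 5 mm (0.5 cm) cubic cell = 0.125 cm^3
print(round(effective_density(0.20 * cell_volume, cell_volume, 7.8), 2))   # beams fill 20% -> 1.56 g/cm^3
# Compressing the cell so the beams fill 40% of the volume doubles the effective
# density (and stiffens that region), as described above.
print(round(effective_density(0.40 * cell_volume, cell_volume, 7.8), 2))   # -> 3.12 g/cm^3
```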
Individual lattice cells 170 are shown in FIGS. 37-38, and may be used in addition to or instead of more complex lattice structures 160. FIGS. 20, 21, 24-25, 27, 31, 35 and 36 illustrate what the more complicated structures look like when a 40° overhang limitation is applied: a significant portion of the structure is lost. Another benefit of not having an overhang angle limitation is that manufacturers can create less ordered or non-ordered collections of beams. The lattice structures 160 shown herein may have repeating cells 170 or cells with gradual and/or continuously changing size, aspect ratio, skew, and beam diameter. The change rate between adjacent cells 170 and beams 180 may be 10%, 25%, 50%, and up to 100%, and this change pattern may apply to all or only some of the volume occupied by the lattice structure. The type of cell 170 can change abruptly if different regions of a component need different effective material properties, but size, aspect ratio, skew, and beam diameter can then change continuously as distance from the cell type boundary increases. The diameter of the beams 180 may be constant or tapered, and while their cross sections are typically circular, they can also be elliptical. Such structures may take the form of a series of connected tetrahedral cells 170, as shown in FIGS. 29-30. The lack of an overhang constraint allows the beams 180 to be oriented in any fashion and therefore allows for the generation of a conformal lattice of virtually any size and shape. Modern meshing software also provides a quick and simple method by which to fill volumes and vary the lattice density via non-ordered tetrahedral cells. Tetrahedral cells 170 are also very useful for varying cell size and shape throughout a part.
Lattice Applications in Putter Heads
The binder jet process 110 permits manufacturers to take full advantage of generative design and topology optimization results in putter heads 200, as shown in FIGS. 39-44. The lattice structures 160 disclosed herein can be built into their respective golf club heads in one 3D printing step, or may be formed separately from the golf club head and then permanently affixed to the golf club head at a later time. These designs illustrate the kinds of improvements to golf club head center of gravity (CG), moment of inertia (MOI), stress, acoustics (e.g., modal frequencies), ball speed, launch angle, spin rates, forgiveness, and robustness that can be made when manufacturing constraints are removed via the use of optimization software and 3D printing. A preferred embodiment of the present invention is shown in FIGS. 39-41. The putter head 200 of this embodiment includes a body 210 with a face portion 212 and a face recess 213, a top portion 214, and a sole portion 216 with a sole recess 217, a face insert 220 disposed within the face recess 213, and sole weights 230, 235 and a sole insert or puck 240 affixed within the sole recess 217 so that the weights 230, 235 are disposed on heel and toe sides of the puck 240. The body 210 of the putter, and particularly the top portion 214, is formed of a metal alloy having a first density and has a body CG. The weights 230, 235 are preferably located as far as possible from the body CG and are composed of a metal alloy having a second density greater than the first density. While the hosel 218 of the embodiment shown in FIGS. 39-41 is formed integrally with the body 210, in other embodiments it may be formed separately from a different material and attached in a secondary step during manufacturing.
The puck240is printed using the binder jet process described above from at least one material with a third density that is lower than the first and second densities, and comprises one or more lattice structures260that fill the volume of the sole recess217, freeing up discretionary mass to be used in high-density weighting at other locations on the putter head200, preferably at the heel and toe edges and/or the rear edge215. The materials from which the puck240may be printed include plastic, nylon, polycarbonate, polyetherimide, polyetheretherketone, and polyetherketoneketone. These materials can be reinforced with fibers such as carbon, fiberglass, Kevlar®, boron, and/or ultra-high-molecular-weight polyethylene, which may be continuous or long relative to the size of the part or the putter, or very short. Other putter head200embodiments are shown inFIGS.42-44. In these embodiments, the weights230,235are threaded and are disposed at the rear edge215of the body, on either side and mostly behind the puck240. In the embodiments shown inFIGS.42and44, the pucks240have different lattice patterns160than the one shown inFIGS.39-41, and do not fill the entirety of the sole recess217. In the embodiment shown inFIG.43, the puck240has another lattice pattern160and fills the entirety of the sole recess217. In any of these embodiments, the puck240may be bonded and/or mechanically fixed to the body210. The materials, locations, and dimensions may be customized to suit particular players. In each of these embodiments, the weights230,235preferably are made of a higher density material than the body210, though in other embodiments, they may have an equivalent density or lower density. Moving weight away from the center improves the mass properties of the putter head200, increasing MOI and locating the CG at a point on the putter head200that reduces twist at impact, reduces offline misses, and improves ball speed robustness on mishits. From the foregoing it is believed that those skilled in the pertinent art will recognize the meritorious advancement of this invention and will readily understand that while the present invention has been described in association with a preferred embodiment thereof, and other embodiments illustrated in the accompanying drawings, numerous changes, modifications and substitutions of equivalents may be made therein without departing from the spirit and scope of this invention which is intended to be unlimited by the foregoing except as may appear in the following appended claims. Therefore, the embodiments of the invention in which an exclusive property or privilege is claimed are defined in the following appended claims.
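The discretionary-mass argument above can be made concrete with a rough calculation: printing the sole-recess volume as a low-effective-density lattice puck instead of solid body material frees mass that can be moved into the high-density perimeter weights. The recess volume and densities below are hypothetical values for illustration, not figures taken from the embodiments:

```python
# Rough sketch of the mass freed by the lattice puck described above.  The
# 15 cm^3 recess volume, 7.8 g/cm^3 body alloy, and 0.6 g/cm^3 effective puck
# density are hypothetical values for illustration.

def freed_mass_g(recess_volume_cm3, body_density_g_cm3, puck_effective_density_g_cm3):
    """Mass saved by filling the recess with a lattice puck instead of solid body material."""
    return recess_volume_cm3 * (body_density_g_cm3 - puck_effective_density_g_cm3)

saved = freed_mass_g(recess_volume_cm3=15.0,
                     body_density_g_cm3=7.8,
                     puck_effective_density_g_cm3=0.6)
print(f"{saved:.0f} g available for heel, toe, and rear weighting")   # ~108 g in this sketch
# Because MOI grows roughly with mass times the square of its distance from the CG,
# relocating that mass to the perimeter is what raises forgiveness.
```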
16,074
11857852
DETAILED DESCRIPTION Disclosed below are representative embodiments that are not intended to be limiting in any way. Instead, the present disclosure is directed toward novel and nonobvious features, aspects and equivalents of the embodiments of the golf club information system described below. The disclosed features and aspects of the embodiments can be used alone or in various novel and nonobvious combinations and sub-combinations with one another. Now with reference to an illustrative drawing, and particularlyFIG.1, there is shown a kit20having a driving tool, i.e., torque wrench22, and a set of weights24usable with a golf club head having conforming recesses, including, for example, weight assemblies30and weight screws23, and an instruction wheel26. In one particular embodiment, a golf club head28includes four recesses, e.g., weight ports96,98,102,104, disposed about the periphery of the club head (FIGS.2-5). In the illustrated embodiment ofFIGS.2-5, four weights24are provided; two weight assemblies30of about ten grams (g) and two weight screws32of about two grams (g). Varying placement of the weights within ports96,98,102, and104enables the golfer to vary launch conditions of a golf ball struck by the club head28, for optimum distance and accuracy. More specifically, the golfer can adjust the position of the club head's center of gravity (CG), for greater control over the characteristics of launch conditions and, therefore, the trajectory and shot shape of the struck golf ball. The instruction wheel26aids the golfer in selecting a proper weight configuration for achieving a desired effect to the trajectory and shape of the golf shot. In some embodiments, the kit20provides six different weight configurations for the club head28, which provides substantial flexibility in positioning CG of the club head28. Generally, the CG of a golf club head is the average location of the weight of the golf club head or the point at which the entire weight of the golf club head may be considered as concentrated so that if supported at this point the head would remain in equilibrium in any position. In the illustrated embodiment ofFIGS.15and16, the CG169of club head28can be adjustably located in an area adjacent to the sole having a length of about five millimeters measured from front-to-rear and width of about five millimeters measured from toe-to-heel. In another embodiment illustrated inFIGS.20-22, a golf club head220includes four recesses, e.g., weight ports222,228,230,232, disposed about the periphery of the club head220. In another embodiment illustrated inFIGS.23-25, a golf club head320includes four recesses, e.g., weight ports322,328,330,332, disposed about the periphery of the club head320. In the illustrated embodiments ofFIGS.20-25, twelve weights, such as the weights24that include weight assemblies and weight screws may be provided; three weight assemblies of about one gram, four weight assemblies of about five and a quarter grams, one weight assembly of about six and a half grams, two weight assemblies of about nine and a half grams, one weight assembly of about twelve and a half grams, and one weight assembly of about eighteen grams. Varying placement of the weights within the ports222,228,230,232enables the golfer to vary launch conditions of a golf ball struck by the club head220, to provide a selected distance, spin rate, trajectory, or other shot characteristic or shot shape. 
Likewise, varying placement of the weights within ports322,328,330,332enables the golfer to vary launch conditions of a golf ball struck by club head320. More specifically, the golfer can adjust the position of club head center of gravity (CG) vertically and horizontally for greater control of launch conditions and, therefore, the trajectory, spin-rate, or shot shape of the struck golf ball. In some embodiments, the golfer may adjust the launch angle while maintaining a relatively constant spin-rate. In other embodiments, the golfer may adjust the spin-rate while maintaining a relatively constant launch angle. In some embodiments, the kit20provides different weight configurations for the club head320, which provide additional flexibility in positioning the CG of the club head320. The CG of club head320can be adjustably located in a volume above the sole having a length of about seven millimeters measured from front-to-rear, a width of about five millimeters measured from toe-to-heel, and a height of about seven millimeters measured from crown-to-sole. The instruction wheel26shown inFIG.1can aid the golfer in selecting a proper weight configuration for the club head320for achieving a desired effect to the trajectory and shape of the golf shot. Each configuration can deliver different launch conditions, including ball launch angle, dynamic loft, spin-rate and the club head alignment at impact, as discussed in detail below. As shown inFIGS.2-5, the weights24can be sized to be securely received in any of the four ports96,98,102,104of club head28and are secured in place using the torque wrench22. The weights24can also be sized to be securely received in any of the four ports222,228,230,232of club head220and secured in place using the torque wrench22. In some embodiments, the weights24are sized to be securely received in any of the four ports322,328,330,332of club head320and secured in place using the torque wrench22. Each of the weight assemblies30(FIGS.10-12) includes a mass element34, a fastener, e.g., screw36, and a retaining element38. In an exemplary embodiment, the weight assemblies30are preassembled; however, component parts can be provided for assembly by the user. For weights having a total mass between about one gram and about two grams, weight screws32without a mass element can be used (FIG.9). The weight screws32can be formed of stainless steel, and the head120of each weight screw32preferably has a diameter sized to conform to the four ports322,328,330,332of the club head320, or alternatively to conform to the four ports222,228,230,232of the club head220. The kit20can be provided with a golf club at purchase, or sold separately. For example, a golf club can be sold with the torque wrench22, the instruction wheel26, and the weights24(e.g., two 10-gram weights30and two 2-gram weights32) preinstalled. Kits20having an even greater variety of weights can also be provided with the club, or sold separately. In another embodiment, a kit20having eight weight assemblies is contemplated, e.g., a 2-gram weight, four 6-gram weights, two 14-gram weights, and an 18-gram weight. Such a kit20may be particularly effective for golfers with a fairly consistent swing, by providing additional precision in weighting the club head28. In another embodiment, the kit20may have twelve weight assemblies, e.g., three 1-gram weights, one 6.5-gram weight, four 5.25-gram weights, two 9.5-gram weights, one 12.5-gram weight, and one 18-gram weight. 
Such a kit may be preferred for golfers who prefer to adjust, in a relatively independent manner, the spin-rate and launch angle of a golf ball struck by a golf club head 320. Such a kit may also provide three-dimensional adjustment of the center of gravity of the golf club head 320. In addition, weights in prescribed increments across a broad range can be available. For example, weights 24 in one-gram increments ranging from one gram to twenty-five grams can provide very precise weighting, which would be particularly advantageous for advanced and professional golfers. In such embodiments, weight assemblies 30 ranging between five grams and ten grams preferably use a mass element 34 comprising primarily a titanium alloy. Weight assemblies 30 ranging between ten grams and over twenty-five grams preferably use a mass element 34 comprising a tungsten-based alloy, or blended tungsten alloys. Other materials, or combinations thereof, can be used to achieve a desired weight mass. However, material selection should consider other requirements such as durability, size restraints, and removability.
Instruction Wheel
With reference now to FIG. 6, the instruction wheel 26 aids the golfer in selecting a club head weight configuration to achieve a desired effect on the motion path of a golf ball struck by the golf club head 28. The instruction wheel 26 provides a graphic, in the form of a motion path chart 39 on the face of instruction wheel 26, to aid in this selection. The motion path chart's y-axis corresponds to the height control of the ball's trajectory, generally ranging from low to high. The x-axis of the motion path chart corresponds to the directional control of the ball's shot shape, ranging from left to right. In an exemplary embodiment, the motion path chart 39 identifies six different weight configurations 40. Each configuration is plotted as a point on the motion path chart 39. Of course, other embodiments can include a different number of configurations, such as for kits having a different variety of weights. Also, other approaches for presenting instructions to the golfer can be used, for example, charts, tables, booklets, and so on. The six weight configurations of this exemplary embodiment are listed below in Table 1.

TABLE 1
                           Weight Distribution
Config. No.  Description   Fwd Toe   Rear Toe   Fwd Heel   Rear Heel
1            High          2 g       10 g       2 g        10 g
2            Low           10 g      2 g        10 g       2 g
3            More Left     2 g       2 g        10 g       10 g
4            Left          2 g       10 g       10 g       2 g
5            Right         10 g      2 g        2 g        10 g
6            More Right    10 g      10 g       2 g        2 g

Each weight configuration (i.e., 1 through 6) corresponds to a particular effect on launch conditions and, therefore, a struck golf ball's motion path. In the first configuration, the club head CG is in a center-back location, resulting in a high launch angle and a relatively low spin-rate for optimal distance. In the second configuration, the club head CG is in a center-front location, resulting in a lower launch angle and lower spin-rate for optimal control. In the third configuration, the club head CG is positioned to induce a draw bias. The draw bias is even more pronounced with the fourth configuration. Whereas, in the fifth and sixth configurations, the club head CG is positioned to induce a fade bias, which is more pronounced in the sixth configuration. In use, the golfer selects, from the various motion path chart descriptions, the desired effect on the ball's motion path. For example, if hitting into high wind, the golfer may choose a golf ball motion path with a low trajectory (e.g., the second configuration).
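For illustration only, Table 1 can also be restated as a lookup from configuration number to a port-by-port weight placement; this is simply the table above in another form, not software associated with the kit or instruction wheel:

```python
# Table 1 above restated as a lookup from configuration number to the weight
# (in grams) placed in each port.  This is only a restatement of the table for
# illustration, not software associated with the kit or instruction wheel.

TABLE_1 = {
    1: ("High",       {"fwd_toe": 2,  "rear_toe": 10, "fwd_heel": 2,  "rear_heel": 10}),
    2: ("Low",        {"fwd_toe": 10, "rear_toe": 2,  "fwd_heel": 10, "rear_heel": 2}),
    3: ("More Left",  {"fwd_toe": 2,  "rear_toe": 2,  "fwd_heel": 10, "rear_heel": 10}),
    4: ("Left",       {"fwd_toe": 2,  "rear_toe": 10, "fwd_heel": 10, "rear_heel": 2}),
    5: ("Right",      {"fwd_toe": 10, "rear_toe": 2,  "fwd_heel": 2,  "rear_heel": 10}),
    6: ("More Right", {"fwd_toe": 10, "rear_toe": 10, "fwd_heel": 2,  "rear_heel": 2}),
}

def weight_placement(config_no):
    description, ports = TABLE_1[config_no]
    placement = ", ".join(f"{port.replace('_', ' ')} {grams} g" for port, grams in ports.items())
    return f"Config {config_no} ({description}): {placement}"

# The low-trajectory choice mentioned above for hitting into a high wind:
print(weight_placement(2))
```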
Or, if the golfer has a tendency to hit the ball to the right of the intended target, the golfer may choose a weight configuration that encourages the ball's shot shape to the left (e.g., the third and fourth configurations). Once the configuration is selected, the golfer rotates the instruction wheel26until the desired configuration number is visible in the center window42. The golfer then reads the weight placement for each of the four locations through windows48,50,52,53, as shown in the graphical representation44of the club head28. The motion path description name is also conveniently shown along the outer edge55of the instruction wheel26. For example, inFIG.6, the instruction wheel26displays weight positioning for the “high” trajectory motion path configuration, i.e., the first configuration. In this configuration, two 10-gram weights are placed in the rear ports96,98and two 2-gram weights are placed in the forward ports102,104(FIG.2). If another configuration is selected, the instruction wheel26depicts the corresponding weight distribution, as provided in Table 1, above. In another embodiment, a kit similar to the kit20may provide an instruction wheel to aid the golfer in selecting a club head weight configuration to achieve a desired effect on the motion path of a golf ball struck by the golf club head320. Such an instruction wheel may identify eleven different weight configurations. Of course, other embodiments can include a different number of configurations, such as, for kits having a different variety of weights. Also, other approaches for presenting instructions to the golfer can be used, for example, charts, tables, booklets, and so on. The eleven weight configurations of an exemplary embodiment are listed below in Table 2A and weight ranges for additional examples are listed in Tables 2B-2C. 
TABLE 2A
Config. No.  Description      Back Low (g)   Back High (g)  Front Heel (g)  Front Toe (g)
1            High Neutral 1   18             1              1               1
2            High Neutral 2   9.5            9.5            1               1
3            High Neutral 3   1              18             1               1
4            High Draw        12.5           1              6.5             1
5            High Fade        12.5           1              1               6.5
6            Mid Neutral      5.25           5.25           5.25            5.25
7            Mid Draw         1              9.5            9.5             1
8            Mid Fade         9.5            1              1               9.5
9            Low Neutral      1              1              9.5             9.5
10           Low Draw         1              1              18              1
11           Low Fade         1              1              1               18

TABLE 2B
Config. No.  Description      Back Low (g)   Back High (g)  Front Heel (g)  Front Toe (g)
1            High Neutral 1   14.4 to 21.6   0.8 to 1.2     0.8 to 1.2      0.8 to 1.2
2            High Neutral 2   7.6 to 11.4    7.6 to 11.4    0.8 to 1.2      0.8 to 1.2
3            High Neutral 3   0.8 to 1.2     14.4 to 21.6   0.8 to 1.2      0.8 to 1.2
4            High Draw        10 to 15       0.8 to 1.2     5.2 to 7.8      0.8 to 1.2
5            High Fade        10 to 15       0.8 to 1.2     0.8 to 1.2      5.2 to 7.8
6            Mid Neutral      4.2 to 6.3     4.2 to 6.3     4.2 to 6.3      4.2 to 6.3
7            Mid Draw         0.8 to 1.2     7.6 to 11.4    7.6 to 11.4     0.8 to 1.2
8            Mid Fade         7.6 to 11.4    0.8 to 1.2     0.8 to 1.2      7.6 to 11.4
9            Low Neutral      0.8 to 1.2     0.8 to 1.2     7.6 to 11.4     7.6 to 11.4
10           Low Draw         0.8 to 1.2     0.8 to 1.2     14.4 to 21.6    0.8 to 1.2
11           Low Fade         0.8 to 1.2     0.8 to 1.2     0.8 to 1.2      14.4 to 21.6

TABLE 2C
Config. No.  Description      Back Low (g)   Back High (g)  Front Heel (g)  Front Toe (g)
1            High Neutral 1   16.2 to 19.8   0.9 to 1.1     0.9 to 1.1      0.9 to 1.1
2            High Neutral 2   8.5 to 10.5    8.5 to 10.5    0.9 to 1.1      0.9 to 1.1
3            High Neutral 3   0.9 to 1.1     16.2 to 19.8   0.9 to 1.1      0.9 to 1.1
4            High Draw        11.3 to 13.8   0.9 to 1.1     5.8 to 7.2      0.9 to 1.1
5            High Fade        11.3 to 13.8   0.9 to 1.1     0.9 to 1.1      5.8 to 7.2
6            Mid Neutral      4.7 to 5.8     4.7 to 5.8     4.7 to 5.8      4.7 to 5.8
7            Mid Draw         0.9 to 1.1     8.5 to 10.5    8.5 to 10.5     0.9 to 1.1
8            Mid Fade         8.5 to 10.5    0.9 to 1.1     0.9 to 1.1      8.5 to 10.5
9            Low Neutral      0.9 to 1.1     0.9 to 1.1     8.5 to 10.5     8.5 to 10.5
10           Low Draw         0.9 to 1.1     0.9 to 1.1     16.2 to 19.8    0.9 to 1.1
11           Low Fade         0.9 to 1.1     0.9 to 1.1     0.9 to 1.1      16.2 to 19.8

Each weight configuration (i.e., configurations 1 through 11) corresponds to a particular effect on launch conditions such as launch angle, spin-rate, and loft. Adjustments to these conditions tend to affect the shot-shape and the trajectory of the struck golf ball. In the first configuration, the club head CG is in a low-back location, resulting in a very high launch angle and low spin-rate. The launched ball tends to have a high trajectory when this configuration is chosen. In the second configuration, the club head CG is in a central-back location, resulting in a high launch angle, a moderate spin-rate, and high ball velocity. In the third configuration, the club head CG is in a high-back location, resulting in a low launch angle and a very high spin-rate. The launched ball tends to have a lower trajectory when this configuration is chosen. In the fourth configuration, the club head CG is in a low-back location and towards the heel to induce a strong draw bias with a very high launch angle and a low spin-rate. In the fifth configuration, the club head CG is in a low-back location and towards the toe to induce a strong fade bias with a very high launch angle and a low spin-rate. In the sixth configuration, the club head CG is positioned in a middle neutral position, resulting in a moderate to low launch angle, moderate spin, and high ball velocity. In the seventh configuration, the club head CG is positioned high-center and towards the heel. These launch conditions induce a moderate draw bias with high spin. In the eighth configuration, the club head CG is positioned low-center and towards the toe. These launch conditions induce a moderate fade bias with high launch angle. In the ninth configuration, the club head CG is positioned in a low-front location, resulting in a moderate launch angle and a moderate to low spin-rate.
In the tenth configuration, the club head CG is in a low-front location to induce a draw bias, resulting in a moderate launch angle and a moderate spin-rate. In the eleventh configuration, the club head CG is in a low-front location to induce a fade bias, resulting in a moderate launch angle and moderate spin-rate. In use, the golfer selects, from the various motion path descriptions, a desired effect on the ball's motion path. For example, if hitting into high wind, the golfer may choose a golf ball motion path with a lower trajectory and a lower spin-rate, (e.g., the ninth configuration). Or, if the golfer has a tendency to hit the ball to the right of the intended target, the golfer may choose a weight configuration that encourages the ball's shot shape to the left (e.g., the fourth, seventh, or tenth configurations). Once the configuration is selected, the golfer determines the weight configurations in a similar manner as with instruction wheel26. If, for example, the fourth configuration of Table 2A is chosen for the exemplary golf club head320shown inFIGS.23-25, a 12.5-gram weight is placed in the rear-low port330, a 6.5-gram weight is placed in the front-heel port328, a 1-gram weight is placed in the rear-high port322, and a 1-gram weight is placed in the front-toe port332. If another configuration is selected, the instruction wheel depicts the corresponding weight distribution as provided in Tables 2A-2C above. The weight distributions described in Tables 2A-2C allow the golfer to adjust both launch angle and spin. Under some circumstances, the golfer may be able to adjust the launch angle and the spin relatively independently of each other to achieve optimal launch conditions. For example, a golfer may configure a golf club head320according to the sixth configuration in Table 2A. The golfer may then determine that the golf ball trajectory would improve if the spin-rate could be increased while the launch angle remained relatively constant. Such an outcome may result if the golfer then adjusted the weights in the golf club head320according to the third configuration. Torque Wrench With reference now toFIGS.7-8, the torque wrench22includes a grip54, a shank56, and a torque-limiting mechanism (not shown). The grip54and shank56generally form a T-shape; however, other configurations of wrenches can be used. The torque-limiting mechanism is disposed between the grip54and the shank56, in an intermediate region58, and is configured to prevent over-tightening of the weights24into weight ports such as ports96,98,102,104or such as ports222,228,230,232. In use, once the torque limit is met, the torque-limiting mechanism of the exemplary embodiment will cause the grip54to rotationally disengage from the shank56. In this manner, the torque wrench22inhibits excessive torque on the weight24being tightened. Preferably, the wrench22is limited to between about twenty inch-lbs and forty inch-lbs of torque. More preferably, the limit is between twenty-seven inch-lbs and thirty-three inch-lbs of torque. In an exemplary embodiment, the wrench22is limited to about thirty inch-lbs of torque. Of course, wrenches having various other types of torque-limiting mechanisms, or even without such mechanisms, can be used. However, if a torque-limiting mechanism is not used, care should be taken not to over-tighten the weights24. The shank56terminates in an engagement end, i.e., tip60, configured to operatively mate with the weight screws32and the weight assembly screws36(FIGS.9-11). 
The tip60includes a bottom wall62and a circumferential side wall64. As shown inFIGS.10and11, the head of each of the weight screws32and weight assembly screws36define a socket124and66, respectively, having a complementary shape to mate with the tip60. The side wall64of the tip60defines a plurality of lobes68and flutes70spaced about the circumference of the tip. The multi-lobular mating of the wrench22and the sockets66and124ensures smooth application of torque and minimizes damage to either device (e.g., stripping of tip60or sockets66,124). The bottom wall62of the tip66defines an axial recess72configured to receive a post74disposed in sockets66and124. The recess72is cylindrical and is centered about a longitudinal axis of the shank56. With reference now toFIG.8, the lobes68and flutes70are spaced equidistant about the tip60, in an alternating pattern of six lobes and six flutes. Thus, adjacent lobes68are spaced about 60 degrees from each other about the circumference of the tip60. In the exemplary embodiment, the tip60has an outer diameter (dlobes), defined by the crests of the lobes68, of about 4.50 mm, and trough diameter (dflutes) defined by the troughs of the flutes70, of about 3.30 mm. The axial recess has a diameter (drecess) of about 1.10 mm. Each socket66,124is formed in an alternating pattern of six lobes90that complement the six flutes70of the wrench tip60. Weights Generally, as shown inFIGS.1and9-12, weights24, including weight assemblies30and weight screws32, are non-destructively movable about or within a golf club head. In specific embodiments, the weights24can be attached to the club head, removed, and reattached to the club head without degrading or destroying the weights or the golf club head. In other embodiments, the weights24are accessible from an exterior of the golf club head. With reference now toFIG.9, each weight screw32has a head120and a body122with a threaded portion128. The weight screws32are preferably formed of titanium or stainless steel, providing a weight with a low mass that can withstand forces endured upon impacting a golf ball with the club head. In the exemplary embodiment, the weight screw32has an overall length (Lo) of about 18.3 mm and a mass of about two grams. In other embodiments, the length and composition of the weight screw32can be varied to satisfy particular durability and mass requirements. The weight screw head120is sized to enclose one of the corresponding weight ports96,98,102,104(FIG.2) of the club head28, such that the periphery of the weight screw head120generally abuts the side wall of the port. This helps prevent debris from entering the corresponding port. Alternatively, the weight screw head120can be sized to enclose one of the corresponding weight ports222,228,230,232of the club head220. Preferably, the weight screw head120has a diameter ranging between about 11 mm and about 13 mm, corresponding to weight port diameters of various exemplary embodiments. In this embodiment, the weight screw head120has a diameter of about 12.3 mm. The weight screw head120defines a socket124having a multi-lobular configuration sized to operatively mate with the wrench tip60. The body122of the weight screw32includes an annular ledge126located in an intermediate region thereof. The ledge126has a diameter (dledge) greater than that of the threaded openings110defined in the ports96,98,102,104of the club head28(FIG.2), thereby serving as a stop when the weight screw32is tightened. 
In the embodiment, the annular ledge126is a distance (La) of about 11.5 mm from the weight screw head120and has a diameter (da) of about 6 mm. The weight screw body122further includes a threaded portion128located below the annular ledge126. In this embodiment, M5×0.6 threads are used. The threaded portion128of the weight screw body122has a diameter (dt) of about 5 mm and is configured to mate with the threaded openings110defined in the ports96,98,102,104of the club head28. Alternatively, the threaded portion128of the weight screw body122is configured to mate with the threaded openings236defined in the ports222,228,230,232of the club head220. With reference now toFIGS.10-12, each mass element34of the weight assemblies30defines a bore78sized to freely receive the weight assembly screw36. As shown inFIG.12, the bore78includes a lower non-threaded portion and an upper threaded portion. The lower portion is sufficiently sized to freely receive a weight assembly screw body80, while not allowing the weight assembly screw head82to pass. The upper portion of the bore78is sufficiently sized to allow the weight assembly screw head82to rest therein. More particularly, the weight assembly screw head82rests upon a shoulder84formed in the bore78of the mass element34. Also, the upper portion of the bore78has internal threads86for securing the retaining element38. In constructing the weight assembly30, the weight assembly screw36is inserted into the bore78of the mass element34such that the lower end of the weight assembly screw body80extends out the lower portion of the bore78and the weight assembly screw head82rests within the upper portion of the bore78. The retaining element38is then threaded into the upper portion of the bore78, thereby capturing the weight assembly screw36in place. A thread locking compound can be used to secure the retaining element38to the mass element34. The retaining element38defines an axial opening88, exposing the socket66of the weight assembly screw head82and facilitating engagement of the wrench tip60in the socket66of the weight assembly screw36. As mentioned above, the side wall of the socket66defines six lobes90that conform to the flutes70(FIG.8) of the wrench tip60. The cylindrical post74of the socket66is centered about a longitudinal axis of the screw36. The post74is received in the axial recess72(FIG.8) of the wrench22. The post74facilitates proper mating of the wrench22and the weight assembly screw36, as well as inhibiting use of non-compliant tools, such as Phillips screwdrivers, Allen wrenches, and so on. Club Head As illustrated inFIGS.2-5andFIGS.20-25, the golf club heads28,220,320include bodies92,292,392, respectively. The body can include a crown141, sole143, skirt145and face plate148defining an interior cavity150. The body further includes a heel portion151, toe portion153and rear portion155. The crown141is defined as an upper portion of the golf club head above a peripheral outline of the head including the top of the face plate148. The sole143includes a lower portion of the golf club head extending upwards from a lowest point of the club head when the club head is ideally positioned, i.e., at a proper address position. For a typical driver, the sole143extends upwards approximately 15 mm above the lowest point when the club head is ideally positioned. For a typical fairway wood, the sole143extends upwards approximately 10 mm to about 12 mm above the lowest point when the club head is ideally positioned. 
A golf club head, such as the club head28, can be ideally positioned when angle163measured between a plane tangent to an ideal impact location on the face plate and a perfectly vertical plane relative to the ground is approximately equal to the golf club head loft and when the golf club head lie angle is approximately equal to an angle between a longitudinal axis of the hosel or shaft and the ground161. The ideal impact location is disposed at the geometric center of the face plate. The sole143can also include a localized zone189proximate the face plate148having a thickness between about 1 mm and about 3 mm, and extending rearwardly away from the face plate a distance greater than about 5 mm. The skirt145is defined as a side portion of the golf club head between the crown and the sole that extends across a periphery of the golf club head, excluding the face plate, from the toe portion153, around the rear portion155, to the heel portion151. The crown141, sole143and skirt145can be integrally formed using techniques such as molding, cold forming, casting, and/or forging and the face plate148can be attached to the crown, sole and skirt by means known in the art. Furthermore, the body92can be made from various metals (e.g., titanium alloys, aluminum alloys, steel alloys, magnesium alloys, or combinations thereof), composite material, ceramic material, or combinations thereof. The face plate148is positioned generally at a front portion of the golf club head. The golf club head of the present application can include one or more weight ports. For example, according to some embodiments, and as shown inFIGS.2-5, the golf club head28can include the four weight ports96,98,102and104formed in the club head. In other embodiments, a golf club head can include less or more than four weight ports. For example, in some embodiments, as shown inFIG.13, golf club head130can have three weight ports131. In still other embodiments, as shown inFIG.14, golf club head136can have two weight ports137. In other embodiments, and as shown inFIGS.20-22, the golf club head220can include the four weight ports222,228,230,232formed in the club head. In still other embodiments, as shown inFIGS.23-25, the golf club head320can include the four weight ports322,328,330,332formed in the club head. Weight ports can be generally described as a structure coupled to the golf club head crown, golf club head skirt, golf club head sole or any combination thereof that defines a recess, cavity or hole on, about or within the golf club head. Exemplary of weight ports of the present application, weight ports96,98,102, and104ofFIGS.2-5include a weight cavity116and a port bottom108. The ports have a weight port radial axis167defined as a longitudinal axis passing through a volumetric centroid, i.e., the center of mass or center of gravity, of the weight port. The port bottom108defines a threaded opening110for attachment of the weights24. The threaded opening110is configured to receive and secure the threaded body80of the weight assembly30and threaded body122of the weight screw32. In this embodiment, the threaded bodies80and122of the weight assembly30and weight screw32, respectively, have M5×0.6 threads. The threaded opening110may be further defined by a boss112extending either inward or outward relative to the weight cavity116. Preferably, the boss112has a length at least half the length of the body80of the screw36and, more preferably, the boss has a length 1.5 times a diameter of the body of the screw. 
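As a rough illustration of the boss length guideline just described, the short sketch below checks an assumed boss length against both thresholds. The screw body length, screw body diameter and boss length used here are hypothetical example values, not dimensions taken from any described embodiment.

# Illustrative check of the boss length guideline (all values assumed for illustration).
screw_body_length_mm = 12.0    # hypothetical overall length of the screw body
screw_body_diameter_mm = 5.0   # hypothetical body diameter (an M5-class thread, for example)
boss_length_mm = 7.5           # hypothetical boss length to be checked

meets_minimum = boss_length_mm >= 0.5 * screw_body_length_mm       # at least half the body length
meets_preferred = boss_length_mm >= 1.5 * screw_body_diameter_mm   # 1.5 times the body diameter

print(meets_minimum, meets_preferred)   # True True for these example values

In general, a longer boss provides more thread engagement for the screw body, which is consistent with the preferences stated above.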
As depicted inFIG.5, the boss112extends outward, relative to the weight cavity116and includes internal threads (not shown). Alternatively, the threaded opening110may be formed without a boss. As depicted inFIG.5, the weight ports can include fins or ribs114having portions disposed about the ports96,98,102and104, and portions formed in the body to provide support within the club head and reduce stresses on the golf club head walls during impact with a golf ball. In the embodiment shown inFIGS.2-5, the weights24are accessible from the exterior of the club head28and securely received into the ports96,98,102, and104. The weight assemblies30preferably stay in place via a press fit while the weight screws32are generally threadably secured. Weights24are configured to withstand forces at impact, while also being easy to remove. In another embodiment, the weight ports222,230,228ofFIGS.20-22include weight cavities242,243,244and port bottoms264,265,266, respectively. (The weight port232is similarly configured.) The ports have weight port radial axes254,255,256. The port bottoms264,265,266define respective threaded openings236for attachment of weight assemblies224. The threaded openings236are configured to receive and secure assembly screw bodies280of the weight assemblies224or threaded bodies of weight screws, or other weights. In this embodiment, the threaded bodies280have M5×0.8 threads. The threaded openings236may be further defined by bosses238extending either inward or outward relative to the weight cavities242,243,244. Preferably, the bosses238have a length at least half the length of the assembly screw body280and, more preferably, the bosses have a length 1.5 times a diameter of the body of the screw. As depicted inFIG.22, the bosses238extend outward, relative to the weight cavities242,243,244and include internal threads. Alternatively, the threaded openings236may be formed without a boss. As depicted inFIG.22, the weight ports can include fins or ribs240having portions disposed about the ports222,228,230,232, and portions formed in the body to provide support within the club head and reduce stresses on the golf club head walls during impact with a golf ball. In the embodiment shown inFIGS.20-22, the weight assemblies224are accessible from the exterior of the club head220and securely received into the ports222,228,230,232. The weight assemblies224are generally threadably secured into the ports222,228,230,232. In other examples, the weight assemblies224may be retained via a press fit. Weight assemblies224are configured to withstand forces at impact, while also being easy to remove. In some embodiments, four or more weights may be provided as desired. Yet in other embodiments, a golf club head can have fewer than four weights. For example, as shown inFIG.13, golf club head130can have three weights132positioned around the golf club head130and, as shown inFIG.14, golf club head136can have two weights138positioned around the golf club head136. In some embodiments, each weight132and weight138can be a weight assembly or weight screw, such as the weight assembly30or weight screw32. To attach a weight assembly, such as weight assembly30, in a port of a golf club head, such as the golf club head28, the threaded body80of the screw36is positioned against the threaded opening110of the port. With the tip60of the wrench22inserted through the aperture88of the retaining element38and engaged in the socket66of the screw36, the user rotates the wrench to screw the weight assembly in place.
Pressure from the engagement of the screw36provides a press fit of the mass element34to the port, as sides of the mass element slide tightly against a wall of the weight cavity116. The torque limiting mechanism of the wrench prevents over-tightening of the weight assembly30. Weight assemblies30are also configured for easy removal, if desired. To remove, the user mates the wrench22with the weight assembly30and unscrews it from a club head. As the user turns the wrench22, the head82of the screw36applies an outward force on the retaining element38and thus helps pull out the mass element34. Low-friction material can be provided on surfaces of the retaining element38and the mass element34to facilitate free rotation of the head82of the weight assembly screw36with respect to the retaining element38and the mass element34. Similarly, a weight screw, such as weight screws32, can be attached to the body through a port by positioning the threaded portion of weight32against the threaded opening110of the port. The tip of the wrench can be used to engage the socket of the weight by rotating the wrench to screw the weight in place. Attachment and removal of weights assemblies and weight screws is performed in a similar manner for other golf club head embodiments with one or more weight ports, such as the golf club head220and the golf club head320. A. Mass Characteristics A golf club head of the present application has a head mass defined as the combined masses of the body, weight ports and weights. The body mass typically includes the combined masses of the crown, sole, skirt and face plate, or equivalently, the head mass minus the total weight port mass and the total weight mass. The total weight mass is the combined masses of the weight or weights installed on a golf club head. The total weight port mass is the combined masses of the weight ports and any weight port supporting structures, such as fins114shown inFIG.5. In several embodiments, one weight port, including any weight port supporting structures, can have a mass between about 1 gram and about 12 grams. A golf club head having two weight ports may have a total weight port mass between about 2 grams and about 24 grams; a golf club head having three weight ports may have a total weight port mass between about 3 grams and about 36 grams; and a golf club head having four weight ports may have a total weight port mass between about 4 grams and about 48 grams. In several embodiments of the golf club head, the sum of the body mass and the total weight port mass is between about 80 grams and about 222 grams. In more specific embodiments, the sum of the body mass and the total weight port mass is between about 80 grams and about 210 grams. In other embodiments, the sum of the body mass and the total weight port mass is less than about 205 grams or less than about 215 grams. In some embodiments of the golf club head with two weight ports and two weights, the sum of the body mass and the total weight port mass can be between about 180 grams and about 222 grams. More specifically, in certain embodiments the sum of the body mass and the total weight port mass is between about 180 grams and about 215 grams or between about 198 grams and about 222 grams. In specific embodiments of the golf club head28,130with three weight ports132and three weights131or four weight ports96,98,102,104and four weights24, the sum of the body mass and the total weight port mass is between about 191 grams and about 211 grams. 
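The mass bookkeeping described in this subsection can be illustrated with a short sketch. All of the numerical values below are assumed for illustration only and are not measurements of any particular embodiment.

# Illustrative mass bookkeeping for a head with four weight ports (assumed values).
body_mass_g = 180.0                             # hypothetical body mass (crown, sole, skirt, face plate)
weight_port_masses_g = [5.0, 5.0, 5.0, 5.0]     # hypothetical mass of each weight port and its supports
weight_masses_g = [2.0, 2.0, 12.0, 12.0]        # hypothetical masses of the four installed weights

total_weight_port_mass_g = sum(weight_port_masses_g)
total_weight_mass_g = sum(weight_masses_g)
head_mass_g = body_mass_g + total_weight_port_mass_g + total_weight_mass_g

# The sum of the body mass and the total weight port mass is the quantity used in
# several of the ranges recited above.
body_plus_ports_g = body_mass_g + total_weight_port_mass_g
print(head_mass_g, body_plus_ports_g)   # 228.0 200.0 for these example values

For these example numbers the sum of the body mass and the total weight port mass (200 grams) falls within the 191 gram to 211 gram range recited above for the three- and four-weight embodiments, but the inputs are illustrative only.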
In the embodiments ofFIGS.20-25, the sum of the body mass and the total weight port mass is similar. Each weight has a weight mass. In several embodiments, each weight mass can be between about 1 gram and about 100 grams. In specific embodiments, a weight mass can be between about 5 grams and about 100 grams or between about 5 grams and about 50 grams. In other specific embodiments, a weight mass can be between about 1 gram and about 3 grams, between about 1 gram and about 18 grams or between about 6 grams and about 18 grams. In some embodiments, the total weight mass can be between about 5 grams and about 100 grams. In more specific embodiments, the total weight mass can be between about 5 grams and about 100 grams or between about 50 grams and about 100 grams. B. Volume Characteristics The golf club head of the present application has a volume equal to the volumetric displacement of the club head body. In other words, for a golf club head with one or more weight ports within the head, it is assumed that the weight ports are either not present or are “covered” by regular, imaginary surfaces, such that the club head volume is not affected by the presence or absence of ports. In several embodiments, a golf club head of the present application can be configured to have a head volume between about 110 cm3and about 600 cm3. In more particular embodiments, the head volume is between about 250 cm3and about 500 cm3. In yet more specific embodiments, the head volume is between about 300 cm3and about 500 cm3, between 300 cm3and about 360 cm3, between about 360 cm3and about 420 cm3or between about 420 cm3and about 500 cm3. In embodiments having a specific golf club head weight and weight port configuration, or thin-walled construction as described in more detail below, the golf club can have approximate head volumes as shown in Table 3 below.

TABLE 3
Approximate Head Volumes (cm3)
One Weight/Two Weight Ports:         180-600; 385-600
Two Weights/Two Weight Ports:        110-210; 180-600
Three Weights/Three Weight Ports:    360-460; 250-600
Four Weights/Four Weight Ports:      360-460; 400-500
Thin Sole Construction:              ≤500; 440-460
Thin Skirt Construction:             ≥205; 385-600

The weight port volume is measured as the volume of the cavity formed by the port where the port is “covered” by a regular, imaginary surface as described above with respect to club head volume. According to several embodiments, a golf club head of the present invention has a weight port with a weight port volume between about 0.9 cm3and about 15 cm3. The total weight port volume is measured as the combined volumes of the weight ports formed in a golf club head. According to some embodiments of a golf club head of the present application, a ratio of the total weight port volume to the head volume is between about 0.001 and about 0.05, between about 0.001 and about 0.007, between about 0.007 and about 0.013, between about 0.013 and about 0.020 or between about 0.020 and about 0.05. C. Moments of Inertia Golf club head moments of inertia are typically defined about axes extending through the golf club head CG. As used herein, the golf club head CG location can be provided with reference to its position on a golf club head origin coordinate system. According to several embodiments, one of which is illustrated inFIGS.16and17, a golf club head origin170is represented on golf club head28. The golf club head origin170is positioned on the face plate148at approximately the geometric center, i.e., the intersection of the midpoints of a face plate's height and width.
For example, as shown inFIG.17, the head origin170is positioned at the intersection of the midpoints of the face plate height178and width180. As shown inFIGS.16and17, the head origin coordinate system, with head origin170, includes an x-axis172and a y-axis174(extending into the page inFIG.17). The origin x-axis172extends tangential to the face plate and generally parallel to the ground when the head is ideally positioned with the positive x-axis extending from the origin170towards a heel152of the golf club head28and the negative x-axis extending from the origin to the toe of the golf club head. The origin y-axis174extends generally perpendicular to the origin x-axis and parallel to the ground when the head is ideally positioned with the positive y-axis extending from the origin170towards the rear portion155of the golf club. The head origin can also include an origin z-axis176extending perpendicular to the origin x-axis and the origin y-axis and having a positive z-axis that extends from the origin170towards the top portion of the golf club head28and a negative z-axis that extends from the origin towards the bottom portion of the golf club head. A moment of inertia about a golf club head CG x-axis201(seeFIGS.15and16), i.e., an axis extending through the golf club head CG169and parallel to the head origin x-axis172, is calculated by the following equation

I_{CGx} = \int (y^{2} + z^{2}) \, dm \qquad (1)

where y is the distance from a golf club head CG xz-plane to an infinitesimal mass dm and z is the distance from a golf club head CG xy-plane to the infinitesimal mass dm. The golf club head CG xz-plane is a plane defined by the golf club head CG x-axis201and a golf club head CG z-axis203(seeFIG.15), i.e., an axis extending through the golf club head CG169and parallel to the head origin z-axis176as shown inFIG.17. The CG xy-plane is a plane defined by the CG x-axis201and a golf club head CG y-axis (not shown), i.e., an axis extending through the golf club head CG and parallel to the head origin y-axis. Similarly, a moment of inertia about the golf club head CG z-axis203is calculated by the following equation

I_{CGz} = \int (x^{2} + y^{2}) \, dm \qquad (2)

where x is the distance from a golf club head CG yz-plane to an infinitesimal mass dm and y is the distance from the golf club head CG xz-plane to the infinitesimal mass dm. The golf club head CG yz-plane is a plane defined by the golf club head CG y-axis and the golf club head CG z-axis203. As used herein, the calculated values for the moments of inertia about the golf club head CG x-axis201and z-axis203are based on a golf club head with a body, at least one weight port coupled to the body and at least one installed weight. 1. Moments of Inertia about CG X-Axis In several embodiments, the golf club head of the present invention can have a moment of inertia about the golf club head CG x-axis201between about 70 kg·mm2and about 400 kg·mm2. More specifically, certain embodiments have a moment of inertia about the head CG x-axis201between about 140 kg·mm2and about 225 kg·mm2, between about 225 kg·mm2and about 310 kg·mm2or between about 310 kg·mm2and about 400 kg·mm2. In other examples, embodiments have a moment of inertia about a head CG x-axis of between about 400 kg·mm2and about 430 kg·mm2. In certain embodiments with two weight ports and two weights, the moment of inertia about the head CG x-axis201is between about 70 kg·mm2and about 430 kg·mm2.
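Equations (1) and (2) can be evaluated numerically by treating the club head as a collection of small mass elements. The sketch below does this for four assumed point masses expressed in the head CG coordinate system; a real head integrates over every element of the body, weight ports and weights, so these values are hypothetical and serve only to show how the sums are formed and how the units (kg·mm2) arise.

# Numerical evaluation of equations (1) and (2) for a few assumed point masses.
# Each entry is (mass in kg, x in mm, y in mm, z in mm) in the head CG coordinate system.
elements = [
    (0.050, -40.0, 20.0, -5.0),
    (0.050, 40.0, 20.0, -5.0),
    (0.050, 0.0, 45.0, 10.0),
    (0.050, 0.0, -60.0, 0.0),
]

I_cg_x = sum(m * (y**2 + z**2) for m, x, y, z in elements)   # equation (1), kg·mm2
I_cg_z = sum(m * (x**2 + y**2) for m, x, y, z in elements)   # equation (2), kg·mm2

print(round(I_cg_x, 1), round(I_cg_z, 1))   # 328.8 481.2 for these example elements

For these example inputs the results happen to fall inside the moment of inertia ranges recited in this section, but they are illustrative values rather than properties of any described embodiment.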
In specific embodiments with two weight ports and one weight, the moment of inertia about the head CG x-axis201is between about 140 kg·mm2and about 430 kg·mm2. Even more specifically, certain other embodiments have a moment of inertia about the head CG x-axis201between about 70 kg·mm2and about 140 kg·mm2, between about 140 kg·mm2and about 430 kg·mm2, between about 220 kg·mm2and about 280 kg·mm2, or between about 220 kg·mm2and about 360 kg·mm2. In specific embodiments with three weight ports and three weights or four weight ports and four weights, the moment of inertia about the head CG x-axis201is between about 180 kg·mm2and about 280 kg·mm2. In some embodiments of a golf club head of the present application having a thin wall sole or skirt, as described below, a moment of inertia about the golf club head CG x-axis201can be greater than about 150 kg·mm2. More specifically, the moment of inertia about the head CG x-axis201can be between about 150 kg·mm2and about 180 kg·mm2, between about 180 kg·mm2and about 200 kg·mm2or greater than about 200 kg·mm2. A golf club head of the present invention can be configured to have a first constraint defined as the moment of inertia about the golf club head CG x-axis201divided by the sum of the body mass and the total weight port mass. According to some embodiments, the first constraint is between about 800 mm2and about 4,000 mm2. In specific embodiments, the first constraint is between about 800 mm2and about 1,100 mm2, between about 1,100 mm2and about 1,600 mm2or between about 1,600 mm2and about 4,000 mm2. A golf club head of the present application can be configured to have a second constraint defined as the moment of inertia about the golf club head CG x-axis201multiplied by the total weight mass. According to some embodiments, the second constraint is between about 1.4 g2·mm2and about 40 g2·mm2. In certain embodiments, the second constraint is between about 1.4 g2·mm2and about 2.0 g2·mm2, between about 2.0 g2·mm2and about 10 g2·mm2or between about 10 g2·mm2and about 40 g2·mm2. 2. Moments of Inertia about CG Z-Axis In several embodiments, the golf club head of the present invention can have a moment of inertia about the golf club head CG z-axis203between about 200 kg·mm2and about 600 kg·mm2. More specifically, certain embodiments have a moment of inertia about the head CG z-axis203between about 250 kg·mm2and about 370 kg·mm2, between about 370 kg·mm2and about 480 kg·mm2or between about 480 kg·mm2and about 600 kg·mm2. In specific embodiments with two weight ports and one weight, the moment of inertia about the head CG z-axis203is between about 250 kg·mm2and about 600 kg·mm2. In specific embodiments with two weight ports and two weights, the moment of inertia about the head CG z-axis203is between about 200 kg·mm2and about 600 kg·mm2. Even more specifically, certain embodiments have a moment of inertia about the head CG z-axis203between about 200 kg·mm2and about 350 kg·mm2, between about 250 kg·mm2and 600 kg·mm2, between about 360 kg·mm2and about 450 kg·mm2or between about 360 kg·mm2and about 500 kg·mm2. In specific embodiments with three weight ports and three weights or four weight ports and four weights, the moment of inertia about the head CG z-axis203is between about 300 kg·mm2and about 450 kg·mm2. In some embodiments with a thin wall sole or skirt, a moment of inertia about a golf club head CG z-axis203can be greater than about 250 kg·mm2. 
More specifically, the moment of inertia about head CG z-axis203can be between about 250 kg·mm2and about 300 kg·mm2, between about 300 kg·mm2and about 350 kg·mm2, between about 350 kg·mm2and about 400 kg·mm2or greater than about 400 kg·mm2. A golf club head can be configured to have a third constraint defined as the moment of inertia about the golf club head CG z-axis203divided by the sum of the body mass and the total weight port mass. According to some embodiments, the third constraint is between about 1,500 mm2and about 6,000 mm2. In certain embodiments, the third constraint is between about 1,500 mm2and about 2,000 mm2, between about 2,000 mm2and about 3,000 mm2or between about 3,000 mm2and about 6,000 mm2. A golf club head can be configured to have a fourth constraint defined as the moment of inertia about the golf club head CG z-axis203multiplied by the total weight mass. According to some embodiments, the fourth constraint is between about 2.5 g2·mm2and about 72 g2·mm2. In certain embodiments, the fourth constraint is between about 2.5 g2·mm2and about 3.6 g2·mm2, between about 3.6 g2·mm2and about 18 g2·mm2or between about 18 g2·mm2and about 72 g2·mm2. D. Positioning of Weight Ports and Weights In some embodiments of the present application, the location, position or orientation of features of a golf club head, such as golf club head28, can be referenced in relation to fixed reference points, e.g., a golf club head origin, other feature locations or feature angular orientations. The location or position of a weight, such as weight24, is typically defined with respect to the location or position of the weight's center of gravity. Similarly, the location or position of a weight port is defined as the location or position of the weight port's volumetric centroid (i.e., the centroid of the cavity formed by a port where the port is “covered” by regular, imaginary surfaces as previously described with respect to club head volume and weight port volume). When a weight or weight port is used as a reference point from which a distance, i.e., a vectorial distance (defined as the length of a straight line extending from a reference or feature point to another reference or feature point) to another weight or weights port is determined, the reference point is typically the center of gravity of the weight or the volumetric centroid of the weight port. 1. Weight Coordinates The location of a weight on a golf club head can be approximated by its coordinates on the head origin coordinate system as described above in connection withFIGS.16and17. For example, in some embodiments, weights24can have origin x-axis172coordinates, origin y-axis174coordinates, and origin z-axis176coordinates on the coordinate system associated with golf club head origin170. In some embodiments of golf club head28having one weight24, the weight can have an origin x-axis coordinate between about −60 mm and about 60 mm. In specific embodiments, the weight can have an origin x-axis coordinate between about −20 mm and about 20 mm, between about −40 mm and about 20 mm, between about 20 mm and about 40 mm, between about −60 and about −40 mm, or between about 40 mm and about 60 mm. In some embodiments, a weight, such as weight24, can have a y-axis coordinate greater than about 0 mm. More specifically, in certain embodiments, the weight24has a y-axis coordinate between about 0 mm and about 20 mm, between about 20 mm and about 50 mm or greater than about 50 mm. 
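Before continuing with the weight coordinate ranges, the first through fourth constraints defined under the moments of inertia discussion above can be illustrated numerically. The sketch uses assumed moments of inertia (in kg·mm2) and assumed masses (in kilograms); all of the inputs are hypothetical.

# Illustrative evaluation of the four constraints (all inputs assumed).
I_cg_x = 300.0                 # kg·mm2, assumed moment of inertia about the head CG x-axis
I_cg_z = 450.0                 # kg·mm2, assumed moment of inertia about the head CG z-axis
body_plus_ports_kg = 0.200     # assumed body mass plus total weight port mass
total_weight_mass_kg = 0.028   # assumed total weight mass

first_constraint = I_cg_x / body_plus_ports_kg      # MOI about the CG x-axis per unit structural mass
second_constraint = I_cg_x * total_weight_mass_kg   # MOI about the CG x-axis times total weight mass
third_constraint = I_cg_z / body_plus_ports_kg      # MOI about the CG z-axis per unit structural mass
fourth_constraint = I_cg_z * total_weight_mass_kg   # MOI about the CG z-axis times total weight mass

print(first_constraint, second_constraint, third_constraint, fourth_constraint)
# 1500.0 8.4 2250.0 12.6 for these example inputs

These example outputs are of the same order as the constraint ranges recited above, but they should not be read as properties of any described embodiment.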
In some embodiments, a weight, such as weight24, can have a z-axis coordinate between about −30 mm and about 20 mm. In specific embodiments, the weight can have an origin z-axis coordinate between about −20 mm and about −10 mm, between about 0 mm and about 20 mm, between about 5 mm and about 15 mm, or between about −30 mm and about −10 mm. In some embodiments including a first weight and a second weight, the first weight can have an origin x-axis coordinate between about −60 mm and about 0 mm and the second weight can have an origin x-axis coordinate between about 0 mm and about 60 mm. In certain embodiments, the first weight has an origin x-axis coordinate between about −52 mm and about −12 mm, between about −50 mm and about −10 mm, between about −42 mm and about −22 mm or between about −40 mm and about −20 mm. In certain embodiments, the second weight has an origin x-axis coordinate between about 10 mm and about 50 mm, between about 7 mm and about 42 mm, between about 12 mm and about 32 mm or between about 20 mm and about 40 mm. In some embodiments, the first and second weights can have respective y-axis coordinates between about 0 mm and about 130 mm. In certain embodiments, the first and second weights have respective y-axis coordinates between about 20 mm and about 40 mm, between about 20 mm and about 50 mm, between about 36 mm and about 76 mm or between about 46 mm and about 66 mm. In certain embodiments of the golf club head130having first, second and third weights131, the first weight can have an origin x-axis coordinate between about −47 mm and about −27 mm, the second weight can have an origin x-axis coordinate between about 22 mm and about 44 mm and the third weight can have an origin x-axis coordinate between about −30 mm and about 30 mm. In certain embodiments, the first and second weights can each have a y-axis coordinate between about 10 mm and about 30 mm, and the third weight can have a y-axis coordinate between about 63 mm and about 83 mm. In certain embodiments, the first weight and second weights can each have a z-axis coordinate between about −20 mm and about −10 mm, and the third weight can have a z-axis coordinate between about 0 mm and about 20 mm or between about −30 mm and about −10 mm. In certain embodiments of the golf club head28having first, second, third and fourth weights24, the first weight can have an origin x-axis coordinate between about −47 mm and about −27 mm, the second weight can have an origin x-axis coordinate between about 24 mm and about 44 mm, the third weight can have an origin x-axis coordinate between about −30 mm and about −10 mm and the fourth weight can have an origin x-axis coordinate between about 8 mm and about 28 mm. In certain embodiments, the first and second weights can each have a y-axis coordinate between about 10 mm and about 30 mm, and the third and fourth weights can each have a y-axis coordinate between about 63 mm and about 83 mm. In certain embodiments of the golf club head320having first, second, third and fourth weights, the first weight can have an origin x-axis coordinate between about −33 mm and about −27 mm, the second weight can have an origin x-axis coordinate between about 28 mm and about 36 mm, the third and fourth weights can have an origin x-axis coordinate between about 9 mm and about 13 mm. 
In certain embodiments, the first and second weights can each have a y-axis coordinate between about 14 mm and about 18 mm, and the third and fourth weights can each have a y-axis coordinate between about 98 mm and about 120 mm. In certain embodiments, the first weight can have an origin z-axis coordinate between about −18 mm and about −14 mm, the second weight can have an origin z-axis coordinate between about −16 mm and about −12 mm, the third weight can have an origin z-axis coordinate between about 8 mm and about 10 mm, and the fourth weight can have an origin z-axis coordinate between about −21 mm and about −10 mm. Weight location ranges for two additional sets of examples (range 1 and range 2, respectively) of a four weight embodiment are listed in Table 4.

TABLE 4
Weight Locations (mm)
Origin Axis    Weight 1          Weight 2          Weight 3          Weight 4
x, range 1     10.5 to 11.6      10.5 to 11.6      30.4 to 33.6      −28.5 to −31.5
y, range 1     104 to 115        104 to 115        15.9 to 17.5      15.2 to 16.8
z, range 1     −18.1 to −20      8.6 to 9.5        −13.3 to −14.7    −15.2 to −16.8
x, range 2     10.8 to 11.2      10.8 to 11.2      31.4 to 32.6      −29.4 to −30.6
y, range 2     107 to 111        107 to 111        16.4 to 17.0      15.7 to 16.3
z, range 2     −18.6 to −19.4    8.8 to 9.2        −13.7 to −14.3    −15.7 to −16.3

2. Distance from Head Origin to Weights The location of a weight on a golf club head of the present application can be approximated by its distance away from a fixed point on the golf club head. For example, the positions of the weights24about the golf club head28can be described according to their distances away from the golf club head origin170. In some embodiments of the golf club head136having a first weight138or a first weight and a second weight138, distances from the head origin170to each weight can be between about 20 mm and 200 mm. In certain embodiments, the distances can be between about 20 mm and about 60 mm, between about 60 mm and about 100 mm, between about 100 mm and about 140 mm or between about 140 mm and about 200 mm. In some embodiments of the golf club head130having three weights132, including a first weight positioned proximate a toe portion of the golf club head, a second weight positioned proximate a heel portion of the golf club head and a third weight positioned proximate a rear portion of the golf club head, the distances between the head origin and the first and second weights, respectively, can be between about 20 mm and about 60 mm and the distance between the head origin and the third weight can be between about 40 mm and about 100 mm. More specifically, in certain embodiments, the distances between the head origin and the first and second weights, respectively, can be between about 30 mm and about 50 mm and the distance between the head origin and the third weight can be between about 60 mm and about 80 mm. In some embodiments of the golf club head28having four weights24, including a first weight positioned proximate a front toe portion of the golf club head, a second weight positioned proximate a front heel portion of the golf club head, a third weight positioned proximate a rear toe portion of the golf club head and a fourth weight positioned proximate a rear heel portion of the golf club head, the distances between the head origin and the first and second weights can be between about 20 mm and about 60 mm and the distances between the head origin and the third and fourth weights can be between about 40 mm and about 100 mm.
More specifically, in certain embodiments, the distances between the head origin and the first and second weights can be between about 30 mm and about 50 mm and the distances between the head origin and the third and fourth weights can be between about 60 mm and about 80 mm. 3. Distance from Head Origin to Weight Ports The location of a weight port on a golf club head can be approximated by its distance away from a fixed point on the golf club head. For example, the positions of one or more weight ports about the golf club head28can be described according to their distances away from the golf club head origin170. In some embodiments of the golf club head136having first and second weight ports138, distances from the head origin170to each weight port can be between about 20 mm and 200 mm. In certain embodiments, the distances can be between about 20 mm and about 60 mm, between about 60 mm and about 100 mm, between about 100 mm and about 140 mm or between about 140 mm and about 200 mm. 4. Distance Between Weights and/or Weight Ports The location of a weight and/or a weight port about a golf club head of the present application can also be defined relative to its approximate distance away from other weights and/or weight ports. In some embodiments, a golf club head of the present application has only one weight and a first weight port and a second weight port. In such an embodiment, a distance between a first weight position, defined for a weight when installed in a first weight port, and a second weight position, defined for the weight when installed in a second weight port, is called a “separation distance.” In some embodiments, the separation distance is between about 5 mm and about 200 mm. In certain embodiments, the separation distance is between about 50 mm and about 100 mm, between about 100 mm and about 150 mm or between about 150 mm and about 200 mm. In some specific embodiments, the first weight port is positioned proximate a toe portion of the golf club head and the second weight port is positioned proximate a heel portion of the golf club head. In some embodiments of the golf club head136with two weights137and first and second weight ports138, the two weights include a first weight and a second weight. In some embodiments, the distance between the first and second weights137is between about 5 mm and about 200 mm. In certain embodiments, the distance between the first and second weights137is between about 5 mm and about 50 mm, between about 50 mm and about 100 mm, between about 100 mm and about 150 mm or between about 150 mm and about 200 mm. In some specific embodiments, the first weight is positioned proximate a toe portion of the golf club head and the second weight is positioned proximate a heel portion of the golf club head. In some embodiments of a golf club head having at least two weight ports, a distance between the first and second weight ports is between about 5 mm and about 200 mm. In more specific embodiments, the distance between the first and second weight ports is between about 5 mm and about 50 mm, between about 50 mm and about 100 mm, between about 100 mm and about 150 mm or between about 150 mm and about 200 mm. In some specific embodiments, the first weight port is positioned proximate a toe portion of the golf club head and the second weight port is positioned proximate a heel portion of the golf club head. 
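The distances discussed in this subsection are straight-line (vectorial) distances between feature points, as defined earlier. As a simple illustration, the sketch below computes the separation between two weights from assumed head origin coordinates; the coordinates are hypothetical and chosen only for the example.

import math

# Straight-line (vectorial) distance between two weight positions (assumed coordinates, in mm).
first_weight_xyz = (-30.0, 16.0, -15.0)   # hypothetical weight proximate the toe portion
second_weight_xyz = (32.0, 17.0, -14.0)   # hypothetical weight proximate the heel portion

distance_mm = math.dist(first_weight_xyz, second_weight_xyz)
print(round(distance_mm, 1))   # about 62.0 mm for these example coordinates

A separation of roughly 62 mm would fall within the about 50 mm to about 100 mm range described above for heel and toe weights, although the inputs are illustrative.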
In some embodiments of the golf club head130having first, second and third weights131, a distance between the first and second weights is between about 40 mm and about 100 mm, and a distance between the first and third weights, and the second and third weights, is between about 30 mm and about 90 mm. In certain embodiments, the distance between the first and second weights is between about 60 mm and about 80 mm, and the distance between the first and third weights, and the second and third weights, is between about 50 mm and about 70 mm. In some embodiments, the first weight is positioned proximate a toe portion of the golf club head, the second weight is positioned proximate a heel portion of the golf club head and the third weight is positioned proximate a rear portion of the golf club head. In some embodiments of the golf club head28having first, second, third and fourth weights24, a distance between the first and second weights, the first and fourth weights, and the second and third weights is between about 40 mm and about 100 mm; a distance between the third and fourth weights is between about 5 mm and about 80 mm; and a distance between the first and third weights and the second and fourth weights is about 30 mm to about 90 mm. In more specific embodiments, a distance between the first and second weights, the first and fourth weights, and the second and third weights is between about 60 mm and about 80 mm; a distance between the first and third weights and the second and fourth weights is between about 50 mm and about 70 mm; and a distance between the third and fourth weights is between about 5 mm and about 50 mm. In some specific embodiments, the first weight is positioned proximate a front toe portion of the golf club head, the second weight is positioned proximate a front heel portion of the golf club head, the third weight is positioned proximate a rear toe portion of the golf club head and the fourth weight is positioned proximate a rear heel portion of the golf club head. In other specific embodiments, the first weight is positioned proximate a front toe portion of the golf club head, the second weight is positioned proximate a front heel portion of the golf club head, the third weight is positioned proximate a high rear portion of the golf club head and the fourth weight is positioned proximate a low rear portion of the golf club head. 5. Weight Port Axis Angular Orientations The weight port radial axis can be defined as having a positive weight port radial axis portion extending from the exterior of the club head into the cavity. In some embodiments of a golf club head of the present application, an angle formed between the weight port radial axis and a golf club head impact axis is between about 10 degrees and about 80 degrees. The golf club head impact axis can be defined as the origin y-axis174in the negative direction. In some specific embodiments, the angle is between about 25 degrees and about 65 degrees. The angled orientation of the weight port radial axis with respect to the golf club head impact axis is desirable to reduce the axial load on the weights and their associated retaining mechanism when the club head impacts a ball. In some embodiments of a golf club head, an angle formed between the weight port radial axis and the origin z-axis in the positive direction is between about 10 degrees and about 80 degrees (i.e. generally downwards) or between about 100 degrees and about 170 degrees (i.e. generally upwards). 
For example, for weight ports formed in a high or upper portion of the club head body such as in the crown, an angle formed between the weight port radial axis and the origin z-axis in the positive direction is typically between about 10 degrees and about 80 degrees, while for weight ports formed in a lower portion of the club head body, an angle formed between the weight port radial axis and the origin z-axis in the positive direction is typically between about 100 degrees and about 170 degrees. A relative weight port radial axis angle can be formed between a first weight port radial axis of a first port and a second weight port radial axis of a second port. In some embodiments of a golf club head of the present application, the relative weight port radial axis angle can be between about 0 degrees and about 170 degrees. In some embodiments, the relative weight port radial axis angle is between about 0 degrees and about 135 degrees. In some embodiments, the first and second ports can have essentially the same weight port radial axis angles and a relative weight port radial axis angle can be approximately 0 degrees. In some of the embodiments, the first and second ports can be both located in a front portion of a golf club head or both located in a low rear portion of the golf club head. In some embodiments, the relative weight port radial axis angle is nonzero. In some of these embodiments, the first port can be located in a front portion of a golf club head and the second port can be located in a rear portion of a golf club head, or the first port can be located in a high rear portion of a golf club head and the second port can be located in a low rear portion of a golf club head. E. Distance from Head Origin to Head Center of Gravity The location of the CG of a club head can be defined by its spatial relationship to a fixed point on the golf club head. For example, as discussed above, the location of the golf club head CG can be described according to the spatial relationship between the CG and the golf club head origin. In some embodiments of a golf club head having one weight, the golf club head has a CG with a head origin x-axis coordinate between about −10 mm and about 10 mm and a head origin y-axis coordinate greater than about 15 mm or less than about 50 mm. In some embodiments, the CG has a head origin z-axis coordinate between about −6 mm and about 1 mm. In some embodiments of a golf club head having two weights, the golf club head has a CG with an origin x-axis coordinate between about −10 mm and about 10 mm or between about −4 mm and about 8 mm, and an origin y-axis coordinate greater than about 15 mm or between about 15 mm and about 50 mm. In some embodiments of a golf club head having three or four weights, the golf club head has a CG with an origin x-axis coordinate between about −3 mm and about 6 mm and an origin y-axis coordinate between about 20 mm and about 40 mm. In some embodiments of a golf club head having three or four weights, the CG has a head origin z-axis coordinate between about −6 mm and about 1 mm. In some embodiments of a golf club head having a thin sole or thin skirt construction, the golf club head has a CG with an origin x-axis coordinate between about −5 mm and about 5 mm, an origin y-axis coordinate greater than about 0 mm and an origin z-axis coordinate less than about 0 mm. 
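The CG coordinates discussed in this section follow from the mass-weighted average of the positions of all of the head's components. The sketch below shows the basic calculation for a body (including its ports) plus two movable weights; every mass and position used here is an assumed, illustrative value rather than data for any described embodiment.

# Head CG as the mass-weighted average of component CG positions (assumed values).
# Each entry: (mass in grams, (x, y, z) of that component's own CG in head origin coordinates, mm).
components = [
    (200.0, (1.0, 30.0, -2.0)),     # hypothetical body plus weight ports
    (12.0, (-30.0, 16.0, -15.0)),   # hypothetical heavier weight placed toward the toe
    (2.0, (32.0, 17.0, -14.0)),     # hypothetical lighter weight placed toward the heel
]

total_mass_g = sum(m for m, _ in components)
head_cg_mm = tuple(
    sum(m * position[i] for m, position in components) / total_mass_g
    for i in range(3)
)
print([round(c, 1) for c in head_cg_mm])   # approximately [-0.4, 29.1, -2.8]

Swapping the heavier and lighter weights between the toe-side and heel-side positions shifts the origin x-axis coordinate of the computed CG in the opposite direction, which illustrates how repositioning the weights moves the CG described in this section.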
In some embodiments of a golf club head having a weight in the crown or in a high rear portion of the golf club head body, the golf club head has a CG with an origin z-axis coordinate between about −6 mm and about 1 mm. In other embodiments of a golf club head having a weight in a high rear portion of the golf club head body, the golf club head has a CG with an origin z-axis coordinate between about −5 mm and about 0 mm. In other embodiments of a golf club head having three or four weights, the golf club head has a CG with an origin x-axis coordinate between about −3 mm and about 6 mm, an origin y-axis coordinate between about 20 mm and about 40 mm, and an origin z-axis coordinate between about −5 mm and about 0 mm. More particularly, in specific embodiments of a golf club head having specific configurations, the golf club head has a CG with coordinates approximated in Table 5.

TABLE 5
CG Coordinates     Two Weights   Three Weights   Four Weights   Thin Sole/Skirt Construction
origin x-axis      −3 to 8       −3 to 6         −3 to 6        −2 to 2
coordinate (mm)    −3 to 2       −1 to 4         −1 to 4        −1 to 1
                   2 to 6        −3 to 3         −3 to 3        −2 to 1
                   0 to 6        2 to 5          −4 to 6        −4 to 4
                   −2 to 6
origin y-axis      15 to 25      20 to 40        20 to 40       12 to 15
coordinate (mm)    25 to 35      23 to 40        23 to 40       15 to 18
                   35 to 50      20 to 37        20 to 37       >18
                   30 to 40      20 to 38        22 to 38       31 to 37
                   22 to 38      20 to 30
origin z-axis      −5 to 0       −5 to 0         −5 to 0        −5 to 0
coordinate (mm)    −6 to 1       −6 to 1         −6 to 1        −6 to 1

F. Head Geometry and Weight Characteristics 1. Loft and Lie According to some embodiments of the present application, a golf club head has a loft angle between about 6 degrees and about 16 degrees or between about 13 degrees and about 30 degrees. In yet other embodiments, the golf club has a lie angle between about 55 degrees and about 65 degrees. 2. Coefficient of Restitution Generally, a coefficient of restitution (COR) of a golf club head is the measurement of the amount of energy transferred between a golf club face plate and a ball at impact. In a simplified form, the COR may be expressed as a percentage of the speed of a golf ball immediately after being struck by the club head divided by the speed of the club head upon impact with the golf ball, with the measurement of the golf ball speed and club head speed governed by United States Golf Association guidelines. In some embodiments of the present application, the golf club head has a COR greater than about 0.8. 3. Thin Wall Construction According to some embodiments of a golf club head of the present application, the golf club head has a thin wall construction. Among other advantages, thin wall construction facilitates the redistribution of material from one part of a club head to another part of the club head. Because the redistributed material has a certain mass, the material may be redistributed to locations in the golf club head to enhance performance parameters related to mass distribution, such as CG location and moment of inertia magnitude. Club head material that is capable of being redistributed without affecting the structural integrity of the club head is commonly called discretionary weight. In some embodiments of the present invention, thin wall construction enables discretionary weight to be removed from one or a combination of the striking plate, crown, skirt, or sole and redistributed in the form of weight ports and corresponding weights.
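The amount of discretionary weight freed by thin wall construction can be estimated from the wall's areal weight, i.e., the material density multiplied by the wall thickness (discussed further in the areal weight section below). The sketch uses an approximate density for a Ti-6Al-4V titanium alloy and an assumed crown area and pair of thicknesses; all of these inputs are illustrative assumptions rather than properties of any described embodiment.

# Rough estimate of discretionary mass freed by thinning a crown panel (assumed values).
density_g_per_cm3 = 4.43       # approximate density of a Ti-6Al-4V titanium alloy
crown_area_cm2 = 80.0          # hypothetical crown surface area
baseline_thickness_cm = 0.10   # hypothetical 1.0 mm baseline wall
thin_thickness_cm = 0.07       # hypothetical 0.7 mm thin-wall crown

areal_weight_baseline = density_g_per_cm3 * baseline_thickness_cm   # g/cm2
areal_weight_thin = density_g_per_cm3 * thin_thickness_cm           # g/cm2

freed_mass_g = (areal_weight_baseline - areal_weight_thin) * crown_area_cm2
print(round(areal_weight_thin, 2), round(freed_mass_g, 1))   # 0.31 g/cm2 and about 10.6 g

On this kind of estimate, roughly ten grams of mass becomes available for redistribution as weight ports and weights, as described above.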
Thin wall construction can include a thin sole construction, i.e., a sole with a thickness less than about 0.9 mm but greater than about 0.4 mm over at least about 50% of the sole surface area; and/or a thin skirt construction, i.e., a skirt with a thickness less than about 0.8 mm but greater than about 0.4 mm over at least about 50% of the skirt surface area; and/or a thin crown construction, i.e., a crown with a thickness less than about 0.8 mm but greater than about 0.4 mm over at least about 50% of the crown surface area. More specifically, in certain embodiments of a golf club having a thin sole construction and at least one weight and two weight ports, the sole, crown and skirt can have respective thicknesses over at least about 50% of their respective surfaces between about 0.4 mm and about 0.9 mm, between about 0.8 mm and about 0.9 mm, between about 0.7 mm and about 0.8 mm, between about 0.6 mm and about 0.7 mm, or less than about 0.6 mm. According to a specific embodiment of a golf club having a thin skirt construction, the thickness of the skirt over at least about 50% of the skirt surface area can be between about 0.4 mm and about 0.8 mm, between about 0.6 mm and about 0.7 mm or less than about 0.6 mm. 4. Face Plate Geometries A height and a width can be defined for the face plate of the golf club head. According to some embodiments and as shown inFIG.17, a face plate148has a height178measured from a lowermost point of the face plate to an uppermost point of the face plate, and a width180measured from a point on the face plate proximate the heel portion152to a point on the face plate proximate a toe portion154, when the golf club is ideally positioned at address. For example, in some embodiments of a fairway wood-type golf club head of the present application, the golf club head face plate has a height between about 32 mm and about 38 mm and a width between about 86 mm and about 92 mm. More specifically, a particular embodiment of a fairway wood-type golf club head has a face plate height between about 34 mm and about 36 mm and a width between about 88 mm and about 90 mm. In yet a more specific embodiment of a fairway wood-type golf club head, the face plate height is about 35 mm and the width is about 89 mm. In some embodiments of a driver type golf club head of the present application, the golf club head face plate has a height between about 53 mm and about 59 mm and a width between about 105 mm and about 111 mm. More specifically, a particular embodiment of a driver type golf club head has a face plate height between about 55 mm and about 57 mm and a width between about 107 mm and about 109 mm. In yet a more specific embodiment of a driver type golf club head, the face plate height is about 56 mm and the width is about 108 mm. According to some embodiments, a golf club head face plate can include a variable thickness faceplate. Varying the thickness of a faceplate may increase the size of a club head COR zone, commonly called the sweet spot of the golf club head, which, when striking a golf ball with the golf club head, allows a larger area of the face plate to deliver consistently high golf ball velocity and shot forgiveness. A variable thickness face plate182, according to one embodiment of a golf club head illustrated inFIGS.18and19, includes a generally circular protrusion184extending into the interior cavity towards the rear portion of the golf club head. 
When viewed in cross-section, as illustrated inFIG.18, protrusion184includes a portion with increasing thickness from an outer portion186of the face plate182to an intermediate portion187. The protrusion184further includes a portion with decreasing thickness from the intermediate portion187to an inner portion188positioned approximately at a center of the protrusion preferably proximate the golf club head origin. In some embodiments of a golf club head having a face plate with a protrusion, the maximum face plate thickness is greater than about 4.8 mm, and the minimum face plate thickness is less than about 2.3 mm. In certain embodiments, the maximum face plate thickness is between about 5 mm and about 5.4 mm and the minimum face plate thickness is between about 1.8 mm and about 2.2 mm. In yet more particular embodiments, the maximum face plate thickness is about 5.2 mm and the minimum face plate thickness is about 2 mm. In some embodiments of a golf club head having a face plate with a protrusion and a thin sole construction or a thin skirt construction, the maximum face plate thickness is greater than about 3.0 mm and the minimum face plate thickness is less than about 3.0 mm. In certain embodiments, the maximum face plate thickness is between about 3.0 mm and about 4.0 mm, between about 4.0 mm and about 5.0 mm, between about 5.0 mm and about 6.0 mm or greater than about 6.0 mm, and the minimum face plate thickness is between about 2.5 mm and about 3.0 mm, between about 2.0 mm and about 2.5 mm, between about 1.5 mm and about 2.0 mm or less than about 1.5 mm. For some embodiments of a golf club head of the present application, a ratio of the minimum face plate thickness to the maximum face plate thickness is less than about 0.4. In more specific embodiments, the ratio is between about 0.36 and about 0.39. In yet more certain embodiments, the ratio is about 0.38. For some embodiments of a fairway wood-type golf club head of the present application, an aspect ratio, (i.e., the ratio of the face plate height to the face plate width) is between about 0.35 and about 0.45. In more specific embodiments, the aspect ratio is between about 0.38 and about 0.42, or about 0.4. For some embodiments of a driver type golf club head of the present application, the aspect ratio is between about 0.45 and about 0.58. In more specific embodiments, the aspect ratio is between about 0.49 and about 0.54, or about 0.52. G. Mass Ratios/Constraints 1. Ratio of Total Weight Port Mass to Body Mass According to some embodiments of the golf club head136having two weight ports138and either one weight137or two weights137, a ratio of the total weight port mass to the body mass is between about 0.08 and about 2.0. According to some specific embodiments, the ratio can be between about 0.08 and about 0.1, between about 0.1 and about 0.17, between about 0.17 and about 0.24, between about 0.24 and about 0.3 or between about 0.3 and about 2.0. In some embodiments of the golf club head130having three weight ports132and three weights131, the ratio of the total weight port mass to the body mass is between about 0.015 and about 0.82. In specific embodiments, the ratio is between about 0.015 and about 0.22, between about 0.22 and about 0.42, between about 0.42 and about 0.62 or between about 0.62 and about 0.82. In some embodiments of the golf club head28having four weight ports96,98,102,104and four weights24, the ratio of the total weight port mass to the body mass is between about 0.019 and about 0.3. 
In specific embodiments, the ratio is between about 0.019 and about 0.09, between about 0.09 and about 0.16, between about 0.16 and about 0.23 or between about 0.23 and about 0.3. 2. Ratio of Total Weight Port Mass Plus Total Weight Mass to Body Mass According to some embodiments of the golf club head136having two weight ports138and one weight137or two weights137, a ratio of the total weight port mass plus the total weight mass to the body mass is between about 0.06 and about 3.0. More specifically, according to certain embodiments, the ratio can be between about 0.06 and about 0.3, between about 0.3 and about 0.6, between about 0.6 and about 0.9, between about 0.9 and about 1.2 or between about 1.2 and about 3.0. In some embodiments of the golf club head130having three weight ports132and three weights131, the ratio of the total weight port mass plus the total weight mass to the body mass is between about 0.044 and about 3.1. In specific embodiments, the ratio is between about 0.044 and about 0.8, between about 0.8 and about 1.6, between about 1.6 and about 2.3 or between about 2.3 and about 3.1. In some embodiments of the golf club head28having four weight ports96,98,102,104and four weights24, the ratio of the total weight port mass plus the total weight mass to the body mass is between about 0.049 and about 4.6. In specific embodiments, the ratio is between about 0.049 and about 1.2, between about 1.2 and about 2.3, between about 2.3 and about 3.5 or between about 3.5 and about 4.6. 3. Product of Total Weight Mass and Separation Distance In some embodiments of the golf club head136having two weight ports138and one weight137, the weight mass multiplied by the separation distance of the weight is between about 50 g·mm and about 15,000 g·mm. More specifically, in certain embodiments, the weight mass multiplied by the weight separation distance is between about 50 g·mm and about 500 g·mm, between about 500 g·mm and about 2,000 g·mm, between about 2,000 g·mm and about 5,000 g·mm or between about 5,000 g·mm and about 15,000 g·mm. 4. Product of Maximum Weight Mass Minus Minimum Weight Mass and Distance Between Maximum and Minimum Weights In some embodiments of a golf club head of the present application having two, three or four weights, a maximum weight mass minus a minimum weight mass multiplied by the distance between the maximum weight and the minimum weight is between about 950 g·mm and about 14,250 g·mm. More specifically, in certain embodiments, the weight mass multiplied by the weight separation distance is between about 950 g·mm and about 4,235 g·mm, between about 4,235 g·mm and about 7,600 g·mm, between about 7,600 g·mm and about 10,925 g·mm or between about 10,925 g·mm and about 14,250 g·mm. 5. Ratio of Total Weight Mass to Sum of Body Mass and Total Weight Port Mass According to some embodiments of a golf club head having at least one weight and at least two weight ports, a ratio of the total weight mass to the sum of the body mass plus the total weight port mass is between about 0.05 and about 1.25. In specific embodiments, the ratio is between about 0.05 and about 0.35, between about 0.35 and about 0.65, between about 0.65 and about 0.95 or between about 0.95 and about 1.25. H. 
Sole, Crown and Skirt Areal Weights According to some embodiments of a golf club head of the present application, an areal weight, i.e., material density multiplied by the material thickness, of the golf club head sole, crown and skirt, respectively, is less than about 0.45 g/cm2over at least about 50% of the surface area of the respective sole, crown and skirt. In some specific embodiments, the areal weight is between about 0.15 g/cm2and about 0.25 g/cm2, between about 0.25 g/cm2and about 0.35 g/cm2or between about 0.35 g/cm2and about 0.45 g/cm2. According to some embodiments of a golf club having a skirt thickness less than about 0.8 mm, the head skirt areal weight is less than about 0.41 g/cm2over at least about 50% of the surface area of the skirt. In specific embodiments, the skirt areal weight is between about 0.15 g/cm2and about 0.24 g/cm2, between about 0.24 g/cm2and about 0.33 g/cm2or between about 0.33 g/cm2and about 0.41 g/cm2. I. EXAMPLES 1. Example A According to one embodiment, a golf club head has two ports and at least one weight. The weight has a head origin x-axis coordinate between about −20 mm and about 20 mm and a mass between about 5 grams and about 50 grams. The golf club head has a volume between about 180 cm3and about 600 cm3, and a CG with a head origin y-axis coordinate greater than or equal to about 15 mm. In a specific embodiment, the weight has a head origin y-axis coordinate between about 0 mm and about 20 mm, between about 20 mm and about 50 mm, or greater than 50 mm. In a specific embodiment, the golf club head has a CG with a head origin x-axis coordinate between about −10 mm and about 10 mm and a y-axis coordinate less than or equal to about 50 mm. In a more specific embodiment, the golf club head has a moment of inertia about the head CG x-axis between about 140 kg·mm2and about 400 kg·mm2, and a moment of inertia about the head CG z-axis between about 250 kg·mm2and about 600 kg·mm2. 2. Example B According to another embodiment, a golf club head has first and second ports and corresponding first and second weights disposed in the ports. The first weight has a head origin x-axis coordinate between about −60 mm and about 0 mm and a mass between about 1 gram and about 100 grams. The second weight has a head origin x-axis coordinate between about 0 mm and about 60 mm and a mass between about 1 gram and about 100 grams. The golf club head has a volume between about 180 cm3and about 600 cm3, and a CG with a head origin y-axis coordinate greater than or equal to about 15 mm. In a specific embodiment, the first and second weights each have a head origin y-axis coordinate between about 0 mm and about 130 mm. In a specific embodiment, the golf club head has a CG with a head origin x-axis coordinate between about −10 mm and about 10 mm and a y-axis coordinate between about 15 mm to about 25 mm, or between about 25 mm to about 35 mm, or between about 35 mm to about 50 mm. In a more specific embodiment, the golf club head has a moment of inertia about the head CG x-axis between about 140 kg·mm2and about 400 kg·mm2, a moment of inertia about the head CG z-axis between about 250 kg·mm2and about 600 kg·mm2, and a head volume greater than or equal to 250 cm3. 3. Example C According to another embodiment, a golf club head has two ports and at least one weight. The weight has a head origin x-axis coordinate between about −40 mm and about −20 mm or between about 20 mm and about 40 mm, and a mass between about 5 grams and about 50 grams. 
I. EXAMPLES 1. Example A According to one embodiment, a golf club head has two ports and at least one weight. The weight has a head origin x-axis coordinate between about −20 mm and about 20 mm and a mass between about 5 grams and about 50 grams. The golf club head has a volume between about 180 cm³ and about 600 cm³, and a CG with a head origin y-axis coordinate greater than or equal to about 15 mm. In a specific embodiment, the weight has a head origin y-axis coordinate between about 0 mm and about 20 mm, between about 20 mm and about 50 mm, or greater than 50 mm. In a specific embodiment, the golf club head has a CG with a head origin x-axis coordinate between about −10 mm and about 10 mm and a y-axis coordinate less than or equal to about 50 mm. In a more specific embodiment, the golf club head has a moment of inertia about the head CG x-axis between about 140 kg·mm² and about 400 kg·mm², and a moment of inertia about the head CG z-axis between about 250 kg·mm² and about 600 kg·mm². 2. Example B According to another embodiment, a golf club head has first and second ports and corresponding first and second weights disposed in the ports. The first weight has a head origin x-axis coordinate between about −60 mm and about 0 mm and a mass between about 1 gram and about 100 grams. The second weight has a head origin x-axis coordinate between about 0 mm and about 60 mm and a mass between about 1 gram and about 100 grams. The golf club head has a volume between about 180 cm³ and about 600 cm³, and a CG with a head origin y-axis coordinate greater than or equal to about 15 mm. In a specific embodiment, the first and second weights each have a head origin y-axis coordinate between about 0 mm and about 130 mm. In a specific embodiment, the golf club head has a CG with a head origin x-axis coordinate between about −10 mm and about 10 mm and a y-axis coordinate between about 15 mm and about 25 mm, between about 25 mm and about 35 mm, or between about 35 mm and about 50 mm. In a more specific embodiment, the golf club head has a moment of inertia about the head CG x-axis between about 140 kg·mm² and about 400 kg·mm², a moment of inertia about the head CG z-axis between about 250 kg·mm² and about 600 kg·mm², and a head volume greater than or equal to 250 cm³. 3. Example C According to another embodiment, a golf club head has two ports and at least one weight. The weight has a head origin x-axis coordinate between about −40 mm and about −20 mm or between about 20 mm and about 40 mm, and a mass between about 5 grams and about 50 grams. The golf club head has a volume between about 180 cm³ and about 600 cm³, and a CG with a head origin y-axis coordinate greater than or equal to about 15 mm. In a specific embodiment, the weight has a head origin y-axis coordinate between about 0 mm and about 20 mm, between about 20 mm and about 50 mm, or greater than 50 mm. In a specific embodiment, the golf club head has a CG with a head origin x-axis coordinate between about −10 mm and about 10 mm and a y-axis coordinate less than or equal to about 50 mm. In a more specific embodiment, the golf club head has a moment of inertia about the head CG x-axis between about 140 kg·mm² and about 400 kg·mm², and a moment of inertia about the head CG z-axis between about 250 kg·mm² and about 600 kg·mm². 4. Example D According to another embodiment, a golf club head has two ports and at least one weight. The weight has a head origin x-axis coordinate between about −60 mm and about −40 mm or between about 40 mm and about 60 mm, and a mass between about 5 grams and about 50 grams. The golf club head has a volume between about 180 cm³ and about 600 cm³, and a CG with a head origin y-axis coordinate greater than or equal to about 15 mm. In a specific embodiment, the weight has a y-axis coordinate between about 0 mm and about 20 mm, between about 20 mm and about 50 mm, or greater than 50 mm. In a specific embodiment, the golf club head has a CG with a head origin x-axis coordinate between about −10 mm and about 10 mm and a y-axis coordinate less than or equal to about 50 mm. In a more specific embodiment, the golf club head has a moment of inertia about the head CG x-axis between about 140 kg·mm² and about 400 kg·mm², and a moment of inertia about the head CG z-axis between about 250 kg·mm² and about 600 kg·mm². 5. Example E According to another embodiment, a golf club head has first and second ports and corresponding first and second weights disposed in the ports. The first weight has a head origin x-axis coordinate between about −52 mm and about −12 mm, a head origin y-axis coordinate between about 36 mm and about 76 mm, and a mass between about 6 grams and about 18 grams. The second weight has a head origin x-axis coordinate between about 10 mm and about 50 mm, a head origin y-axis coordinate between about 36 mm and about 76 mm, and a mass between about 1 gram and about 3 grams. The golf club head has a CG with a head origin x-axis coordinate between about −3 mm and about 2 mm and a head origin y-axis coordinate between about 30 mm and about 40 mm. In a specific embodiment, the golf club head has a volume between about 400 cm³ and about 500 cm³, and the sum of the body mass and the total port mass is between about 180 grams and about 215 grams. In a more specific embodiment, the golf club head has a moment of inertia about the head CG x-axis between about 220 kg·mm² and about 360 kg·mm² and a moment of inertia about the head CG z-axis between about 360 kg·mm² and about 500 kg·mm². 6. Example F According to another embodiment, a golf club head has first and second ports and corresponding first and second weights disposed in the ports. The first weight has a head origin x-axis coordinate between about −52 mm and about −12 mm, a head origin y-axis coordinate between about 36 mm and about 76 mm, and a mass between about 1 gram and about 3 grams. The second weight has a head origin x-axis coordinate between about 10 mm and about 50 mm, a head origin y-axis coordinate between about 36 mm and about 76 mm, and a mass between about 6 grams and about 18 grams.
The golf club head has a CG with a head origin x-axis coordinate between about 2 mm and about 6 mm and a head origin y-axis coordinate between about 30 mm and about 40 mm. In a specific embodiment, the golf club head has a volume between about 400 cm3and about 500 cm3, and the sum of the body mass and the total port mass is between about 180 grams and about 215 grams. In a more specific embodiment, the golf club head has a moment of inertia about the head CG x-axis between about 220 kg·mm2and about 360 kg·mm2and a moment of inertia about the head CG z-axis between about 360 kg·mm2and about 500 kg·mm2. 7. Example G According to another embodiment, a golf club head has first and second ports and corresponding first and second weights disposed in the ports. The first weight has a head origin x-axis coordinate between about −42 mm and about −22 mm, a head origin y-axis coordinate between about 46 mm and about 66 mm, and a mass between about 6 grams and about 18 grams. The second weight has a head origin x-axis coordinate between about 20 mm and about 40 mm, a head origin y-axis coordinate between about 46 mm and about 66 mm, and a mass between about 1 gram and about 3 grams. The golf club head has a CG with a head origin x-axis coordinate between about −2 mm and about 1 mm and a head origin y-axis coordinate between about 31 mm and about 37 mm. In a specific embodiment, the golf club head has a volume between about 440 cm3and about 460 cm3, and the sum of the body mass and the total port mass is between about 180 grams and about 215 grams. In a more specific embodiment, the golf club head has a moment of inertia about the head CG x-axis between about 220 kg·mm2and about 280 kg·mm2and a moment of inertia about the head CG z-axis between about 360 kg·mm2and about 450 kg·mm2. 8. Example H According to another embodiment, a golf club head has first and second ports and corresponding first and second weights disposed in the ports. The first weight has a head origin x-axis coordinate between about −42 mm and about −22 mm, a head origin y-axis coordinate between about 46 mm and about 66 mm, and a mass between about 1 gram and about 3 grams. The second weight has a head origin x-axis coordinate between about 20 mm and about 40 mm, a head origin y-axis coordinate between about 46 mm and about 66 mm, and a mass between about 6 grams and about 18 grams. The golf club head has a CG with a head origin x-axis coordinate between about 2 mm and about 5 mm and a head origin y-axis coordinate between about 31 mm and about 37 mm. In a specific embodiment, the golf club head has a volume between about 440 cm3and about 460 cm3, and the sum of the body mass and the total port mass is between about 180 grams and about 215 grams. In a more specific embodiment, the golf club head has a moment of inertia about the head CG x-axis between about 220 kg·mm2and about 280 kg·mm2and a moment of inertia about the head CG z-axis between about 360 kg·mm2and about 450 kg·mm2. 9. Example I According to another embodiment, a golf club head has first and second ports and corresponding first and second weights disposed in the ports. The first weight has a head origin x-axis coordinate between about −50 mm and about −10 mm, a head origin y-axis coordinate between about 20 mm and about 50 mm, and a mass between about 6 grams and about 18 grams. 
The second weight has a head origin x-axis coordinate between about 7 mm and about 42 mm, a head origin y-axis coordinate between about 20 mm and about 50 mm, and a mass between about 1 gram and about 3 grams. The golf club head has a CG with a head origin x-axis coordinate between about −4 mm and about 4 mm and a head origin y-axis coordinate between about 20 mm and about 30 mm. In a specific embodiment, the golf club head has a volume between about 110 cm3and about 210 cm3, a loft between about 13 degrees and about 30 degrees, and the sum of the body mass and the total port mass is between about 198 grams and about 222 grams. In a more specific embodiment, the golf club head has a moment of inertia about the head CG x-axis between about 70 kg·mm2and about 140 kg·mm2and a moment of inertia about the head CG z-axis between about 200 kg·mm2and about 350 kg·mm2. 10. Example J According to another embodiment, a golf club head has first and second ports and corresponding first and second weights disposed in the ports. The first weight has a head origin x-axis coordinate between about −50 mm and about −10 mm, a head origin y-axis coordinate between about 20 mm and about 50 mm, and a mass between about 1 gram and about 3 grams. The second weight has a head origin x-axis coordinate between about 7 mm and about 42 mm, a head origin y-axis coordinate between about 20 mm and about 50 mm, and a mass between about 6 grams and about 18 grams. The golf club head has a CG with a head origin x-axis coordinate between about −2 mm and about 6 mm and a head origin y-axis coordinate between about 20 mm and about 30 mm. In a specific embodiment, the golf club head has a volume between about 110 cm3and about 210 cm3, a loft between about 13 degrees and about 30 degrees, and the sum of the body mass and the total port mass is between about 198 grams and about 222 grams. In a more specific embodiment, the golf club head has a moment of inertia about the head CG x-axis between about 70 kg·mm2and about 140 kg·mm2and a moment of inertia about the head CG z-axis between about 200 kg·mm2and about 350 kg·mm2. 11. Example K According to another embodiment, a golf club head has first and second ports and corresponding first and second weights disposed in the ports. The first weight has a head origin x-axis coordinate between about −40 mm and about −20 mm, a head origin y-axis coordinate between about 20 mm and about 40 mm, and a mass between about 6 grams and about 18 grams. The second weight has a head origin x-axis coordinate between about 12 mm and about 32 mm, a head origin y-axis coordinate between about 20 mm and about 40 mm, and a mass between about 1 gram and about 3 grams. The golf club head has a CG with a head origin x-axis coordinate between about −4 mm and about 4 mm and a head origin y-axis coordinate between about 20 mm and about 30 mm. In a specific embodiment, the golf club head has a volume between about 110 cm3and about 210 cm3, a loft between about 13 degrees and about 30 degrees, and the sum of the body mass and the total port mass is between about 198 grams and about 222 grams. In a more specific embodiment, the golf club head has a moment of inertia about the head CG x-axis between about 70 kg·mm2and about 140 kg·mm2and a moment of inertia about the head CG z-axis between about 200 kg·mm2and about 350 kg·mm2. 12. Example L According to another embodiment, a golf club head has first and second ports and corresponding first and second weights disposed in the ports. 
The first weight has a head origin x-axis coordinate between about −40 mm and about −20 mm, a head origin y-axis coordinate between about 20 mm and about 40 mm, and a mass between about 1 gram and about 3 grams. The second weight has a head origin x-axis coordinate between about 12 mm and about 32 mm, a head origin y-axis coordinate between about 20 mm and about 40 mm, and a mass between about 6 grams and about 18 grams. The golf club head has a CG with a head origin x-axis coordinate between about −2 mm and about 6 mm and a head origin y-axis coordinate between about 20 mm and about 30 mm. In a specific embodiment, the golf club head has a volume between about 110 cm3and about 210 cm3, a loft between about 13 degrees and about 30 degrees, and the sum of the body mass and the total port mass is between about 198 grams and about 222 grams. In a more specific embodiment, the golf club head has a moment of inertia about the head CG x-axis between about 70 kg·mm2and about 140 kg·mm2and a moment of inertia about the head CG z-axis between about 200 kg·mm2and about 350 kg·mm2. 13. Example M According to another embodiment, a golf club head has first, second, and third ports and corresponding first, second, and third weights disposed in the ports. The first weight has a head origin x-axis coordinate between about −47 mm and about −27 mm, a head origin y-axis coordinate between about 10 mm and about 30 mm, and a mass between about 1 gram and about 3 grams. The second weight has a head origin x-axis coordinate between about −30 mm and about −10 mm, a head origin y-axis coordinate between about 63 mm and about 83 mm, and a mass between about 6 grams and about 18 grams. The third weight has a head origin x-axis coordinate between about 24 mm and about 44 mm, a head origin y-axis coordinate between about 10 mm and about 30 mm, and a mass between about 1 gram and about 3 grams. The golf club head has a CG with a head origin x-axis coordinate between about −1 mm and about 4 mm and a head origin y-axis coordinate between about 23 mm and about 40 mm. In a specific embodiment, the golf club head has a volume between about 360 cm3and about 460 cm3and the sum of the body mass and the total port mass is between about 191 grams and about 211 grams. In a more specific embodiment, the golf club head has a moment of inertia about the head CG x-axis between about 180 kg·mm2and about 280 kg·mm2and a moment of inertia about the head CG z-axis between about 300 kg·mm2and about 450 kg·mm2. 14. Example N According to another embodiment, a golf club head has first, second, and third ports and corresponding first, second, and third weights disposed in the ports. The first weight has a head origin x-axis coordinate between about −47 mm and about −27 mm, a head origin y-axis coordinate between about 10 mm and about 30 mm, and a mass between about 6 grams and about 18 grams. The second weight has a head origin x-axis coordinate between about −30 mm and about −10 mm, a head origin y-axis coordinate between about 63 mm and about 83 mm, and a mass between about 1 gram and about 3 grams. The third weight has a head origin x-axis coordinate between about 24 mm and about 44 mm, a head origin y-axis coordinate between about 10 mm and about 30 mm, and a mass between about 6 grams and about 18 grams. The golf club head has a CG with a head origin x-axis coordinate between about −1 mm and about 4 mm and a head origin y-axis coordinate between about 20 mm and about 37 mm. 
In a specific embodiment, the golf club head has a volume between about 360 cm3and about 460 cm3and the sum of the body mass and the total port mass is between about 191 grams and about 211 grams. In a more specific embodiment, the golf club head has a moment of inertia about the head CG x-axis between about 180 kg·mm2and about 280 kg·mm2and a moment of inertia about the head CG z-axis between about 300 kg·mm2and about 450 kg·mm2. 15. Example O According to another embodiment, a golf club head has first, second, and third ports and corresponding first, second, and third weights disposed in the ports. The first weight has a head origin x-axis coordinate between about −47 mm and about −27 mm, a head origin y-axis coordinate between about 10 mm and about 30 mm, and a mass between about 6 grams and about 18 grams. The second weight has a head origin x-axis coordinate between about −30 mm and about −10 mm, a head origin y-axis coordinate between about 63 mm and about 83 mm, and a mass between about 1 gram and about 3 grams. The third weight has a head origin x-axis coordinate between about 24 mm and about 44 mm, a head origin y-axis coordinate between about 10 mm and about 30 mm, and a mass between about 1 gram and about 3 grams. The golf club head has a CG with a head origin x-axis coordinate between about −3 mm and about 3 mm and a head origin y-axis coordinate between about 20 mm and about 38 mm. In a specific embodiment, the golf club head has a volume between about 360 cm3and about 460 cm3and the sum of the body mass and the total port mass is between about 191 grams and about 211 grams. In a more specific embodiment, the golf club head has a moment of inertia about the head CG x-axis between about 180 kg·mm2and about 280 kg·mm2and a moment of inertia about the head CG z-axis between about 300 kg·mm2and about 450 kg·mm2. 16. Example P According to another embodiment, a golf club head has first, second, and third ports and corresponding first, second, and third weights disposed in the ports. The first weight has a head origin x-axis coordinate between about −47 mm and about −27 mm, a head origin y-axis coordinate between about 10 mm and about 30 mm, and a mass between about 1 gram and about 3 grams. The second weight has a head origin x-axis coordinate between about −30 mm and about −10 mm, a head origin y-axis coordinate between about 63 mm and about 83 mm, and a mass between about 6 grams and about 18 grams. The third weight has a head origin x-axis coordinate between about 24 mm and about 44 mm, a head origin y-axis coordinate between about 10 mm and about 30 mm, and a mass between about 6 grams and about 18 grams. The golf club head has a CG with a head origin x-axis coordinate between about 0 mm and about 6 mm and a head origin y-axis coordinate between about 22 mm and about 38 mm. In a specific embodiment, the golf club head has a volume between about 360 cm3and about 460 cm3and the sum of the body mass and the total port mass is between about 191 grams and about 211 grams. In a more specific embodiment, the golf club head has a moment of inertia about the head CG x-axis between about 180 kg·mm2and about 280 kg·mm2and a moment of inertia about the head CG z-axis between about 300 kg·mm2and about 450 kg·mm2. 17. Example Q According to another embodiment, a golf club head has first, second, and third ports and corresponding first, second, and third weights disposed in the ports. 
The first weight has a head origin x-axis coordinate between about −47 mm and about −27 mm, a head origin y-axis coordinate between about 10 mm and about 30 mm, and a mass between about 1 gram and about 3 grams. The second weight has a head origin x-axis coordinate between about −30 mm and about −10 mm, a head origin y-axis coordinate between about 63 mm and about 83 mm, and a mass between about 1 gram and about 3 grams. The third weight has a head origin x-axis coordinate between about 24 mm and about 44 mm, a head origin y-axis coordinate between about 10 mm and about 30 mm, and a mass between about 6 grams and about 18 grams. The golf club head has a CG with a head origin x-axis coordinate between about 0 mm and about 6 mm and a head origin y-axis coordinate between about 20 mm and about 38 mm. In a specific embodiment, the golf club head has a volume between about 360 cm3and about 460 cm3and the sum of the body mass and the total port mass is between about 191 grams and about 211 grams. In a more specific embodiment, the golf club head has a moment of inertia about the head CG x-axis between about 180 kg·mm2and about 280 kg·mm2and a moment of inertia about the head CG z-axis between about 300 kg·mm2and about 450 kg·mm2. 18. Example R According to another embodiment, a golf club head has first, second, and third ports and corresponding first, second, and third weights disposed in the ports. The first weight has a head origin x-axis coordinate between about −47 mm and about −27 mm, a head origin y-axis coordinate between about 10 mm and about 30 mm, and a mass between about 6 grams and about 18 grams. The second weight has a head origin x-axis coordinate between about −30 mm and about −10 mm, a head origin y-axis coordinate between about 63 mm and about 83 mm, and a mass between about 6 grams and about 18 grams. The third weight has a head origin x-axis coordinate between about 24 mm and about 44 mm, a head origin y-axis coordinate between about 10 mm and about 30 mm, and a mass between about 1 gram and about 3 grams. The golf club head has a CG with a head origin x-axis coordinate between about −3 mm and about 3 mm and a head origin y-axis coordinate between about 22 mm and about 38 mm. In a specific embodiment, the golf club head has a volume between about 360 cm3and about 460 cm3and the sum of the body mass and the total port mass is between about 191 grams and about 211 grams. In a more specific embodiment, the golf club head has a moment of inertia about the head CG x-axis between about 180 kg·mm2and about 280 kg·mm2and a moment of inertia about the head CG z-axis between about 300 kg·mm2and about 450 kg·mm2. 19. Example S According to another embodiment, a golf club head has first, second, third, and fourth ports and corresponding first, second, third, and fourth weights disposed in the ports. The first weight has a head origin x-axis coordinate between about −47 mm and about −27 mm, a head origin y-axis coordinate between about 10 mm and about 30 mm, and a mass between about 1 gram and about 3 grams. The second weight has a head origin x-axis coordinate between about −30 mm and about −10 mm, a head origin y-axis coordinate between about 63 mm and about 83 mm, and a mass between about 6 grams and about 18 grams. The third weight has a head origin x-axis coordinate between about 8 mm and about 28 mm, a head origin y-axis coordinate between about 63 mm and about 83 mm, and a mass between about 6 grams and about 18 grams. 
The fourth weight has a head origin x-axis coordinate between about 24 mm and about 44 mm, a head origin y-axis coordinate between about 10 mm and about 30 mm, and a mass between about 1 gram and about 3 grams. The golf club head has a CG with a head origin x-axis coordinate between about −1 mm and about 4 mm and a head origin y-axis coordinate between about 23 mm and about 40 mm. In a specific embodiment, the golf club head has a volume between about 360 cm3and about 460 cm3and the sum of the body mass and the total port mass is between about 191 grams and about 211 grams. In a more specific embodiment, the golf club head has a moment of inertia about the head CG x-axis between about 180 kg·mm2and about 280 kg·mm2and a moment of inertia about the head CG z-axis between about 300 kg·mm2and about 450 kg·mm2. 20. Example T According to another embodiment, a golf club head has first, second, third, and fourth ports and corresponding first, second, third, and fourth weights disposed in the ports. The first weight has a head origin x-axis coordinate between about −47 mm and about −27 mm, a head origin y-axis coordinate between about 10 mm and about 30 mm, and a mass between about 6 grams and about 18 grams. The second weight has a head origin x-axis coordinate between about −30 mm and about −10 mm, a head origin y-axis coordinate between about 63 mm and about 83 mm, and a mass between about 1 gram and about 3 grams. The third weight has a head origin x-axis coordinate between about 8 mm and about 28 mm, a head origin y-axis coordinate between about 63 mm and about 83 mm, and a mass between about 1 gram and about 3 grams. The fourth weight has a head origin x-axis coordinate between about 24 mm and about 44 mm, a head origin y-axis coordinate between about 10 mm and about 30 mm, and a mass between about 6 grams and about 18 grams. The golf club head has a CG with a head origin x-axis coordinate between about −1 mm and about 4 mm and a head origin y-axis coordinate between about 20 mm and about 37 mm. In a specific embodiment, the golf club head has a volume between about 360 cm3and about 460 cm3and the sum of the body mass and the total port mass is between about 191 grams and about 211 grams. In a more specific embodiment, the golf club head has a moment of inertia about the head CG x-axis between about 180 kg·mm2and about 280 kg·mm2and a moment of inertia about the head CG z-axis between about 300 kg·mm2and about 450 kg·mm2. 21. Example U According to another embodiment, a golf club head has first, second, third, and fourth ports and corresponding first, second, third, and fourth weights disposed in the ports. The first weight has a head origin x-axis coordinate between about −47 mm and about −27 mm, a head origin y-axis coordinate between about 10 mm and about 30 mm, and a mass between about 6 grams and about 18 grams. The second weight has a head origin x-axis coordinate between about −30 mm and about −10 mm, a head origin y-axis coordinate between about 63 mm and about 83 mm, and a mass between about 6 grams and about 18 grams. The third weight has a head origin x-axis coordinate between about 8 mm and about 28 mm, a head origin y-axis coordinate between about 63 mm and about 83 mm, and a mass between about 1 gram and about 3 grams. The fourth weight has a head origin x-axis coordinate between about 24 mm and about 44 mm, a head origin y-axis coordinate between about 10 mm and about 30 mm, and a mass between about 1 gram and about 3 grams. 
The golf club head has a CG with a head origin x-axis coordinate between about −3 mm and about 3 mm and a head origin y-axis coordinate between about 22 mm and about 38 mm. In a specific embodiment, the golf club head has a volume between about 360 cm3and about 460 cm3and the sum of the body mass and the total port mass is between about 191 grams and about 211 grams. In a more specific embodiment, the golf club head has a moment of inertia about the head CG x-axis between about 180 kg·mm2and about 280 kg·mm2and a moment of inertia about the head CG z-axis between about 300 kg·mm2and about 450 kg·mm2. 22. Example V According to another embodiment, a golf club head has first, second, third, and fourth ports and corresponding first, second, third, and fourth weights disposed in the ports. The first weight has a head origin x-axis coordinate between about −47 mm and about −27 mm, a head origin y-axis coordinate between about 10 mm and about 30 mm, and a mass between about 1 gram and about 3 grams. The second weight has a head origin x-axis coordinate between about −30 mm and about −10 mm, a head origin y-axis coordinate between about 63 mm and about 83 mm, and a mass between about 1 gram and about 3 grams. The third weight has a head origin x-axis coordinate between about 8 mm and about 28 mm, a head origin y-axis coordinate between about 63 mm and about 83 mm, and a mass between about 6 grams and about 18 grams. The fourth weight has a head origin x-axis coordinate between about 24 mm and about 44 mm, a head origin y-axis coordinate between about 10 mm and about 30 mm, and a mass between about 6 grams and about 18 grams. The golf club head has a CG with a head origin x-axis coordinate between about 0 mm and about 6 mm and a head origin y-axis coordinate between about 22 mm and about 38 mm. In a specific embodiment, the golf club head has a volume between about 360 cm3and about 460 cm3and the sum of the body mass and the total port mass is between about 191 grams and about 211 grams. In a more specific embodiment, the golf club head has a moment of inertia about the head CG x-axis between about 180 kg·mm2and about 280 kg·mm2and a moment of inertia about the head CG z-axis between about 300 kg·mm2and about 450 kg·mm2. 23. Example W According to another embodiment, the sole, skirt, crown, and faceplate of a golf club head are each formed from a titanium alloy. The sole has a thickness less than about 0.9 mm but greater than about 0.4 mm over at least 50% of the sole surface area; the skirt has a thickness less than about 0.8 mm but greater than 0.4 mm over at least 50% of the skirt surface area; and the crown has a thickness less than about 0.8 mm but greater than about 0.4 mm over at least 50% of the crown surface area. The areal weight of the sole, crown, and skirt, respectively, is less than about 0.45 g/cm2over at least 50% of the surface area of the respective sole, crown and skirt. The golf club head has first, second, third, and fourth ports and corresponding first, second, third, and fourth weights disposed in the ports. The first weight has a head origin x-axis coordinate between about −47 mm and about −27 mm, a head origin y-axis coordinate between about 10 mm and about 30 mm, and a mass between about 1 grams and about 18 grams. The second weight has a head origin x-axis coordinate between about −30 mm and about −10 mm, a head origin y-axis coordinate between about 63 mm and about 83 mm, and a mass between about 1 grams and about 18 grams. 
The third weight has a head origin x-axis coordinate between about 8 mm and about 28 mm, a head origin y-axis coordinate between about 63 mm and about 83 mm, and a mass between about 1 gram and about 18 grams. The fourth weight has a head origin x-axis coordinate between about 24 mm and about 44 mm, a head origin y-axis coordinate between about 10 mm and about 30 mm, and a mass between about 1 gram and about 18 grams. The golf club head has a CG with a head origin x-axis coordinate between about −3 mm and about 6 mm and a head origin y-axis coordinate between about 20 mm and about 40 mm. The golf club head has a volume between about 360 cm³ and about 460 cm³ and the sum of the body mass and the total port mass is between about 191 grams and about 211 grams. The golf club head has a moment of inertia about the head CG x-axis between about 180 kg·mm² and about 280 kg·mm² and a moment of inertia about the head CG z-axis between about 300 kg·mm² and about 450 kg·mm². The ratio of the golf club head's total weight port volume to the head volume is between about 0.001 and about 0.05, and the angle formed between the weight ports' radial axes and a golf club head impact axis is between about 10 degrees and about 80 degrees. The golf club head has a loft angle between about 6 degrees and about 16 degrees, a lie angle between about 55 degrees and about 65 degrees, and a coefficient of restitution greater than 0.8. The ratio of the golf club head's total weight port mass to the body mass is between about 0.019 and about 0.3, and a maximum weight mass minus a minimum weight mass multiplied by the distance between the maximum weight and the minimum weight is between about 950 g·mm and about 14,250 g·mm. Additionally, a ratio of the golf club head's total weight mass to the sum of the body mass plus the total weight port mass is between about 0.05 and about 1.25. 24. Preferred Embodiment According to a preferred embodiment, the sole, skirt, crown, and faceplate of a golf club head are each formed from a titanium alloy. The sole has a thickness less than about 0.9 mm but greater than about 0.4 mm over at least 50% of the sole surface area; the skirt has a thickness less than about 0.8 mm but greater than 0.4 mm over at least 50% of the skirt surface area; and the crown has a thickness less than about 0.8 mm but greater than about 0.4 mm over at least 50% of the crown surface area. The areal weight of the sole, crown, and skirt, respectively, is less than about 0.45 g/cm² over at least 50% of the surface area of the respective sole, crown and skirt. The golf club head has first, second, third, and fourth ports and corresponding first, second, third, and fourth weights disposed in the ports. The first weight has a head origin x-axis coordinate between about −33 mm and about −27 mm, a head origin y-axis coordinate between about 14 mm and about 18 mm, a head origin z-axis coordinate between about −18 mm and about −14 mm, and a mass between about 1 gram and about 18 grams. The second weight has a head origin x-axis coordinate between about 28 mm and about 36 mm, a head origin y-axis coordinate between about 14 mm and about 18 mm, a head origin z-axis coordinate between about −16 mm and about −12 mm, and a mass between about 1 gram and about 18 grams.
The third weight has a head origin x-axis coordinate between about 9 mm and about 13 mm, a head origin y-axis coordinate between about 98 mm and about 120 mm, a head origin z-axis coordinate between about 8 mm and about 10 mm, and a mass between about 1 gram and about 18 grams. The fourth weight has a head origin x-axis coordinate between about 9 mm and about 13 mm, a head origin y-axis coordinate between about 98 mm and about 120 mm, a head origin z-axis coordinate between about −21 mm and about −17 mm, and a mass between about 1 gram and about 18 grams. The golf club head has a CG with a head origin x-axis coordinate between about −3 mm and about 6 mm, a head origin y-axis coordinate between about 20 mm and about 40 mm, and a head origin z-axis coordinate between about −6 mm and about 1 mm. The golf club head has a volume between about 360 cm3and about 460 cm3and the sum of the body mass and the total port mass is between about 191 grams and about 211 grams. The golf club head has a moment of inertia about the head CG x-axis between about 180 kg·mm2and about 430 kg·mm2and a moment of inertia about the head CG z-axis between about 300 kg·mm2and about 560 kg·mm2. The ratio of the golf club head's total weight port volume to the head volume is between about 0.001 and about 0.05, and the angle formed between the weight ports' radial axes and a golf club head impact axis is between about 10 degrees and about 80 degrees. The golf club head has a loft angle between about 6 degrees and about 16 degrees, a lie angle between about 55 degrees and about 65 degrees, and a coefficient of restitution greater than 0.8. The ratio of the golf club head's total weight port mass to the body mass is between about 0.019 and about 0.3, and a maximum weight mass minus a minimum weight mass multiplied by the distance between the maximum weight and the minimum weight is between about 950 g·mm and about 14,250 g·mm. Additionally, a ratio of the golf club head's total weight mass to the sum of the body mass plus the total weight port mass is between about 0.05 and about 1.25. Various other designs of club heads and weights may be used, such as those disclosed in Applicant's U.S. Pat. No. 6,773,360 or those disclosed in other related applications. Furthermore, other club head designs known in the art can be adapted to take advantage of features of the present invention. In some disclosed examples, four weight ports are provided, but in other examples, one, two, three, four, or more weight ports can be provided and weight assemblies, weight screws, or other weights can be selected for use in these weight ports. For example, a club head can be provided with weight ports situated at a club toe and a club heel, respectively, and a third weight port situated at or near a club head crown. This weight port at the crown and the associated weights can be configured to adjust a vertical and horizontal location of a club head center of gravity. In some disclosed examples, vertical adjustment of club head center of gravity permits selection, control, or compensation of “dynamic loft.” Dynamic loft is essentially the difference between the effective loft at impact and the static loft angle at address. Dynamic loft can result from, for example, distortions in a club shaft produced by a golfer's swing. 
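Before considering the effects of such CG displacement, the following minimal Python sketch illustrates roughly how redistributing the movable weights among toe, heel, and rear crown ports shifts the head center of gravity. The function name, port coordinates, body mass, body CG, and weight masses below are all hypothetical and are not taken from any embodiment above.

# Rough illustration of how redistributing movable weights shifts the head CG.
# All coordinates (head origin, mm), masses, and the body CG are hypothetical.

def head_cg(body_mass_g, body_cg_mm, weights):
    """weights: list of (mass_g, (x, y, z) position in mm). Returns combined CG."""
    total_mass = body_mass_g + sum(m for m, _ in weights)
    cg = []
    for axis in range(3):
        moment = body_mass_g * body_cg_mm[axis]
        moment += sum(m * pos[axis] for m, pos in weights)
        cg.append(round(moment / total_mass, 2))
    return tuple(cg)

body_mass, body_cg = 200.0, (0.0, 30.0, -3.0)
toe, heel, back = (-35.0, 20.0, -15.0), (35.0, 20.0, -15.0), (0.0, 100.0, 5.0)

heavy_toe = [(14.0, toe), (2.0, heel), (2.0, back)]
heavy_heel = [(2.0, toe), (14.0, heel), (2.0, back)]
print("heavy toe weight :", head_cg(body_mass, body_cg, heavy_toe))
print("heavy heel weight:", head_cg(body_mass, body_cg, heavy_heel))
# Swapping the 14 g and 2 g weights between toe and heel moves the CG x-coordinate
# by roughly (14 g - 2 g) * 70 mm / 218 g, i.e. about 3.9 mm, with y and z unchanged.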
Deliberate vertical displacement of the club head center of gravity can result in striking face impact locations that tend to be vertically displaced from a horizontal plane containing a club head center of gravity so that a club head tends to rotate about the club head center-of-gravity (CG) x-axis. Such club head rotations about the CG x-axis tend to change dynamic loft and to produce corresponding vertical ball spins, such as varying degrees of backspin. This induced vertical spin is produced in a manner similar to the horizontal or side spin that results from the so-called "gear effect" produced by horizontal off-center hits. For example, moving a club head center of gravity vertically tends to change the amount of backspin on the launched ball. When a club head center of gravity is located low in a club head, a golf ball tends to impact the head above the center of gravity, resulting in a backward or upward rotation of the club head, thereby reducing backspin. Such head rotation also tends to increase dynamic loft by launching the ball at a higher angle than a resting loft angle. When a club head center of gravity is located high in the club head, a golf ball tends to impact the head below the center of gravity, resulting in a downward or forward rotation of the club head. Such rotation tends to increase backspin via the gear effect and to reduce dynamic loft. Moving a club head center of gravity back from the face of the club head tends to increase the gear effect in the vertical and horizontal directions. Both spin and loft can be associated with ball trajectory and can be adjusted through movement of a club head center of gravity. Through selective vertical and horizontal displacements of a club head center of gravity, ball spin and ball launch angle can be selected independently, and clubs providing dynamic loft adjustments permit players to more fully customize shot characteristics. For example, spin and launch angle can be decoupled when a club head center of gravity is adjusted simultaneously in horizontal and vertical directions. In some embodiments, adjusting a club head center of gravity to a position in the back of the club head increases dynamic loft. Such an effect can be compensated by also moving the center of gravity upwards, which decreases the launch angle. For a representative club head having a volume of 407 cm³ and 21 g of movable weight, about 5 mm of backward (from the face) CG displacement is associated with a launch angle increase of about 0.8 degrees, while launch angle is decreased by about 0.2 degrees for each 1 mm of vertically upwards CG displacement. Thus, approximately 1.25 mm of vertical CG movement coupled with approximately 1.56 mm of horizontal center of gravity movement results in an increase in backspin accompanied by essentially no change in launch angle. In the disclosed embodiments, three or four weight ports are provided. In one example, three weight ports are arranged in a club sole so as to define a generally isosceles triangle and a fourth weight port is located in the crown. In a typical arrangement with about 21 g of movable weight for distribution in the weight ports, front-to-back CG movement is about 33.5 mm to about 41.5 mm from an approximate center of the face plate. Toe-to-heel CG movement can be about 0.2 mm to about 5.1 mm with respect to face center, and the CG can be displaced from about 0.9 mm below to about 1.7 mm above the face center.
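The trade-off quoted above can be checked numerically. The sketch below treats both sensitivities as linear, which is how the passage uses them; the constant and function names are ours, and only the two stated sensitivities are taken from the text.

# Numerical check of the launch-angle trade-off described above, using the
# sensitivities stated in the text: about +0.8 degrees per 5 mm of rearward CG
# movement and about -0.2 degrees per 1 mm of upward CG movement.

BACKWARD_DEG_PER_MM = 0.8 / 5.0   # +0.16 degrees of launch per mm of rearward CG shift
UPWARD_DEG_PER_MM = -0.2          # -0.20 degrees of launch per mm of upward CG shift

def launch_angle_change(backward_mm, upward_mm):
    return BACKWARD_DEG_PER_MM * backward_mm + UPWARD_DEG_PER_MM * upward_mm

# The combination quoted in the text: about 1.56 mm rearward plus 1.25 mm upward.
print(round(launch_angle_change(1.56, 1.25), 3))  # approximately 0.0 degrees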
Having illustrated and described the principles of the disclosed embodiments, it will be apparent to those skilled in the art that the embodiments can be modified in arrangement and detail without departing from such principles. In view of the many possible embodiments, it will be recognized that the described embodiments include only examples and should not be taken as a limitation on the scope of the invention. Rather, the invention is defined by the following claims. We therefore claim as the invention all possible embodiments and their equivalents that come within the scope of these claims.
123,353
11857853
DESCRIPTION The invention described herein is a convertible strap system for a golf bag. The strap system can be convertible between a single-strap configuration and a double-strap configuration. The strap system can comprise a first strap 102, a second strap 108, and a back puck 100. The back puck 100 can orient the first strap 102 and the second strap 108 in relation to each other in the double-strap configuration. The first strap 102 can be permanently engaged with the back puck 100, whereas the second strap 108 can be removably engaged with the back puck 100. In the single-strap configuration, the second strap 108 can be disengaged from the back puck 100. In the double-strap configuration, the second strap 108 can be translationally engaged with the back puck 100. As illustrated in FIGS. 1, 12, and 13, the first strap 102 can be discontinuous. The first strap 102 can comprise a first section 104 and a second section 106. The back puck 100 can be connected between the first section 104 and the second section 106 of the first strap 102. The first section 104 can comprise a first end and a first attachment end 110. The first end can be coupled to the golf bag. In some embodiments, the first end is coupled to a back of the golf bag, offset towards a right side of the golf bag. The first attachment end 110 can be coupled to the back puck 100. In some embodiments, the first attachment end 110 is permanently coupled, attached, or sewn onto the back puck 100, and/or removably attached with snap-fit or other detachable coupling mechanisms. The second section 106 can comprise a second end and a second attachment end 112. The second end can be coupled to the golf bag. The second attachment end 112 can be coupled to the back puck 100. In some embodiments, the second attachment end 112 is permanently coupled, attached, or sewn onto the back puck 100, and/or removably attached with snap-fit or other detachable coupling mechanisms. In some embodiments, the second end can be coupled to the back of the golf bag, offset towards a left side of the golf bag. In some embodiments, the first and second ends of the first strap 102 can be configured to be removable from the golf bag. In some embodiments, the first strap 102 further comprises a padded portion. The discontinuity of the first strap 102 prevents the first strap 102 from rubbing against and creating friction with the second strap 108. The second strap 108 can slide freely through the back puck 100, without being hindered by the crossing of the first strap 102, which is attached to edges of the back puck 100. However, in some embodiments (not shown), the first strap 102 can be continuous, so long as the second strap 108 is positioned below the first strap 102 in a channel, so that the second strap 108 does not contact the first strap 102. The second strap 108 can be continuous. The second strap 108 can comprise a first end and a second end. The first and second ends can be coupled to the golf bag. The first end of the second strap 108 can be coupled to the back of the golf bag, offset towards the left side of the golf bag. The second end of the second strap 108 can be coupled to the back of the golf bag, offset towards the right side of the golf bag. In some embodiments, the first and second ends of the second strap 108 can be configured to be removable from the golf bag. In some embodiments, the second strap 108 further comprises a padded portion. As illustrated in FIGS. 8 and 12, the first and second straps 102, 108 comprise a strap width 180 and a strap thickness 182. The back puck 100 can configure the first and second straps 102, 108.
As illustrated in FIG. 2, the back puck 100 can comprise a central body 114, a first side 120, a second side 122, a front, and a rear. The central body 114 can comprise a top 116 and a bottom 118. The top 116 can comprise a first attachment opening 152 for receiving the first attachment end 110 of the first strap 102. The first attachment opening 152 can be cut from the central body 114 such that a plane extending through the first attachment opening 152 can be orthogonal to a plane extending through the central body 114. The bottom 118 can comprise a second attachment opening 154 for receiving the second attachment end 112 of the first strap 102. The second attachment opening 154 can be cut from the central body 114 such that a plane extending through the second attachment opening 154 can be orthogonal to a plane extending through the central body 114. In some embodiments, the first and/or second attachment ends 110, 112 of the first strap 102 can be looped through the first and/or second attachment openings 152, 154 and secured back onto the first strap 102 by stitching. In some embodiments, the central body 114 of the back puck 100 can comprise a logo or emblem 190. The logo or emblem 190 can be embossed, printed, or cut through the central body 114. In the illustrated embodiment, the logo 190 is cut through the central body 114. The first and second sides 120, 122 of the back puck 100 can be configured to removably receive the second strap 108. As illustrated in FIGS. 2, 4, and 10-13, the first side 120 and the second side 122 can be angled downward from the central body 114 towards the rear of the puck. In some embodiments, the first and second sides 120, 122 can be angled downward from the central body 114 at equal angles. As illustrated in FIG. 10, the first side 120 can be angled downward from the central body 114 at a first side angle 160 between 10 and 90 degrees. The second side 122 can be angled downward from the central body 114 at a second side angle 162 between 10 degrees and 90 degrees. The first side angle 160 and/or the second side angle 162 can be between 10 and 20 degrees, 20 and 30 degrees, 30 and 40 degrees, 40 and 50 degrees, 50 and 60 degrees, 60 and 70 degrees, 70 and 80 degrees, or 80 and 90 degrees. Referring to FIGS. 3 and 5, the first side 120 can comprise a first top corner 124, a first bottom corner 126, and a first arm 132. The first arm 132 can comprise a top first arm portion 136 and a bottom first arm portion 138. The first arm 132 can be discontinuous such that the space between the top first arm portion 136 and the bottom first arm portion 138 defines a first slit 144. The top first arm portion 136 can connect to and extend from the first top corner 124. The bottom first arm portion 138 can connect to and extend from the first bottom corner 126. The second side 122 can comprise a second top corner 128, a second bottom corner 130, and a second arm 134. The second arm 134 can comprise a top second arm portion 140 and a bottom second arm portion 142. The second arm 134 can be discontinuous such that the space between the top second arm portion 140 and the bottom second arm portion 142 defines a second slit 146. The top second arm portion 140 can connect to and extend from the second top corner 128. The bottom second arm portion 142 can connect to and extend from the second bottom corner 130. The first slit 144 and the second slit 146 allow the second strap 108 to be engaged with or disengaged from the back puck 100. In other words, the first and second slits 144, 146 in the first and second arms 132, 134, respectively, allow the strap system to convert between the single-strap configuration and the double-strap configuration.
The first side120can define a first side opening148, configured to receive the second strap108. The first top corner124, the first bottom corner126, the first arm132, and the central body114of the back puck100can form boundaries for the first side opening148. The first arm132can define an outer edge of the first side opening148. The first slit144can open into the first side opening148. The second side122can define a second side opening150, configured to receive the second strap108. The second top corner128, the second bottom corner130, the second arm134, and the central body114can form boundaries for the second side opening150. The second arm134can define an outer edge of the second side opening150, and the second slit146can open into the second side opening150. Referring toFIGS.3and7, the first side opening148comprises a first side opening width164and a first side opening height168. Referring toFIGS.3and6, the second side opening150comprises a second side opening width166and a second side opening height170. The first side opening width164and the second side opening width166may be the same width. The first side opening height168and the second side opening height170may be the same height. The first side opening width164and second side opening width166are in a range of 20 mm to 30 mm. The first side opening width164and second side opening width166can be between 20 mm and 22 mm, 22 mm and 24 mm, 24 mm and 26 mm, 26 mm and 28 mm, or 28 mm and 30 mm. In some embodiments, the first and/or second side opening widths164,166can be 20 mm, 21 mm, 22 mm, 23 mm, 24 mm, 25 mm, 26 mm, 27 mm, 28 mm, 29 mm, or 30 mm. The first side opening width164and second side opening width166are greater than the second strap width180. The first side opening height168and the second side opening height170are in a range of 2 mm to 8 mm. The first side opening height168and the second side opening height170can be between 2 mm and 3 mm, 3 mm and 4 mm, 4 mm and 5 mm, 5 mm and 6 mm, 6 mm and 7 mm, or 7 mm and 8 mm. In some embodiments, the first and/or second side opening heights168,170can be 2 mm, 3 mm, 4 mm, 5 mm, 6 mm, 7 mm, or 8 mm. Referring toFIGS.3and6-8, the first side opening148and the second side opening150are sized to receive the second strap108. The first and second side opening widths164,166are greater than the second strap width180. The first and second side opening heights168,170are greater than the second side strap thickness182. The first and second side opening widths164,166and heights168,170allow the second strap108to fit comfortably within and slide freely through the first and second side openings148,150. In other words, the first and second side opening widths164,166and heights168,170have values that allow the second strap108to move within the first and second side openings148,150unhindered and unrestrained in the direction from the first side opening148to the second side opening150. This free movement of the second strap108allows the golf bag to self-adjust to a user's posture when the strap system is in the double-strap configuration. As illustrated inFIGS.6and7, the first slit144and the second slit146comprise a slit width172. The slit width172can be measured perpendicularly from a plane tangent to an end of the top arm portion136or140to a plane tangent to an end of the bottom arm portion138or142, respectively. The slit width172is in a range of 0.5 mm to 5 mm. 
The slit width172can be between 0.5 mm and 0.7 mm, 0.7 mm and 0.9 mm, 0.9 mm and 1.1 mm, 1 mm and 1.5 mm, 1.5 mm and 2 mm, 2 mm and 3 mm, 3 mm and 4 mm, or 4 mm and 5 mm. In some embodiments, the slit width172can be 0.5 mm, 0.6 mm, 0.7 mm, 0.8 mm, 0.9 mm, or 1.0 mm. The slit width172is greater than the second strap thickness182. The first slit144and second slit146allow for insertion and removal of the second strap108from the first side opening148and the second side opening150, respectively. As illustrated inFIGS.6-9, in some embodiments, the first slit144can be closer to the top116than the bottom118of the back puck100, and the second slit146can be closer to the bottom118than the top116of the back puck100. The top first arm portion136can be shorter than the bottom first arm portion138. The top second arm portion140can be longer than the bottom second arm portion142. The position of the first slit144and the second slit146as defined by the lengths of the arm portions affects the ability of the back puck100to retain the second strap108without it slipping out when the golf bag is in the double-strap configuration. As illustrated inFIG.16, when the golf bag is lifted by the second strap108when in the double-strap configuration, the material of the second strap108can constrict within the first side opening148and the second side opening150. Within the first side opening148, the second strap108can constrict towards the first top corner124at the top116of the puck100. The location of the first slit144closer to the top116(and the first top corner124) than the bottom118(and the first bottom corner126) can prevent an edge of the second strap108from slipping out when the strap108is bunched up. Within the second side opening150, the second strap108can constrict towards the second bottom corner130at the bottom118of the puck100. The location of the second slit146closer to the bottom118(and the second bottom corner130) than the top116(and the second top corner128) can prevent an edge of the second strap108from slipping out when the strap108is bunched up. Therefore, the lengths of the top first arm portion136, bottom first arm portion138, top second arm portion140, and bottom second arm portion142can prevent the second strap108from slipping out through the first and second slits144,146. This security helps loosely retain the second strap108within the back puck, so that the second strap108is slidably connected to the first strap102. Referring toFIG.5, the first and/or second slit144,146can be angled with respect to the first and/or second arm132,134, respectively. In some embodiments, the first and/or second slit144,146can be angled roughly parallel to a reference line174drawn from the first top corner124of the puck100to the second bottom corner130of the puck100. In some embodiments, the first and/or second slit144,146can comprise any angle suitable for insertion and removal of the second strap108. In some embodiments, a longitudinal axis178is defined in a direction from the first side120to the second side122, and centered between the top116and bottom118of the back puck, as taken from the rear view. A first slit reference line145runs parallel through the first slit, as taken from the rear view. A second slit reference line147runs parallel through the second slit, as taken from the rear view. The first slit144is angled at a first slit angle θ1, which is measured counterclockwise from the longitudinal axis178to the first slit reference line145. 
The second slit 146 is angled at a second slit angle θ2, which is measured counterclockwise from the longitudinal axis 178 to the second slit reference line 147. The first slit angle θ1 can be equal to the second slit angle θ2. In some embodiments, the first slit angle θ1 and/or the second slit angle θ2 have a value between 0 and 80 degrees. In some embodiments, the first slit angle θ1 and/or the second slit angle θ2 is between 0 and 10 degrees, 10 and 20 degrees, 20 and 30 degrees, 30 and 40 degrees, 40 and 50 degrees, 50 and 60 degrees, 60 and 70 degrees, or 70 and 80 degrees. The angulation of the first and second slits 144 and 146 helps prevent the second strap 108 from inadvertently falling out of the back puck (exiting the first and/or second slit 144, 146) in the double-strap configuration, while also allowing the second strap to be quickly removed to convert the strap system to the single-strap configuration. The design of the first and second slits 144, 146 allows quick and versatile conversion and configuration of the strap system. The first side opening 148 and the second side opening 150 can be configured to removably receive the second strap 108 of the golf bag. As shown in FIGS. 7 and 11, a linear pathway 158 can extend through the first side opening 148 and the second side opening 150. In other words, the linear pathway comprises the space directly between the first side opening 148 and the second side opening 150. No part of the back puck 100 intersects the linear pathway. The pathway comprises a pathway width having the same width as the first side opening width 164 and second side opening width 166. Referring to FIGS. 4, 5, 7, and 9, in some embodiments, a channel 156 can be cut into the central body 114. The channel 156 can run parallel to the linear pathway 158. In some embodiments, the linear pathway 158 runs through the channel 156. The channel 156 can extend from the first side opening 148 to the second side opening 150. The channel 156 can be as wide as the first side opening 148 and the second side opening 150. The channel 156 can be cut or recessed into the face of the central body 114, such that the plane of the channel 156 is parallel to the plane of the central body 114. The channel 156 can have a certain depth 176. The depth 176 of the channel 156 can be less than the thickness of the central body 114. In some embodiments, the channel depth 176 can be between 0 mm and 3 mm. In some embodiments, the channel depth 176 can be between 0 mm and 0.5 mm, 0.5 mm and 1 mm, 1 mm and 1.5 mm, 1.5 mm and 2 mm, 2 mm and 2.5 mm, or 2.5 mm and 3 mm. The first side opening 148, the second side opening 150, and the channel 156 of the back puck 100 are configured to allow free movement of the second strap 108 along the linear pathway 158. In the single-strap configuration, the first strap 102 can be independent from the second strap 108. In other words, the second strap 108 can be disengaged from the back puck 100. The back puck 100 can be held and fixed between the first and second sections 104, 106 of the first strap 102. In the double-strap configuration, the second strap 108 can be engaged with the back puck 100. The second strap 108 can run along the channel 156 and/or the linear pathway 158 cut through the central body 114 and bounded by the first side opening 148 and second side opening 150 of the back puck 100.
The second strap 108 is configured to slide along the channel 156 having no bends, folds, or turns, and without resistance or clamping, such that the second strap 108 is not fixed in position to the back puck 100 along the linear pathway 158 between the first side opening 148 and the second side opening 150. The sliding movement of the second strap 108 allows the weight of the golf bag to be automatically distributed (self-adjusted) between both the first and second straps 102, 108 without the user adjusting the length of either strap. In the double-strap configuration, the back puck 100 restricts the second strap 108 to some degree in every direction other than the direction of the channel 156. By retaining the second strap 108 adjacent the first strap 102, the back puck 100 keeps the straps oriented in a configuration that (1) can be worn over both shoulders and (2) evenly distributes the weight of the golf bag. In the double-strap configuration, the first strap 102 and the second strap 108 can be oriented perpendicular to one another by the back puck 100. This crisscrossing setup of the first strap 102 and the second strap 108, connected by the back puck 100, allows the user to not only easily position the golf bag on his or her back, but also allows the user to walk and move without tangling or shifting the straps 102, 108 into an undesirable position. As described above, the strap assembly can be used in a single-strap configuration, such as is illustrated in FIG. 14, or in a double-strap configuration, as illustrated in FIG. 15. To convert the strap assembly from the single-strap configuration to the double-strap configuration, the second strap 108 is engaged with the back puck 100. Referring to FIG. 17, engaging the second strap 108 with the back puck 100 comprises inserting an edge of the second strap 108 into the first slit 144 on the first side 120 of the back puck 100. The second strap 108 can then be fed fully through the first slit 144 into the first side opening 148, which requires some temporary bunching of the second strap 108 material. The second strap 108 can then be allowed to spread out into the first side opening 148, and the first arm 132 holds the second strap 108 within the first side opening 148. Next, another portion of the second strap 108 can be inserted into the second slit 146 on the second side 122 of the back puck 100. The second strap 108 can then be fed fully through the second slit 146 and secured within the second side opening 150 in a manner similar to the insertion of the second strap 108 into the first side opening 148. The second arm 134 holds the second strap 108 within the second side opening 150. Upon completion of the insertion of the second strap 108 into the first and second side openings 148, 150, the second strap 108 can lie along the linear pathway 158 and experiences no resistance to motion along the linear pathway 158. To convert the strap assembly from the double-strap configuration to the single-strap configuration, the second strap 108 can be disengaged by reversing the above insertion process. The second strap 108 can be pulled laterally through the first and/or second slit 144, 146 to remove the second strap from the first side opening 148 and/or the second side opening 150. In some embodiments of the convertible strap system, the second strap 108 can be configured to be fully removable from the golf bag, allowing the user to configure the golf bag more permanently in a single-strap configuration. In these embodiments, the second strap 108 can be removed to simplify the bag, lighten the bag, and improve aesthetics.
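The dimensional relationships described earlier (side openings wider than the second strap and taller than its thickness, slits wider than the strap thickness) are what make this insertion and free sliding possible. The following is a minimal Python sketch of that feasibility check; the function name and the example dimensions are hypothetical and not taken from any embodiment above.

# Quick feasibility check of the dimensional relationships described above:
# the side openings must be wider than the second strap and taller than its
# thickness, and the slits must be wider than the strap thickness so the strap
# can be worked in and out. The example dimensions below are hypothetical.

def puck_fits_strap(opening_width_mm, opening_height_mm, slit_width_mm,
                    strap_width_mm, strap_thickness_mm):
    return (opening_width_mm > strap_width_mm
            and opening_height_mm > strap_thickness_mm
            and slit_width_mm > strap_thickness_mm)

# Hypothetical values: 25 mm x 5 mm openings, 0.8 mm slits,
# and a 22 mm wide, 0.6 mm thick webbing strap.
print(puck_fits_strap(25.0, 5.0, 0.8, 22.0, 0.6))  # True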
The convertible strap system can provide the user with more versatility in how he or she carries the golf bag. The convertible strap system can reduce fatigue from carrying the golf bag by allowing the user to adapt the strap system to the user's needs. In addition, the convertible strap system provides a solution for caddies who desire to carry two golf bags by placing a single strap of each bag on each shoulder. Additionally, the convertible strap system is simple, requiring no tools for the conversion between the single-strap and double-strap configurations. The method of engaging or disengaging the second strap108with the back puck100can be understood without detailed instructions. Together, these features make the convertible strap system an effective solution to the need in the art.
21,594
11857854
DETAILED DESCRIPTION The following detailed description illustrates embodiments of the disclosure and manners by which they can be implemented. Although the preferred mode of carrying out disclosed systems, endcaps, pallets and methods has been described, those of ordinary skill in the art would recognize that other embodiments for carrying out or practicing disclosed systems, endcaps, pallets and methods are also possible. It should be noted that the terms “first”, “second”, and the like, herein do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. Further, the terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced item. Modern swingable sports equipment or implements, such as rackets for tennis, racquetball, squash, badminton, pickleball and padel as well as table tennis paddles typically include a head or blade portion coupled to a bar handle portion. Performing the swing with a conventional bar handle requires a user grip the handle with a considerable amount of gripping force to prevent the racket from sliding or twisting. Known attempts to improve swing performance while allowing a user a more relaxed grip on the handle aim to support a user's fifth metacarpal with a broad extension. Other known attempts aim to lock the user's hand to the handle with an extension which curves through an arc of greater than 90 degrees. Embodiments of the disclosure provide an improvement for sporting equipment handles. Embodiments of the disclosure substantially eliminate, or at least partially address, problems in the prior art, preventing sliding or twisting of a user's hand on a racket handle by supporting a user's fifth proximal phalanx with a relatively narrow extension. Embodiments of the disclosure may also provide a pivot point for swinging of the sporting equipment vertically. Embodiments of the disclosure can be applied to many swingable items, including but not limited to rackets for tennis, racquetball, squash, badminton, pickleball and padel as well as table tennis paddles. Additional aspects, advantages, features and objects of the disclosure will be made apparent from the drawings and the detailed description of the illustrative embodiments construed in conjunction with the appended claims that follow. It will be appreciated that described features are susceptible to being combined in various combinations without departing from the scope of the disclosure as defined by the appended claims. Handle assemblies of modern rackets typically includes an inner shaft or core, a pallet and a grip. In some cases, such as tennis, rackets include a throat portion coupling the handle portion to the blade portion. The pallet is an outer region which is typically positioned or applied over the shaft. This type of handle assembly may be terminated by a cap or endcap also commonly referred to as a buttcap. FIGS.1-5&11-13illustrate an example endcap100for a sporting implement. Endcap100includes a base portion (110and150) and a transverse extension160projecting from the base portion. Extension160includes a tip distal from the base which may be rounded. The base portion has a bottom surface130with a longitudinal or central axis101normal thereto (FIG.12). A receptacle170or hollow interior opposite bottom surface130is configured to receive a portion of the sporting implement such as handle pallets. 
Receptacle170may be configured with the same internal shape as an exterior surface of the portion of the sporting implement to which it will be applied, be it a handle pallet or a handle shaft. In an example, extension160includes an inside surface164facing generally towards central axis101and an outside surface162opposite the inside surface. Inside surface164faces towards central axis101in contrast with facing in the same direction as central axis101or away from central axis101and may also be considered to face away from bottom surface130. Outside surface162faces generally away from central axis101in contrast with facing towards central axis101or in the same direction as central axis101and may also be considered to face generally in the same direction as bottom surface130. Inside surface164may be generally smooth without corners or discontinuities that might cause discomfort to the hand of a user. Further, inside surface164may be concave while outside surface162is convex (FIGS.3&4). A grip such as a tape may be wrapped around extension160to adjust surface texture. In an example, the center of curvature of extension160is directly above the connection point of extension160to the base portion of endcap100. Similarly, a tangent line to extension outside surface162, perpendicular or transverse to central axis101, intersects outside surface162at the connection point of extension160to the base portion of endcap100(FIGS.3-5). Extension160is configured such that, during swinging of the sporting implement with a hand of a user gripping around the base portion (110and150), the extension contacts an exterior lateral portion and upper portion of the fifth, fourth or third proximal phalanxes of the user's hand (FIG.27). The hand is constrained in both the transverse and longitudinal aspects while being allowed rotation of a first metacarpal of the hand away from central axis101. In an example, extension160curves towards central axis101. Referring toFIG.5, a continuous inside surface164sweeps through an arc away from the rigid base portion bottom surface130. Critically, this arc is subtended by angle α, which measures less than 90 degrees. With this arrangement, extension160does not obstruct change of grip on the sporting implement and/or endcap100, does not prevent swinging of the sporting implement and does not prevent wrist flicking of the sporting implement, particularly when the sporting implement is a racket. In an example, extension160sweeps through an arc such that α measures 60 degrees. Endcap100may take any of a variety of forms suitable for use in association with a sporting implement. In an example, endcap100may be shaped with eight bevels (flats) or elongate, generally planar regions. Eight elongate ridges are formed between the bevels along the length. In an example, endcap100exhibits an octagonal cross-section with a flare at the base. In a further example wherein endcap100is used in association with a racket, extension160is centered on the second, fourth, sixth or eighth bevel with the racket webbing and/or blade aligned with the first and fifth bevels. FIGS.6-10represent a variety of alternative cross-sections that section plane A ofFIG.4may yield. The cross-sectional view ofFIG.6reflects a non-zero width ω1at the longitudinal cross-section maximum height. The cross-sectional view ofFIG.7reflects a width ω1of zero at the longitudinal cross-section maximum height. 
The cross-sectional view ofFIG.8reflects a non-zero width ω1at the longitudinal cross-section maximum height which is less than the width ω2at the longitudinal cross-sectional minimum height. While not a preferred form, the cross-sectional view ofFIG.9reflects a non-zero width ω1at the longitudinal cross-section maximum height which is greater than the width ω2at the longitudinal cross-sectional minimum height. While also not a preferred form, the cross-sectional view ofFIG.10reflects a non-zero width ω1at the longitudinal cross-section maximum height and a width ω2of zero at the longitudinal cross-section minimum height. With reference toFIG.11, rigid transverse extension160has a critical width at a longitudinal cross-section maximum height that is no greater than 75% of a width δ of the base portion. At these relative dimensions, the width of surface164of the transverse extension which contacts the user's fifth finger does not exceed the length of the fifth proximal phalanx. As such, pressure on the joints of the finger is avoided to improve comfort and reduce risk of injury. Further, this width will not impede swift change of orientation of the grip on the endcap. In another example wherein endcap100is used in association with a racket, extension160is rotated from a plane of a racket blade by between 50 and 90 degrees around central axis101. In a further example, extension160is rotated from the plane of the racket blade by about 70 degrees. In another example, the degree of rotation from the plane of the racket blade may be varied by degree of rotation of endcap100relative to a handle core. Endcap100may be formed from any of a variety of rigid, lightweight materials including but not limited to polyurethane and other polymers, nylon and composite materials such as graphite sheets or grafil. At least one tab121(FIG.1) may be provided to receptacle170and configured to engage one of a series of notches provided to an exterior surface of a handle pallet. The base portion may include first110and second150mating shells.FIG.13illustrates an exploded view of the example endcap ofFIGS.1-4as it may be coupled with an example pallet or pallet assembly300. A clip200may be provided to first and second mating shells110and150to secure the same together on handle pallet300with ridges121engaged with slots321. With shells110and150aligned and placed against the pallet so as to mate with slots321through ridges121, clip200may be inserted through a slot in the bottom of the base portion until constraining opening210is clicked into place around mated half posts (not visible) of the respective shells110and150. Endcap100may be coupled or attached to a pallet or pallets in any of a variety of alternatives to tabs of receptacle170engaging with slots on a pallet or pallets. In another example, endcap100and a pallet or pallets may be provided with notches or holes for engaging with separate removable tabs or pegs. In another example, the base portion may further include, extending from a rim of receptacle170, one or more resilient arm members or tabs with abutment surfaces configured to engage one or more slot perforations of the handle pallet. In this example, the resilient arms clip the endcap to the handle pallet. In another example, the cooperative parts are reversed such that slots are provided on/in the endcap and arms/tabs are provided to the handle pallet. 
In another example, the endcap may be coupled to the handle pallet by a bolt inserted through one or more holes through the base portion transverse to the central axis101. WhileFIGS.1-5&11-13reflect a plane of split190of endcap halves110and150on the fourth and eighth bevels and being spaced from extension160by one bevel, endcap halves110and150may be split along any bevels with the split being spaced from extension160by more or fewer bevels. In a further example, split plane190may be between two edges of endcap100. In another example, endcap100may be divided into more than two pieces. FIGS.14-17illustrate an example pallet system400configured for positioning over a shaft of a sporting implement. Pallet system400includes a prism configured to house and grip a handle core, hairpin or shaft of the sporting implement and a transverse extension460. The prism has a first end431with a center, a distal second end435with a center and a central axis401defined between the center of the first end and the center of the second end. An exterior surface defined between first and second ends surrounds central axis401. First end431is configured to receive an endcap. Extension460is coupled to the prism at a base and includes a tip distal from the base. The tip may be rounded. Extension460projects from the exterior surface near first end431and is configured such that during swinging of the sporting implement with a hand of a user gripping around the prism, the extension contacts the hand on an exterior lateral portion and upper portion of a fifth, fourth or third proximal phalanx of the hand. The hand is constrained in both transverse and longitudinal aspects while being allowed rotation of a first metacarpal thereof away from central axis401. In an example, extension460may further include an inside surface464facing generally towards second end435and/or central axis401. An outside surface462opposite inside surface464faces generally away from second end435and/or central axis401. Inside surface464may be generally smooth without corners or discontinuities that might cause discomfort to the hand of a user. In another example, extension460may have a concave surface464facing towards the central axis401. In an example, the center of curvature of extension460is directly above the connection point of extension460to one of the two half shells410and450. Similarly, a tangent line to extension outside surface462and perpendicular or transverse to central axis401, intersects outside surface462at the connection point of extension460to half shell410or half shell450(FIGS.16&17). In an example, extension460curves towards the central axis401. In another example, extension460curves from first end431towards second end435. Referring toFIG.18, a continuous inside surface464of extension460sweeps through an arc towards second end435. Critically, this arc is subtended by angle β which measures less than 90 degrees. With this arrangement, extension460does not obstruct change of grip on the sporting implement and/or pallets and/or pallet system and/or the prism. Further, extension460does not prevent swinging of the sporting implement and does not prevent wrist flicking of the sporting implement, particularly, when the sporting implement is a racket. In an example, extension460sweeps through an arc such that β measures less than 60 degrees. FIGS.19-23represent a variety of alternative cross-sections plane B ofFIG.17may yield. 
Similar to the above-mentioned cross-sections discussed with reference toFIGS.6-10, the cross-sectional view ofFIG.19reflects a first non-zero width ω1at the longitudinal cross-section maximum height, the cross-sectional view ofFIG.20reflects a width ω1of zero at the longitudinal cross-section maximum height and the cross-sectional view ofFIG.21reflects a non-zero width ω1at the longitudinal cross-section maximum height which is less than the width ω2at the longitudinal cross-sectional minimum height. While not a preferred form, the cross-sectional view ofFIG.22reflects a non-zero width ω1at the longitudinal cross-section maximum height which is greater than the width ω2at the longitudinal cross-sectional minimum height. While also not a preferred form, the cross-sectional view ofFIG.23reflects a non-zero width ω1at the longitudinal cross-section maximum height and a width ω2of zero at the longitudinal cross-section minimum height. With reference toFIG.24, rigid transverse extension460has a critical width at a longitudinal cross-section maximum height that is no greater than 75% of a width c of the prism. At these relative dimensions, the width of the surface of the transverse extension which contacts the user's fifth finger does not exceed the length of the fifth proximal phalanx. As such, pressure on the joints of the finger is avoided to improve comfort and reduce risk of injury. Further, this width will not impede swift change of orientation of the grip on the sporting implement and/or pallets and/or pallet system and/or the prism. In an example wherein pallet system400is used in association with a racket, extension460is centered on the second, fourth, sixth or eighth bevel with the racket blade aligned with the first and fifth bevels. In another example wherein pallet system400is used in association with a racket, extension460is rotated from a plane of a racket blade by between 50 and 90 degrees around central axis401. In a further example, extension460is rotated from the plane of the racket blade by about 70 degrees. In another example, the degree of rotation from the plane of the racket blade may be varied by degree of rotation of pallet system400relative to the handle core. Pallet system400may be irremovably connected to the shaft during manufacturing, or the prism may be provided as two half shells410and450configured for coupling to the shaft either by adhesives or by other means. FIG.25illustrates an exploded view of the example pallet system ofFIGS.14-18as it may be coupled with an example handle core. First410and second450mating shells may include teeth421which engage with teeth621provided to an exterior surface of handle core600of the sporting implement. Wrapping a grip (not shown) around the first and second shells410and450may secure the same to the handle core of the sporting implement. The grip, may have various degrees of tackiness. Pallet system400may be formed from any of a variety of rigid, lightweight materials including but not limited to polyurethane and other polymers, nylon and composite materials such as graphite sheets or grafil. Pallet system400may take any of a variety of forms suitable for use with a sporting implement and/or handle core. In an example, pallet system400may be shaped with eight bevels adjacent to each other. Eight elongate ridges are formed between the bevels along the length. In an example, pallet system400exhibits an octagonal cross-section. 
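The dimensional relationships recited above for extension160and extension460(an arc subtended by an angle of less than 90 degrees, a width at the longitudinal cross-section maximum height of no more than 75% of the base or prism width, and a rotation from the blade plane of between 50 and 90 degrees) can be collected into a simple numeric check. The sketch below is illustrative only; the class name, field names and sample values are assumptions, not measurements of any disclosed endcap or pallet.

```python
from dataclasses import dataclass

@dataclass
class ExtensionGeometry:
    arc_angle_deg: float     # angle (alpha or beta) subtended by the extension's inside surface
    extension_width: float   # width at the longitudinal cross-section maximum height
    base_width: float        # width of the base portion (endcap) or prism (pallet system)
    rotation_deg: float      # rotation of the extension from the racket-blade plane

def within_described_ranges(g: ExtensionGeometry) -> bool:
    """True when a candidate geometry satisfies the ranges described in this section."""
    return (g.arc_angle_deg < 90.0
            and g.extension_width <= 0.75 * g.base_width
            and 50.0 <= g.rotation_deg <= 90.0)

# Example: a 60-degree arc, 20 mm wide extension on a 30 mm wide base, rotated about 70 degrees.
print(within_described_ranges(
    ExtensionGeometry(arc_angle_deg=60.0, extension_width=20.0,
                      base_width=30.0, rotation_deg=70.0)))   # prints True
```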
WhileFIGS.14-18&24-26reflect a plane of split490of half shells410and450on the fourth and eighth bevels and being spaced from extension460by one bevel, half shells410and450may be split along any bevels with the split being spaced from extension460by more or fewer bevels. In a further example, split plane490may be between two edges of pallet400. In another example, system400may be comprised of more than two shells. FIGS.1-26illustrate non-limiting example manners of providing extensions to sporting equipment handles. The disclosure anticipates other manners of providing extensions to handles. In another example, a transverse handle system includes a thin-walled annulus including a bottom and a top with a height therebetween. The annulus includes a central opening. A central axis directed through a center of the central opening extends in a direction of the height. A transverse extension projects from the thin-walled annulus curving from the bottom towards the top. The thin-walled annulus is configured to partially encompass a handle pallet which surrounds a handle core, shaft or hairpin of the sporting implement. The thin-walled annulus may be further configured to encompass a sporting implement handle endcap which, together with a handle pallet, surrounds the handle core, shaft or hairpin. The annulus may be formed from first and second mating shells which may be semi-annular. Through-holes in each of the first and second mating shells facilitate coupling the assembled first and second mating shells to form the annulus. Centers of the through-holes are aligned along an axis transverse to the central axis and are configured to receive a fastener for coupling the first and second mating shells around the handle pallet. In an example, the fastener is a crossbolt or cross-pin. Additionally/alternatively, the annulus may include a concave interior surface configured to engage with a convex exterior surface of the handle pallet or an endcap. The annulus may be formed from a resilient, pliable material such that it can be slipped over the convex surface of the handle pallet or endcap where it will grip the same with the mating concave surface. In an example, the resilient annulus is formed from a rubber. As with above-mentioned embodiments, the extension contacts an exterior lateral portion and upper portion of the fifth, fourth or third proximal phalanxes of the user's hand and constrains the hand in both the transverse and longitudinal aspects while allowing rotation of a first metacarpal of the hand away from the central axis. The extension may take any of a variety of dimensions and/or shapes suitable for constraining a hand of a sporting equipment user including but not limited to those described above. In yet another example, a sporting implement shaft with an exterior surface extending between first and second ends around a central axis also includes a transversely-directed socket formed in the exterior surface. A transverse extension has a plug configured for receipt in the transversely-directed socket to secure the extension to the sporting implement shaft. One or more pallets and/or an endcap may be provided with an opening, channel or notch to accommodate the extension. With the one or more pallets secured around the shaft, the extension projects therethrough for contact with a user's hand. Similarly, with the endcap secured to the one or more pallets, the extension projects therethrough. 
As with above-mentioned embodiments, the extension contacts an exterior lateral portion and upper portion of the fifth, fourth or third proximal phalanxes of the user's hand and constrains the hand in both the transverse and longitudinal aspects while allowing rotation of a first metacarpal of the hand away from the central axis. The extension may take any of a variety of dimensions and/or shapes suitable for constraining a hand of a sporting equipment user, including but not limited to those described above. In yet another example, the extension may be part of a sporting implement formed as one piece and additionally including one or more of a primary handle, a sporting implement shaft and a sporting implement blade. FIG.27illustrates example sporting equipment700in use in association with an example extension160in accordance with embodiments of the disclosure. With a hand of a user gripping around the handle pallet, example extension160contacts an exterior lateral portion and upper portion of the fifth proximal phalanx of the hand. The hand is constrained in both the transverse and longitudinal aspects while being allowed rotation of a first metacarpal of the hand away from the central axis. FIG.28illustrates an additional or alternative transverse extension for a sporting implement800. Extension860includes a branch yielding first862and second868extensions relatively rotated about the central axis and/or around a perimeter surrounding the bottom surface. Extension860may be suitable for use in association with at least the disclosed endcaps and pallets, for example, as an alternative to extension160or extension460. Embodiments of the disclosure are susceptible to being used for various purposes, including, though not limited to, enabling users to prevent their hands from sliding or twisting on a racket handle while reducing the amount of gripping force required. In addition to other sporting equipment, the grip of rackets for tennis, racquetball, squash, badminton, pickleball and padel, as well as table tennis paddles, may be improved. Modifications to embodiments of the disclosure described in the foregoing are possible without departing from the scope of the disclosure as defined by the accompanying claims. Expressions such as "including", "comprising", "incorporating", "consisting of", "have", "is" used to describe and claim disclosed features are intended to be construed in a non-exclusive manner, namely allowing for items, components or elements not explicitly described also to be present. Reference to the singular is also to be construed to relate to the plural.
23,286
11857855
DETAILED DESCRIPTION FIG.1shows a top view of the racket head1of a tennis racket. The longitudinal diagonal2through the racket head contour with the greatest length and the transverse diagonal3through the racket head contour with the greatest width are also shown in the image of the racket head1. The longitudinal diagonal2and the transverse diagonal3intersect at the point of intersection4. Furthermore, the midpoint5of the longitudinal diagonal2is identified. The offset between the point of intersection4and the midpoint5of the longitudinal diagonal2(“center offset”) was determined to be 9.33 mm in the example shown. Furthermore, inFIG.1a circle6having a diameter smaller than the length of the transverse diagonal3is depicted around the established point of intersection4. As is readily apparent fromFIG.1, this graphically visualized circle6clearly illustrates the deviation of the racket head shape from a circular shape. In order to better quantify this deviation, a plurality of racket head contour segments8can be defined according to the invention, each of which extends from the circle6to the racket head contour and is bounded by an arc of the circle, a portion of the racket head contour and two sides7extending in a radial direction with respect to the point of intersection4, wherein the angle between the two sides7of a racket head contour segment8is the same for all racket head contour segments. In the example according toFIG.1, this angle is 15°. If the length of the sides7is now established for each racket head contour segment8, this length is a quantitative measure of the deviation of the racket head shape from a circular shape. According to the invention, the individual racket head contour segments8can be classified on the basis of the determined length of the sides7. This classification can then also be graphically visualized, for example by coloring the sides7of the racket head contour segments8. This coloring was carried out in the example ofFIG.1, but can only be seen to some extent due to the black-and-white representation. It should be clear, however, that coloring using an appropriately selected color spectrum can clearly visualize for the viewer the measure of the deviation of the racket head shape from a circular shape as discussed here. Instead of the length of the sides7of the racket head contour segments8, their area can also be determined. The racket head contour segments8can then be classified and colored according to the determined area, for example. InFIGS.2B to10B, corresponding classifications for the racket head shapes according toFIGS.2A to10Aare visualized by means of different shades of gray, each indicating the area of the racket head contour segments. The scaling can be seen on the right side of each ofFIGS.2B to10B, according to which the shades of gray from white to black classify the following area ranges (each measured in cm2): <3.0-3.0; 3.0-4.5; 4.5-5.9; 5.9-7.4; 7.4-8.9; 8.9-10.3; 10.3-11.8; 11.8-13.3; 13.3-14.7; 14.7-16.2; 16.2-17.7; 17.7-19.1; 19.1-20.6; 20.6-22.1; 22.1-23.5; 23.5-25.0; 25.0->25.0. As revealed by a comparison ofFIGS.2A to10A, the individual racket head shapes are in some cases visually hard to distinguish from one another. In fact, however, the differences with respect to the parameters according to the invention are considerable, as is apparent fromFIGS.2B to10B. 
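As a rough computational sketch of the segmentation just described, the following Python code quantifies the deviation of a head contour from a circle by dividing the region between the circle6and the contour into 15-degree segments and reporting a radial side length and an area for each, together with the center offset. The contour used here is a synthetic, egg-shaped curve standing in for a digitized racket head, and the choice of circle diameter (45% of the transverse diagonal) is an illustrative assumption rather than a value taken from the figures.

```python
import numpy as np

# Synthetic closed head contour (mm); y runs along the longitudinal direction.
t = np.linspace(0.0, 2.0 * np.pi, 7200, endpoint=False)
x = 120.0 * np.cos(t)                                  # half-width 120 mm
y = 160.0 * np.sin(t) + 15.0 * np.sin(t) ** 2          # half-length 160 mm, slight asymmetry

def longest_chord(primary, secondary, positions):
    """Position, length and midpoint of the longest chord perpendicular to the primary axis."""
    lengths = []
    for p in positions:
        sel = np.abs(primary - p) < 1.0                # contour points near this chord
        lengths.append(secondary[sel].max() - secondary[sel].min() if sel.any() else 0.0)
    i = int(np.argmax(lengths))
    sel = np.abs(primary - positions[i]) < 1.0
    mid = 0.5 * (secondary[sel].max() + secondary[sel].min())
    return positions[i], lengths[i], mid

# Longitudinal diagonal (longest vertical chord) and transverse diagonal (widest horizontal chord).
x_long, long_len, long_mid_y = longest_chord(x, y, np.linspace(x.min(), x.max(), 400))
y_trans, trans_len, _        = longest_chord(y, x, np.linspace(y.min(), y.max(), 400))

# Point of intersection of the two diagonals and the resulting center offset.
px, py = x_long, y_trans
center_offset = abs(long_mid_y - y_trans)

# Contour radius about the point of intersection, and a reference circle smaller than the
# transverse diagonal (factor 0.45 is an illustrative choice).
ang = np.arctan2(y - py, x - px)
rad = np.hypot(x - px, y - py)
order = np.argsort(ang)
ang, rad = ang[order], rad[order]
R = 0.45 * trans_len

# One contour segment per 15-degree wedge: radial side length and enclosed area.
edges = np.linspace(-np.pi, np.pi, 25)
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (ang >= lo) & (ang < hi)
    if not sel.any():
        continue
    r = rad[sel]
    side_mm = r[0] - R                                 # side length at the wedge boundary
    area_cm2 = 0.5 * np.mean(np.maximum(r ** 2 - R ** 2, 0.0)) * (hi - lo) / 100.0
    print(f"{np.degrees(lo):6.1f} deg  side {side_mm:6.1f} mm  area {area_cm2:5.1f} cm^2")
print(f"center offset = {center_offset:.2f} mm, transverse diagonal = {trans_len:.1f} mm")
```

The printed per-segment side lengths and areas are the quantities that can then be classified and colored as described above, and the same procedure applied to a digitized contour would reproduce the comparisons shown inFIGS.2B to10B.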
Since these differences also have a corresponding influence on the playing characteristics of the corresponding ball game racket, the objective and quantifiable characterization made possible according to the invention on the basis of the parameters according to the invention is of high value for the player. From the point of view of maximum error forgiveness, each player will try to hit the ball wherever possible at the point of maximum racket width. However, as Brody points out (cf. H. Brody, Medicine and Science in Tennis, vol. 8, no. 1, April 2003), the point of “maximum ball acceleration” varies depending on the kinematics of the swing or stroke as well as depending on the relationship between the velocities of the incoming ball and the struck ball. For example, a volley, in which there is a strong translational movement of the racket and virtually no rotational component and in which the speed of the ball is significantly higher than the speed of the racket, should ideally be hit rather in the lower area of the hitting surface in order to achieve both good acceleration and low rotation around the center of gravity due to the contact with the ball. In contrast, a forehand topspin stroke performed in a modern manner, in which strong acceleration is achieved primarily through the use of the wrist as the last link in the kinematic chain and the stroke movement therefore contains a strong rotational component, should be hit wherever possible in the upper area to achieve maximum ball acceleration. Thus, for a player who acts a lot from the baseline with strokes that have a high proportion of rotational movement, a racket exhibiting a large center offset, such as the racket inFIG.3B(HEAD™ Pyramid Tour 630 racket) or the racket inFIG.7B(Wilson® Burn FST 99 racket), would be particularly suitable. In contrast, for a player who plays a lot of volleys, a racket exhibiting a small or even negative center offset, such as the racket inFIG.5B(Völkl™ 10 PB racket) or the racket inFIG.9B(Yonex™ VCore PRO 97 racket), would be particularly suitable. What is noticeable here is that optically the two rackets inFIG.3B(HEAD™ Pyramid Tour 630 racket) andFIG.9B(Yonex™ VCore PRO 97 racket) do show certain similarities. Both would be referred to as teardrop head shape in current usage. However, when determining the respective center offset, these values are extremely different, namely 44.71 mm and 5.32 mm. Visually, by looking at the racket head, this value of the center offset can thus be determined only with great difficulty or insufficiently. By using the racket head contour segments, the differences can also be seen very clearly when comparing the two head shapes. Even the basic strokes can be performed in various different ways. For example, some players play rather “smoothly”, i.e., without generating a lot of ball spin. This is expressed in a stroke movement in which the racket is swung relatively horizontally through the ball. Since in this case the racket movement is relatively similar to the direction of motion of the incoming ball (only directly opposite), the risk of hits outside the longitudinal axis of the racket is rather low in this case. Therefore, these players typically prefer a racket with a low ratio of width to length. This style of play and the associated rackets were particularly strongly represented in the 1990s, and the racket inFIG.5B(Völkl™ 10 PB racket) is still a representative of this category. 
Meanwhile, most players play their basic strokes with a lot of spin generation by accelerating the racket in contact with the ball not only horizontally but also strongly vertically (mainly by forearm rotation or the wrist as the last link in the kinematic chain). The players practically hit or wipe past the ball at the point of impact and thus achieve a high spin generation. Naturally, when the movement is performed in such a way, i.e. when the direction of movement of the racket in contact with the ball is clearly different from the direction of movement of the incoming ball, there is a greater risk of hitting the ball not on the longitudinal axis of the racket but more to the left or right of the longitudinal axis of the racket (off-center). In the case of extreme forms of these strokes, even professional players can be seen time and again hitting the ball with the racket frame, i.e. the ball is hit on the racket so far to the side that the frame is “in the way”. For these types of players, it is thus clearly more important to have a larger width-to-length ratio of the racket head, as is shown, e.g., by the racket inFIG.9B(Yonex™ VCore PRO 97 racket). Hence, depending on the movement kinematics with which the player hits, with which tactics he/she plays (a lot of volleys or many basic strokes) and with how much spin he/she plays, very different ideal racket head shapes can result for a plurality of players having the same playing strength, but so far the differences between these racket head shapes can only be insufficiently described with the known methods and data. Additionally or alternatively, the evolute of the racket head shape can be established according to the invention, as already mentioned. This is schematically illustrated inFIG.11. Accordingly, the normal vectors9to a plurality of points along the racket head contour of the racket head1are identified, wherein the end point10of each normal vector9corresponds to the center of the associated circle of curvature. The plurality of end points of the normal vectors then form the evolute according to the invention, wherein this plurality of end points can optionally also be connected to form a curve. InFIGS.12and13, such evolutes are shown together with the racket head contours for the “HEAD™ Graphene Touch Radical MP 2” tennis racket and “Yonex™ Ezone DR100” tennis racket, respectively. Alternatively or additionally to the actual evolute, the length of each normal vector (i.e., the radius of the respective associated circle of curvature) as a function of the corresponding angular position on the racket head contour can also be represented as a curve, as is the case at the bottom of each ofFIGS.12and13. Even though the two racket head contours inFIGS.12and13reveal to the naked eye that the two rackets have different head shapes, it should be clear that the graphical representations of the evolutes allow far more precise and quantitative conclusions to be drawn.
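As a point of reference for the construction just described, the end point10of each normal vector9is the center of curvature of the contour at that point, which for a contour given parametrically as (x(t), y(t)) follows from the standard evolute formulas (textbook differential geometry rather than anything specific to the figures):

```latex
% Center of curvature (X, Y) and radius of curvature rho for a parametric contour (x(t), y(t)).
\begin{aligned}
X(t) &= x(t) - \frac{y'(t)\,\bigl(x'(t)^{2} + y'(t)^{2}\bigr)}{x'(t)\,y''(t) - y'(t)\,x''(t)}, \qquad
Y(t) = y(t) + \frac{x'(t)\,\bigl(x'(t)^{2} + y'(t)^{2}\bigr)}{x'(t)\,y''(t) - y'(t)\,x''(t)}, \\[4pt]
\rho(t) &= \frac{\bigl(x'(t)^{2} + y'(t)^{2}\bigr)^{3/2}}{\bigl|\,x'(t)\,y''(t) - y'(t)\,x''(t)\,\bigr|}.
\end{aligned}
```

The set of points (X(t), Y(t)) is the evolute shown inFIGS.12and13, and ρ(t), plotted against angular position, gives the curves shown at the bottom of those figures.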
9,833
11857856
DETAILED DESCRIPTION Backstop50(50a,50b,50cand50d),FIGS.1-22, is for placement around a goal52having a goal height54to stop sport projectiles from traveling beyond an outer edge of the goal if the projectile misses the goal. Goal52has a frame56supporting a goal net58. Goal net58has a goal net mesh60. As shown inFIGS.1-3, backstop50comprises a material implement62. Material implement62has a top edge64, side edges66and a bottom edge68. Backstop50comprises a goal acceptance aperture70(70a,70b) located within the material implement substantially central to bottom edge68and extending towards top edge64. Backstop50further comprises an attachment mechanism72that secures material implement62to goal52. Backstop50also comprises at least one bracket90operable to attach the backstop to goal frame56. At least one support structure76extends from each bracket90to support material implement62so that, when the material implement is secured to goal52, the material implement extends beyond the outer edge of the goal. The components making up two exemplary embodiments of backstop50are shown inFIGS.4aand4b. Material implement62may be at least one of a single and multi-piece system of fabric or fabric sections. In one embodiment, material implement62may be a backstop net mesh having backstop net mesh openings78. Backstop net mesh openings78may also act as lacing openings80. In other embodiments, material implement62could be made of a lightweight material such as banner mesh materials formed from polyester or vinyl fibers. Material implement62preferably has a reinforcing edge82along top edge64and side edges66. Reinforcing edge82provides a strong attachment location for support structure76, whose cap144and hook146attach to the reinforced edge. Goal acceptance aperture70may be an opening70a(FIG.4a) or one or more slits70b(FIG.4b) within material implement62as used in backstop50,50b. Slits70bextend towards top edge64from bottom edge68. Slits70bextend part way through material implement62. Material implement62may exist between the slits,FIG.4b. Adjacent opening70aor slit70b, near aperture edge71, are provided lacing openings80. Lacing openings80may be backstop net mesh openings78. Lacing openings80may be openings within the fabric of material implement62. Lacing openings80may also be closed loops of material secured to the edge of the goal acceptance aperture70. Attachment mechanism72may be a lacing system84(FIG.5a) including lacing cord86for threading through any number of lacing openings80and goal net mesh60to secure material implement62to goal52. Attachment mechanism72may also be a plurality of hooks87(FIG.5b) attached to goal frame56and operable to attach material implement62to goal frame56by hooking the material implement directly to the hooks or lacing the material implement to the hooks using lacing cord86. Attachment mechanism72may also be a net-to-net attachment mechanism (FIG.5c) where the lacing system84includes a lacing cord86for threading through any number of lacing openings80between goal net mesh60to secure material implement62to goal52. Attachment mechanism72may further be a closed loop attachment mechanism (FIG.5d) where the lacing system84includes a lacing cord86for threading through any number of closed loops154on the edge of material implement62that are laced to goal net mesh60to secure material implement62to goal52. When a lacing cord86is used as part of the attachment mechanism72, the lacing cord is preferably a polymeric tube or elastomeric tube, but may be a solid cord. 
The elastomeric tube preferably has an outer tube diameter, wherein a length equal to the outer diameter of the elastomeric tube can stretch or flex three to fifteen times the outer diameter. When lacing cord86is a tube, the lacing cord may also include the tube connectors. Connectors88(tube connectors or cord connectors) are used between sections of tube or cord to extend the length of the lacing cord. The connectors are also used to change cord types for different flexibility or other properties within a length of cord. The connectors are also used to add angled, T, X or Y connections to the tube or cord. Connectors88may be internal connectors, external connectors or other types of fittings. Bracket90,FIGS.6-14, includes a body portion92having a top94, a bottom96and a frame acceptance face98. A channel100is located along the frame acceptance face98from top94to bottom96. Channel100is for accepting goal frame56to which bracket90will be secured. Body portion92has at least one stationary hook102adjacent a first side of channel100and an adjustable hook104adjacent the opposite side of the channel. Bracket90further includes a strap106running between stationary hook102and adjustable hook104to adjustably hold goal frame56securely within channel100. In most embodiments, a pair of brackets90is preferred with one bracket for attachment to either side of goal frame56. Brackets90are preferably ambidextrous brackets in that they can be used on both the right side and left side of goal frame56interchangeably. Although bracket90is shown for use in securing support structure76to goal frame56, it is understood that this bracket may have uses in other sporting applications or even other fields outside of sports. Channel100may include both a tapered flange portion108and a recessed portion110. Tapered flange portion108is for directly engaging goal frame56. Recessed portion110is for accepting goal frame protrusions, such as net lacing bars, hooks, net fabric, etc. By providing a place for the goal frame protrusions to reside within, the flanged portion of channel100can make direct contact with goal frame56without being obstructed by the protrusions. In some embodiments, channel100is a v-shaped channel. In some embodiments, tapered flange portion108has a compliant insert112within the tapered flange to provide for a more compliant interface between bracket90and goal frame56that makes it less likely that the bracket will slip or turn during use. Stationary hook102adjacent the first side of channel100may be a plurality of stationary hooks spaced along the first side. Stationary hooks102are sized to accept a looped portion of strap106. Stationary hooks102are used for rough adjustment for connecting bracket90to different diameter goal frames56. Strap106is looped over whichever stationary hook best fits the size of goal frame that bracket90is being attached to. Adjustable hook104adjacent the opposite side of channel100is a hook element that is slidably engaged with adjustment channel114and has a worm gear116to slide the hook element along the adjustment channel,FIG.14. When strap106is looped over adjustable hook104, knob118is turned to move the adjustable hook along adjustment channel114. Adjustable hook104provides fine adjustment of the strap and determines the amount of force holding goal frame56. Strap slot117may be provided between the channel and at least one stationary hook. 
Goal net slots119are for horizontal goal net strings to pass through, so they do not get pinched between compliant inserts112and goal frame56. Bracket90may further include struts120that reduce possible rotation of the bracket around the goal frame56. Details of strut120are shown inFIGS.15a-16c. Strut120includes an ambidextrous strut rod122having a first strut rod connection124extending on the bracket90. Strut rod122has a strut rod length and is formed to fit a second strut rod connection126. At second strut rod connection126, a strut rod connector128is provided for connection to goal frame56. Strut rod connector128has a frame engagement section130that can be secured to goal frame56with fasteners132such as zip ties. The shape of strut rod122is such as to allow for a connection of bracket90to goal frame56that is at 90-degrees to the frame to eliminate rotation around the goal frame. Bracket90and strut rod connector are engaged on both vertical and horizontal portions of goal frame56. One or more support structures76extend from each bracket to support material implement62so that the material implement that is secured to goal52extends beyond the outer edge of the goal. Support structure76may consist of a variety of support shapes that extend material implement upwards above goal52and outwards from the goals sides. Support structure76may be in the form of support rods. Support structure76may be a solid rod, a rod that is several pieces of rod that fit together, a hollow multi-piece rod with an elastic cord on the interior, etc. Support structure76may be fitted with a cushioning sleeve77on all or part of the exterior, the cushioning sleeve helps to soften the impact force from sport projectiles that may hit the support rod when the projectile misses goal52. Cushioning sleeve77may be made of a resilient material such as a foam tube that is slipped over the support structure. Support structure76could be a single support structure, but is preferably a pair of support structures, one on each side of goal52. In one embodiment,FIG.17, support structure76is a T-shaped rod134extending from bracket90. One support hole would be provided on the top of each bracket for holding support structure76. In another embodiment,FIG.1, support structure76is two rods, interior rod136and exterior rod138. In this embodiment, interior rod136is secured in bracket90by interior rod support hole140and exterior rod support hole142, respectively. Bracket90may have two support holes, one on each side of the channel to accommodate the support rods depending on whether the bracket is used as a right side bracket or a left side bracket. In this manner one bracket90can be both a right side bracket and a left side bracket, reducing the need for manufacturing two different types of brackets. At the end of each support structure76is a cap144and hook146that attaches to material implement62,FIG.18. Backstop50is adaptable to fit various size goals via an adjustable side weight system, which is aided by the use of side weights150,FIGS.19a-22. Side weights150stretch and hold the side sections of material implement62to be in contact with the ground. When a sport projectile hits the sides of backstop50, the side weights150keep the material implement from being pushed backwards. 
For goals of different goal height54, side weights150may be attached at different heights of the side sections of material implement62to keep those sections taut.FIGS.19a-19dillustrate how the same backstop50can be integrated to fit two different goals, respectively a lacrosse goal and a hockey goal. In general, side weights150are elongated side weights. Side weights150have a weight length152with a plurality of side weight closed loops154disposed along the weight length. In one embodiment,FIGS.20a-20d, side weight150is a removable weight156within a sleeve158having a sleeve length with lacing closed loops154attached along the length of the sleeve. Removable weight156can be securely held within sleeve158by folding the open end of the sleeve over and fastening the folded part with fastener160. Fastener160may be Velcro®, a zipper, etc. In one embodiment, side weights150are attached by wrapping backstop net around the weight and lacing the backstop net mesh openings78with weight lacing cord86,FIG.21a. In another embodiment, side weights150can be attached by lacing closed loops154to the goal net mesh,FIG.21bandFIG.1. Generally, material implement62has a plurality of rows of lacing openings extending upward from the bottom edge towards the top edge. These lacing openings may be backstop net mesh openings155,FIG.1, or more general weight lacing openings162formed within the material implement of backstop (50,50d),FIG.22. Side weights150may be secured to any row of lacing openings80to secure the side weights to the material implement at varying heights above the bottom edge to match the goal height of the goal. Depending on the type of fabric from which the material implement is fabricated, indicia164may be provided for advertisement purposes on the backstop (50,50d),FIG.22. Because backstop50has a large area when deployed, the backstop can act like a sail and have large wind forces imparted upon it. Also, because the outer edges of the backstop50extend a great distance beyond goal52, a sport projectile hitting the outer edges of the backstop may create large lever forces that can act to move or tip the goal if it is not tethered to the ground. To mitigate these circumstances, counter weight166, as shown inFIGS.2and3, may be provided to keep the backstop and goal structure stable. Alternatively, securing stakes168may also be used to secure the goal and backstop to the ground. Such a staking system may also include an elastic or silicone cord loop that can be wrapped around the lower corners of the net and then hooked to securing stakes168. The cord helps absorb shock to the net and helps prevent the stake from pulling out of the ground. An exemplary process for attaching backstop50to goal52is to first unwrap and lay all components out on the ground as shown inFIGS.4aand4b. Brackets90are then attached to each side of goal52on the goal frame56in the range of 4″ to 6″ below the crossbar,FIGS.15aand15b. First, bracket strap106is fed through an opening in the goal's mesh netting strings, and horizontal net strings fit through goal net slots119on bracket90so that the net strings do not become pinched between the compliant inserts112and goal frame56. Straps106are adjusted for proper fit to the goal frame diameter by hooking the strap to the appropriate stationary hook102. The opposite strap end is looped around adjustable hook104. Knob118is then turned to drive the worm gear116in the threaded slidable cleat nut105to draw strap106tight and pull bracket90firmly against goal frame56. 
Optional struts122are secured in place by inserting one end of the strut into the center offset hole115in the back of the bracket90. The opposite end of strut122locates the position of strut connector128on goal frame56. Strut connector128is fastened with fasteners132at that location on the goal frame cross bar. Strut122is inserted into the female receiver of the strut connector128. Exterior rods138with end caps144and hooks146are attached to each of the upper most corners of the material implement62. Interior rods136attach to material implement62at the appropriate distance on the inner top edge. Support structures76(interior rods136and exterior rods138) are then used to lift material implement62up so that the material implement is placed around the goal so that goal acceptance aperture70is fitted around the outer edge of the front of goal52. Each of the exterior rod138bottom ends are then inserted into bracket90at their respective exterior rod support holes142, which will stretch and hold the material implement in place up and around goal52. Interior rods136are then inserted into the interior rod support holes140to support the middle upper edge portion of the material implement. Edges of acceptance aperture can be installed in three ways. 1) Aperture edge71is left to drape on the backside of the goal frame52on its netting. 2) Aperture edge71is draped in front of the goal frame52then attached by threading lacing cord86through the backstop mesh openings78and through goal net mesh60openings as to join the two materials. 3) Aperture edge71is draped behind the goal frame52and material implement62is attached by threading lacing cord86through backstop mesh openings78and through goal net mesh60openings as to join the two materials. Lacing cord86is passed through lacing openings80of the backstop and goal net mesh60of goal52. Side weights150are then laced at the appropriate height on each side of goal52. Optional counter weight166is deployed to rest on top of the back side of goal52or goal net. While several embodiments of the invention, together with modifications thereof, have been described in detail herein and illustrated in the accompanying drawings, it will be evident that various further modifications are possible without departing from the scope of the invention. The scope of the claims should not be limited by the preferred embodiments set forth in the examples, but should be given the broadest interpretation consistent with the description as a whole.
16,273
11857857
DETAILED DESCRIPTION OF EMBODIMENTS Turning toFIGS.1a-1f, a portable marker10, useful for sporting activities such as skating and ice hockey is seen. The marker10has an outer, stepped peripheral wall12having a generally vertical lower area14, a generally vertical upper area16of smaller circumference than the lower area, a generally horizontal shoulder18connecting the lower and upper areas of the wall, and a generally horizontal top surface20defining a top opening21of the marker. For purposes herein, the term “generally vertical” shall be understood as being within twenty-five degrees of vertical. As seen best inFIGS.1b,1dand1f, the marker10also has an inner wall24extending from the generally horizontal top surface20and spaced from the vertical upper area16and vertical lower area14of the outer peripheral wall12. The marker10further includes a bottom wall26having a friction surface located between the bottom of the inner wall24and the bottom of the lower area14of the outer peripheral wall12. The inner wall is shown angling in a concave fashion and defining an inner hollow30that may receive the toe of a hockey stick for purposes of manipulation of the marker (as described hereinafter with respect toFIGS.13a-13e). The inner hollow30is shown as a modified frustoconical shape that presents a lower concave area and an upper area that is conical or slightly convex; which may also be called “plunger”-shaped. The inner wall24, outer wall12and bottom wall26together define a second hollow40that are optionally provided with ballast42(seen inFIG.1b). The second hollow40may be divided by interior walls (discussed with reference toFIG.1g) into compartments for the ballast. In one aspect, the bottom wall may be shaped as a ring (disk with central hole), with a ring width similar to the width of the generally horizontal shoulder18of the outer peripheral wall12so that the portable marker10may be stacked on another portable marker. In one embodiment, the bottom wall26is formed separately from the remainder of the portable marker and is glued, mechanically fitted or otherwise attached to the bottom surfaces of the peripheral wall12and the inner wall24. In one embodiment, the peripheral and inner walls of the portable marker10are comprised of a durable, water-resistant material able to withstand sub-freezing temperatures such as plastic, rubber, aluminum, or other suitable material. The bottom wall26may be formed of a different material than the peripheral and inner walls. In embodiments, the bottom wall26is formed of a more rigid material than the materials of the peripheral and inner walls. As will be discussed hereinafter with reference toFIGS.2a-12, the bottom surface of the bottom wall26is provided with an enhanced friction surface. In addition, the bottom wall may be formed of a material that is substantially heavier than the material of the durable, water-resistant material of the body of the marker. In this manner, the bottom wall may itself act as a ballast. In embodiments, the ballast comprises a separate ring of material such as metal. In other embodiments, the ballast comprises pellets of metal such as steel; e.g., metal shot, bearings or sinkers. Thus, as shown in an alternative embodiment ofFIG.1g, the second hollow or void40aformed between the inner wall24aand at least the lower vertical wall14ais divided into compartments by a plurality of divider walls45running therebetween. The pellets42amay be located in various compartments formed in the second hollow40a. 
According to one aspect, the pellets42acounteract lateral forces exerted upon the marker10and act as a shock-absorbing mechanism for the marker10. In embodiments, the ballast comprises pseudo-plastic fluid such as a carbomer gel. In embodiments, the ballast comprises sand or other granulated mineral. In embodiments, the ballast comprises a water-soluble material such as sodium chloride crystals. In embodiments, loose ballast in the form of solution, pellets, grains or crystals may be contained within a sealed sleeve of polyethylene or other plastic film tubing. In embodiments, the ballast comprises energy-absorbing foam such as cellular material. The ballast may shift in relation to the walls or surfaces of the marker10. In one aspect, the ballast counteracts lateral forces exerted upon the marker and acts as a shock-absorbing mechanism for the marker. In embodiments, such as seen inFIGS.1hand1i, a portable marker10bmay be substantially as shown inFIGS.1a-1f, except that the inner hollow40bbetween the inner wall24band the lower vertical wall14bis at least partially filled with a geometric cellular formation such as a structural lattice41. The lattice may be formed of the same material as the inner and outer walls of the portable marker and may take different formats. Alternatively, the cellular material forming the lattice41may be of a different material which may take a cellular lattice structure. Or, the material may be a foam. It is noted that the structure41may act as a shock-absorbing mechanism for the marker10b. In embodiments, the portable marker (including any ballast) weighs between three-quarters of a pound and two pounds. In embodiments, the portable marker (including any ballast) weighs between one and one and a half pounds. In embodiments, the portable marker is between one inch and eight inches tall. In embodiments, the portable marker is between one-and-a-half inches and three inches tall. In embodiments, a stack of six markers is equal to or less than twenty inches tall. In other embodiments, a stack of six markers is equal to or less than forty-eight inches tall. In embodiments, the height of the vertical lower area14of the peripheral wall is between two and five times the height of the vertical upper area16of the peripheral wall. In embodiments, the vertical lower area14of the peripheral wall12of the portable marker10has an outside diameter of between six and twelve inches. In embodiments, the vertical lower area14of the peripheral wall12of the portable marker10has an outside diameter between twelve and eighteen inches. In embodiments, the top opening21defined by the top surface20of the marker10is at least two-and-a-half inches in diameter. In embodiments, the bottom wall26is chamfered at its inside diameter to help facilitate insertion of a hockey stick blade toe between portable marker and the ice on which it sits. In embodiments, the bottom wall26is chamfered at its outside diameter to help facilitate separation of individual markers from one another in a stack. FIG.3is a diagram of an embodiment of a bottom surface226of a portable marker210having pyramidal points280extending therefrom. The pyramidal points are shown as extending from square bases and extending around the bottom surface226. The pyramidal points may be formed from metal, plastic, or other material. FIGS.2aand2bare diagrams of an embodiment of a bottom surface326of a portable marker310having conical points380. 
The conical points are shown as extending from cylindrical bases and extending around the bottom surface. The conical points may be formed from metal, plastic, or other material. FIG.4is a diagram of an embodiment of a bottom surface426of a portable marker410having inverse-conical blades480. The inverse-conical blades have a cylindrical outer surface, and an inner surface that tapers so that the material is thicker where it joins the bottom surface426and thinner as it extends away therefrom. The inverse-conical blades may be formed from metal, plastic, or other material. FIG.5is a diagram of another embodiment of a bottom surface526of a portable marker510having a continuous blade580angled ninety degrees (perpendicular) to the ice surface. The blade580has a serrated pattern that extends three-hundred sixty degrees around the bottom surface526. The serrated blade may be formed from metal, plastic, or other material. FIG.6is a diagram of another embodiment of a bottom surface626of a portable marker610having a continuous blade680angled forty-five degrees to the ice surface. The angled blade may be formed from metal, plastic, or other material. FIG.7is a diagram of another embodiment of a bottom surface726of a portable marker710having steel mesh780. FIG.8is a diagram of another embodiment of a bottom surface826of a portable marker810having a textile880. In embodiments, the textile fibers are impregnated with abrasive material such as aluminum oxide or other suitable material. In embodiments, the textile comprises polyolefin or other suitable materials such as may be used in automobile tire socks intended to improve traction on snow and ice. In embodiments, the textile may be a vinyl textile or other suitable wettable material capable of forming a frozen bond with the ice surface after wetting. The vinyl textile or other suitable material may be single ply or multiple ply and incorporate open cell foam or similar material capable of absorbing, retaining and expressing liquid (e.g., water). FIG.9is a diagram of another embodiment of a bottom surface926of a portable marker910where the bottom surface comprises perforated steel sheet similar to that which may be used in a kitchen grating utensil. FIG.10is a diagram of another embodiment of a bottom surface1126of a portable marker1110where the bottom surface includes a plurality of straight blades1180at ninety degrees (perpendicular) to the ice surface. Blades1180may be formed from metal, plastic, or other material. FIG.11is a diagram of another embodiment of a bottom surface1326of a portable marker1310having an abrasive tread or surface1380. The abrasive tread1380may be a suitable pressure sensitive anti-slip tread that is readily obtainable with a covered adhesive backing, such as 3M™ Safety-Walk™710coarse tapes and treads. When the cover is taken off the adhesive backing, the adhesive backing is applied to the bottom surface1326of the marker1310. In other embodiments, the abrasive tread may comprise epoxy, acrylic, rubber or other adhesives impregnated with silicon carbide, aluminum oxide, allyl diglycol carbonate or other suitable materials. FIG.12is a diagram of another embodiment of a bottom surface1426of a portable marker1410having a traction-enhanced rubber compound element1480such as may be used on traction shoe outsoles for use on ice or in snow. The traction-enhanced rubber compound may contain abrasive particles such as walnut shell, silicon carbide, aluminum oxide, garnet and other materials.
As suggested byFIGS.2a-12, in embodiments, the bottom friction surface may be formed in many different arrangements and from many different materials. The friction surface may be formed from an abrasive material, an adhesive-backed safety tread, or a steel mesh, a perforated steel sheet, a textile, a traction-enhanced rubber compound, or one or a series of vertically-oriented, or angled points, blades or edges that engage the surface or the ice, or other materials or arrangements. In embodiments, the bottom friction surface assumes a ring-shaped arrangement. In embodiments, the portable marker will not be laterally displaced when located on ice and subjected to an external force of 2 newtons applied along a horizontal radial axis towards the center of the marker, halfway up the outer peripheral wall at the corresponding point of tangency. Turning now toFIGS.13a-13c, a method of lifting a portable marker from the ice with a hockey stick is illustrated. InFIG.13a, a portable marker1510with a structure the same or similar to marker10ofFIGS.1a-1f(orFIG.1g), and with a bottom friction surface such as shown in any ofFIGS.2a-12is shown with the toe1550of a blade1555of a hockey stick1560shown extending through the top opening1521of the marker1510and engaging the edge or bottom surface of a top wall1520, or the inner wall of the marker1510. By torqueing the shaft1561(FIGS.13band13c) of the hockey stick, the marker may be flipped along the blade1555and onto the shaft1561of the hockey stick1560. If the marker does not easily move onto the blade and shaft, the toe of the hockey stick can be pushed further along the inner surface of the marker1510until it reaches the bottom surface of the marker, and can be manipulated further to be forced between the bottom surface of the marker1510and the ice on which it sits, thereby disengaging the marker from the ice. At that point, the marker may be lifted (flipped) along the blade and onto the shaft of the hockey stick1560. A stack of two or more markers may be lifted from the ice and loaded onto the shaft of a hockey stick by the same method shown inFIGS.13a-13c. As seen inFIGS.13dand13e, a series or stack of markers1510may be carried on the shaft1561of the hockey stick1560. Thus, by raising the forward blade end of the hockey stick above the handle end, the marker will slide towards the handle1562where it may be secured between a first hand1575aof the user1580gripping the top end (handle) of the shaft of the stick and the other hand1575bgripping the shaft of the stick at a location forward of the marker. By moving to the respective (second, third, . . . ) locations of the markers on the ice, placing the toe of the hockey stick blade into the top openings of the respective markers, lifting (e.g., flipping) each respective marker as discussed with reference toFIGS.13a-13c, and raising the forward end of the shaft to secure each marker between the hands as discussed above, the markers1510may be gathered on the shaft of the hockey stick and easily transported together. One or more markers on the shaft of the hockey stick and located between hands1575aand1575bmay be unloaded from the shaft by moving the marker or markers to a position forward of hand1575band tilting the forward blade end of the hockey stick below the handle end. The marker or markers will slide towards the blade end where they may be deposited on the ice by torqueing the shaft of the hockey stick until the toe of the hockey stick blade points downward and the marker or markers slide off the blade. 
The marker may be slid into position by inserting the toe of a blade of a hockey stick through the top opening of the marker and engaging the edge or bottom surface of a top wall, or the inner wall of the marker and moving the toe of the blade laterally to a new location on the ice. Markers may also be unloaded by sliding a series or stack contained on the shaft off the handle end of the hockey stick. Another embodiment of a marker1610is seen inFIGS.14aand14b. Marker1610is substantially the same as marker10ofFIGS.1a-1f, orFIG.1g, with a stepped peripheral wall1612with a generally vertical lower area1614, a generally vertical upper area1616, a generally horizontal shoulder1618, a generally horizontal top surface1620, a friction-enhanced bottom wall1626, an inner wall (not shown), etc., except that the generally horizontal top surface1620defines aligned notches1620a, and the horizontal shoulder1618extends inward at the location of the notches. InFIGS.14aand14b, two notches1620aare provided and arranged such that a hockey stick may be laid into the notches. In other embodiments, top surface1620may define additional notches. FIG.15is a perspective view of three portable markers1610ofFIGS.14aand14baligned with a shaft1561of a hockey stick1560extending through the respective notches1620aof the markers. The arranged markers and suspended hockey stick shaft may be used for exercises (e.g., “stick-handling”) where the puck is passed around the markers and beneath the shaft to practice on-ice stick handling maneuvers (e.g., “dangling”). FIGS.16a-16bshow another portable marker1810. The marker1810is similar in various respects to marker10ofFIGS.1a-1fand in many respects to marker1610ofFIGS.14a-14b. Marker1810has an outer generally vertical peripheral wall1814, a generally horizontal shoulder1818defining a top opening1821of the marker, and a top ridge or series of extensions or protrusions1820that may define notches1820atherebetween. As seen best inFIG.16a, the marker1810also has an inner wall1824extending from the generally horizontal shoulder1818and spaced from the outer peripheral wall1814. The marker1810further includes a bottom wall1826having a friction surface located between the bottom of the inner wall1824and the bottom of the outer peripheral wall1814. The inner wall is shown angling in a concave fashion and defining an inner hollow1830that may receive the toe of a hockey stick for purposes of manipulation of the marker (as previously described). The inner hollow1830is shown as a modified frustoconical shape that presents a lower concave area and an upper area that is conical or slightly convex; which may also be called “plunger”-shaped. The inner wall1824, outer wall1814and bottom wall1826together define a second hollow1840that are optionally provided with ballast1842. The second hollow1840may be divided by interior walls into compartments for the ballast. In one aspect, the bottom wall may be shaped as a ring (disk with central hole), with a ring width similar to the width of the generally horizontal shoulder1818outside of top protrusions1820so that the portable marker1810may be stacked on another similar portable marker. In one embodiment, the bottom wall1826is formed separately from the remainder of the portable marker and is glued, mechanically fitted or otherwise attached to the bottom surfaces of the peripheral wall1814and the inner wall1824. 
According to one aspect, the provided portable marker resists lateral displacement from its location on a surface such as ice having low static and dynamic coefficients of friction. In one aspect, the provided portable marker is not easily upended from its prearranged orientation on ice. In one aspect, the provided portable marker is easily stackable. In one aspect, the provided portable marker is relatively light in weight and compact (relative to the markers of the prior art) and therefore easily handled and stored. In one aspect, the provided portable marker may be separated from a stack of identical portable markers using one hand. In one aspect, the provided portable marker may be placed upon, positioned, and removed by a user from an ice surface via the use of a hockey stick while maintaining an erect posture. In one aspect, the provided portable marker or a stack of two or more markers may be quickly and easily placed upon and removed from an ice surface by inserting the toe of a hockey stick blade through a central opening in the marker and employing principles of leverage to respectively unload or gather the marker or stack of markers from or onto the shaft of the hockey stick. According to another embodiment, a portable marker may have an upper portion made of a foam such as EVA, urethane, latex, or other suitable material, with a central opening for receiving a hockey stick as described above with reference to the other embodiments, and a weighted base having enhanced friction qualities, with the foam upper portion and base being shaped so that the marker is stackable such that a group of six portable markers may be carried on a hockey stick as previously described. The weight of the base and the enhanced friction aspects of the base are chosen so that the marker will not be laterally displaced when located on ice and subjected to an external force of 2 newtons applied along a horizontal radial axis towards the center of the marker, halfway up the outer peripheral wall at the corresponding point of tangency. FIGS.17a-17e,18a-18c,FIGS.19a-19c, andFIGS.20a-20eprovide details of yet another embodiment of a portable marker2310, whereFIGS.17a-17eare respectively a side view, a cross-sectional view, a top view, a bottom view, and a perspective view of marker2310,FIGS.18a-18care respectively a top perspective view, a side view, and a top view of marker2310with a portion of an insert2375lifted,FIGS.19a-19care respectively an exploded side view, an exploded cross-sectional view, and an exploded perspective view of the marker2310, andFIGS.20a-20eare detailed views ofFIG.19c. Marker2310is comprised of a durable, water-resistant material, able to withstand sub-freezing temperatures and includes a substantially vertical outer wall2314, an inner wall2324, a top surface or shoulder2318extending from the outer wall to the inner wall, a bottom friction surface2326extending from the bottom of the inner wall to the bottom of the outer wall, and an insert2375including at least a portion of a ring and having upper wall elements2320extending upward therefrom. The inner wall2324defines an inner hollow2330that may receive the toe of a hockey stick for purposes of manipulation of the marker (as previously described). The inner hollow2330is shown as a modified frustoconical shape that presents a lower concave area and an upper area that is conical or slightly convex; which may also be called "plunger"-shaped.
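The lateral-displacement criterion recited above (no sliding under a 2 newton horizontal force applied halfway up the outer wall) can be sanity-checked against the marker weights given earlier. The sketch below is a simplified rigid-body sliding check only, assuming the marker behaves as a block on flat ice with an effective coefficient of friction; it ignores tipping and the way points, blades, abrasives or a frozen bond actually engage the ice, and the weight values are simply the three-quarter pound to two pound range stated for the marker.

# Simplified sliding check for the 2 N lateral-force criterion.
# Assumes a flat-contact friction model; real markers rely on points,
# blades, abrasives or a frozen bond rather than plain friction.

G = 9.81            # gravitational acceleration, m/s^2
LB_TO_KG = 0.4536   # pounds to kilograms

def min_friction_coefficient(weight_lb, lateral_force_n=2.0):
    """Effective friction coefficient needed so the marker does not slide."""
    normal_force_n = weight_lb * LB_TO_KG * G
    return lateral_force_n / normal_force_n

for weight in (0.75, 1.0, 1.5, 2.0):   # marker weights from the description, in pounds
    mu = min_friction_coefficient(weight)
    print(f"{weight:4.2f} lb marker: needs effective mu >= {mu:.2f}")

Under this simplification, a three-quarter pound marker needs an effective coefficient of about 0.6, which is above typical values for smooth rubber on ice; this is consistent with the use of points, blades, abrasive treads, or a wettable textile on the bottom wall rather than a smooth surface.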
The lower portion of the marker2310including the inner and outer walls may be solid (as shown), or may form a second hollow between the inner and outer walls as previously described, which may be provided with ballast as previously described. Also, a circumferential ballast may be added to the bottom of the lower portion of the marker2310as described with respect to the embodiments shown inFIGS.21a,22,23, and26d. As best seen inFIGS.19a,19b, and19c, the insert2375includes a two-part ring2377with a smaller part2377aand a larger part2377b, and with the larger part2377bhaving one or more living hinges2377cdefined therein, and with upper wall elements2320extending upward from both parts of the ring2377. In addition, as shown best inFIGS.17c,18a,18c,19c, and20d, part2377aof the ring defines arcuate cuts2378a,2378bon either side of an upper wall element2320which are in the shape of an arrow head, and part2377bof the ring includes two arrow-heads2379a,2379bon either end of part2377bthat are generally directed toward each other. Adjacent the cuts2378a,2378b, the ring part2377aincludes locking elements2378c(FIG.20d) with a larger base2378dhelping define a flat ledge2378e. As seen best inFIGS.19cand20a, the top wall2318(FIG.19b) or shoulder of marker2310defines a receiving area2382for the ring2377. Receiving area2382has a bottom surface2382a, and rims2382bdefining notches2382c. The receiving area2382receives and engages the ring2377of the insert2375such that the ring portion2377and the receiving area2382have a snap fit engagement (with rims2382bsitting above the flat ring2377). Ring portion2377ais shown with nubs or protrusion2385that align with notches2382cin the rims such that the smaller ring portion2377ais fixed in a specific location in the receiving area2382. When ring portion2377ais pushed into the receiving area2382, the ring portion2377awill force the rims2382bto deform between notches2382cso that the ring portion2377acan snap into place. However, because the ring portion2377aincludes locking elements2378cwith flat ledges2378e, the ring portion2377ashould be fixed in place with locking elements2378clocated beneath a flat underside surface of the corresponding portion of rim2382b. As seen best inFIG.19c,FIG.20aandFIG.20b, ring portion2377balso includes a nub or protrusion2385that is intended to align with a notch2382cin a rim2382blocated at an upper wall element2320marked B inFIG.19c. Adjacent the protrusion, ring portion2377bincludes a locking element2378chaving a flat ledge2378ewhich acts to lock a middle portion of ring portion2377bin place across from ring portion2377a. However, ring portion2377bextends around an arc of about 280°, and except for the middle locking portion presents rounded rims2390(on both the inner and outer sides) that may push past rims2382bof upper surface2318and be held in place. Because rims2390are rounded, they may be more easily extracted from the rounded underside surface of corresponding portions of rims2382b. It is also noted that ring portion2377bincludes a plurality of living hinges2377c. With the provided arrangement, ring portion2377bmay be considered to include a fixed area at upper wall element2320marked B and two arm portions2399a,2399bextending from living hinges2377csurrounding respective sides of upper wall element2320(marked B). 
It is possible to lift arms2399a,2399bfrom any of the living hinge locations above the rims2382bas seen inFIGS.18a-18c(only arm2399ashown lifted) so that the arrows2379a,2379bextend above the top surface2318and point in their respective directions and are easily viewable. The inner wall2324defines an inner hollow2330that may receive the toe of a hockey stick for purposes of manipulation of the marker. The inner wall and outer wall may be the inner and outer walls of a solid frustoconical or tapered body, or may be spaced from each other to define a second hollow that may be provided with ballast as described above with respect to other embodiments. The portable marker2310may be stacked on another portable marker2310. In embodiments, the insert2375, or a portion thereof, may be transparent to serve as a window covering text or graphics (not shown) inserted beneath the insert2375and on top of the top wall2318. Generally, the arrows2379a,2379band a portion of arms2399a,2399badjacent the arrows will be visible and not be transparent so that they may serve to direct skaters (or others partaking in sporting activities if the marker2310is used for other sports such as field-hockey, lacrosse, soccer, etc.). If desired, one of the arms may be colored red and the other colored green for directing activities in one direction or another. Accordingly, generally, the insert2375may comprise two or more pieces of different colors or designs to serve as directional or instructional indicators. FIGS.21a-21cprovide details of yet another embodiment of a portable marker2410, whereFIGS.21a-21care respectively a cross-sectional view, a top perspective view, and a bottom perspective view of marker2410. Marker2410is comprised of a durable, water-resistant material, able to withstand sub-freezing temperatures and includes an outer wall2414having a generally vertical upper area2414aand a generally vertical lower area2414bconnected to the upper area by a shoulder2414c. The lower area2414bincludes an annular channel2454that is configured to house ballast2442, as described in greater detail hereinbelow. The annular channel2454is defined by an outer peripheral surface2454a, an inner surface2454bhaving a smaller diameter than the outer peripheral surface2454a, and the shoulder2414cextending between the outer peripheral surface2454aand the inner surface2454b. In the embodiment, the outer peripheral surface2454a, inner surface2454b, and the shoulder2414care unitary with the outer wall2414. The upper area2414aof the outer wall2414defines a plurality of circumferentially spaced wall elements2420. The wall elements2420define notches2420atherebetween that are configured to receive the shaft of a hockey stick. The marker2410is also comprised of a bottom friction surface2426coupled to the bottom of the lower area2414bof the outer wall2414. In the embodiment shown inFIG.21a, the friction surface2426is coupled to the lower area2414bof the outer wall2414via an annular ballast channel2450, which is received into and secured to the annular channel2454formed in the lower area2414bof the outer wall2414. The ballast channel2450and the annular channel2454may be configured to securely connect in various ways, including a press fit, a snap fit, a thread fit, a weld, or with glue. The marker2410is also comprised of an inner wall2424and a top surface or shoulder2418extending from the upper area2414aof the outer wall2414to the inner wall2424. The inner wall2424is circumferentially spaced from an inner surface of the outer wall2414.
The inner wall2424defines a central opening2421and an inner hollow2430that may receive the toe of a hockey stick for purposes of manipulation of the marker (as previously described). FIGS.22and23show alternate embodiments of the marker2410. InFIG.22, a portion of the outer shell2414of marker2410is shown connected to a modified ballast channel2450′ having a friction surface2426′ that is integrally formed into a bottom wall of the channel2450′. For example, the friction surface2426′ may be molded with the channel2450′. InFIG.23an alternate marker2410′ has a modified outer shell2414′ that has an annular channel2454′ filled with a material2460having a lower or bottom surface that contains embedded granules2462. The material2460may be formed from a liquid that hardens or otherwise cures in the annular channel2454′ or in a mold. The granules may comprise a high friction material such as silicon carbide, aluminum oxide, allyl diglycol carbonate or other suitable materials and can be placed or set onto the surface of the liquid so that the granules are embedded into the surface to provide a rough, high friction surface texture to resist sliding on ice. FIG.24ais a top perspective view of another embodiment of a portable marker2510, which is substantially the same as marker2410, but is modified as described hereinbelow. InFIGS.24a-24cand25a-25c, elements corresponding to marker2410are incremented by "100". The marker2510includes one or more different means for securing an elevated shaft of a hockey stick within opposed pairs of upper area notches2520a′,2520a″,2520a′″, and2520a″″. While four different securing means are shown, it is noted that any or all of the securing means may be the same or different.FIG.24ashows a shaft of a hockey stick secured within a first notch2520a′ and a second notch2520a″. An elastic strap2570extends across the first notch2520a′ and a moveable (slidable in a vertical direction) inelastic strap2572extends across the second notch2520a″, which is diametrically opposite the first notch2520a′. As shown in greater detail inFIG.24c, the elastic strap2570extends from circumferentially spaced wall elements2520defining the first notch2520a′. Also, the slidable strap2572extends from circumferentially spaced wall elements2520defining the second notch2520a″. In embodiments, the slidable strap2572may be spring biased to clamp down on the shaft of the hockey stick disposed between the slidable strap and the second notch. FIGS.25a-25cshow the hockey stick disposed in a third notch2520a′″ and a fourth notch2520a″″. The third notch2520a′″ includes an elastic, flexible liner2574that lines the third notch2520a′″ and is configured to compress against and grip the outer surface of the shaft of the hockey stick to prevent the shaft from coming out of the third notch2520a′″. The liner2574may be formed from a high friction material such as rubber. The fourth notch2520a″″ includes gripping protrusions2576that extend circumferentially from the sidewalls of the wall elements2520that define the fourth notch2520a″″. The protrusions2576may be formed of high friction material, such as rubber, or they may be integrally molded into the notch-side walls of elements2520. FIGS.26aand26bare respectively a top perspective view and a detail view of another embodiment of a portable marker2610with shallow upper area notches and straps for securing a shaft.FIG.26cis a sectional assembly view of the portable marker shown inFIGS.26aand26b.
InFIGS.26a-26celements corresponding to those of marker2410are shown incremented by "200". Marker2610is comprised of a durable, water-resistant material, able to withstand sub-freezing temperatures and includes an outer wall2614having a generally vertical upper area2614aand a generally vertical lower area2614bconnected to the upper area by a generally horizontal shoulder2614c. The lower area2614bdefines an annular channel2654that is configured to house ballast2642, as described in greater detail hereinbelow. The annular channel2654is defined by an outer peripheral surface2654a, an inner surface2654bhaving a smaller diameter than the outer peripheral surface2654a, the shoulder2614c, and a bottom annular surface2614dof the lower area2614b. The marker2610also includes a bottom friction surface2626coupled to the bottom surface2614d. The outer peripheral surface2654aand the shoulder2614care integrally formed as a snap fit ring2660having an L-shaped profile. The inner surface2654bof the annular channel2654and the bottom surface2614dare integrally formed and have snap fit connectors that are configured to snap together with mating snap fit connectors of the ring2660to enclose the ballast2642. The upper area2614aof the outer wall2614defines a plurality of circumferentially spaced wall elements2620. The wall elements2620define notches2620atherebetween that are configured to receive the shaft of a hockey stick. Also, the upper area2614adefines a plurality of shallow recesses2680that are configured to receive hook and loop fasteners2682. The corners of each notch2620adefine radial slots2614ethrough the outer wall2614. The marker2610includes a strap2684having a central portion2684aand side flaps2684bthat extend from the central portion2684a. The strap2684is connected to the outer shell2614by disposing the central portion2684aunder the notch2620aand routing the flaps2684bthrough the slots2614e. Each flap2684bhas a hook and loop fastener2686attached to opposite sides of the flap2684b, which is configured to align with and attach to the hook and loop fasteners2682.FIGS.26aand26bshow the straps in a first open position lying flat.FIG.26dshows the straps in a second configuration in which the flaps are connected together by their hook and loop fasteners around a shaft of a hockey stick disposed in diametrically opposed notches2620a. The marker2610is also comprised of an inner wall2624and a top surface or shoulder2618extending from the upper area2614aof the outer wall2614to the inner wall2624. The inner wall2624is circumferentially spaced from an inner surface of the outer wall2614. The inner wall2624defines a central opening2621and an inner hollow2630that may receive the toe of a hockey stick for purposes of manipulation of the marker (as previously described). FIGS.26cand26eshow an optional removable directional collar2690that is shown attached to the marker2610. The collar2690is shown attached to the upper portion2614aof the outer wall2614. In the example shown, the collar2690is seated on a frustoconical surface of the upper portion2614aof the outer wall2614. The collar2690may have cutouts in the shape of arrows, as shown inFIGS.26cand26e, so that when the collar2690is attached to the marker2610, the color of the underlying outer wall2614is visible. Preferably, the color of the collar2690is distinguishable from the color of the outer wall2614so that the arrows are visible.
Alternatively, the collar may be a solid flexible ring that is printed or painted with, or otherwise bears, directional markings, such as arrows. The collar2690is flexible so that it can be inverted inside out to change the directionality of the markings on the marker2610. As an alternative to the flexible removable collar2690described above with fixed indicia or markings, another collar having a dry-erase or other erasable writing or marking surface may be attached in place of the flexible collar. A user can write and re-write directional or any other markings on the writing surface with erasable dry-erase markers. In embodiments, any of the markers described herein may include magnets or other couplers to couple the markers to other structures, such as a steel frame of a hockey goal above the ice surface. Such positioning can permit the markers to be used for hockey target practice, either presenting locations at which a puck should be aimed (e.g., a top corner of the goal post), or presenting locations where a hockey goalie or defenseman is expected to block a shot (e.g., at the foot of the goal). For example, in one embodiment, magnets may be located in a lower area of the peripheral wall of a portable marker and such magnets may be coupled to the steel frame of a hockey goal at locations above the ice surface. In other embodiments the peripheral wall may be provided with hook and loop fastener elements (e.g., VELCRO®—a trademark of Velcro BVBA). Thus, buttons or strips of hook fasteners could be located at one, two, or more locations around the periphery of the lower area of the outer wall, and buttons or strips of loop fasteners could be located at one, two, or more locations around the periphery of the lower area of the outer wall so that the hook fasteners or loop fasteners of one marker could engage the loop fasteners or hook fasteners of another marker. There have been described and illustrated herein several embodiments of a portable marker and a method of its use. While particular embodiments have been described, it is not intended that the invention be limited thereto, as it is intended that the invention be as broad in scope as the art will allow and that the specification be read likewise. Thus, while particular materials have been disclosed, it will be appreciated that other materials may be used as well. Also, while portable markers having a round cross-section were described, it will be appreciated that the markers could be octagonal, square, or of other cross-section. Accordingly, the term "circumference" as used herein is to be understood broadly to refer to the periphery of the marker, such that the circumference of a square marker would be equal to four times the measure of one side. It will therefore be appreciated by those skilled in the art that yet other modifications could be made to the provided invention without deviating from its spirit and scope as claimed.
37,667
11857858
DETAILED DESCRIPTION Aspects of the present disclosure are directed to a device having a set of tubular bodies configured to be assembled as a unit and easily separable. The device will be described herein in one exemplary context of a sports training device, and more specifically, as a training relay baton. It will be understood that aspects of the disclosure can have general applicability, including in other sports, training, or modeling environments. The relay baton in one implementation can be utilized in a sports environment, for example as part of a relay race in which a first user passes the baton to a second user while running, jumping, or the like. A typical relay baton is a single-piece baton that can be passed from one runner to another. A successful handoff of the baton without dropping or fumbling is key to minimizing the time needed to complete the race. Aspects of the disclosure provide for a training relay baton that is easily separated during handoff, for example during a training race, such that each participant in the handoff is able to analyze and improve technique regarding grip, timing, aim, or the like. All directional references (e.g., radial, axial, upper, lower, upward, downward, left, right, lateral, front, back, top, bottom, above, below, vertical, horizontal, clockwise, counterclockwise) are only used for identification purposes to aid the reader's understanding of the disclosure, and do not create limitations, particularly as to the position, orientation, or use thereof. Connection references (e.g., attached, coupled, connected, and joined) are to be construed broadly and can include intermediate members between a collection of elements and relative movement between elements unless otherwise indicated. As such, connection references do not necessarily imply that two elements are directly connected and in fixed relation to each other. In addition, as used herein, "a set" of elements can include any number of the respective elements, including only one element. Furthermore, as used herein, a "tubular" element will refer to any element having a generally elongated geometric profile. Such "tubular" elements can have a cross-sectional profile that is round, square, triangular, rounded with one or more corners, symmetric, asymmetric, or irregular, in non-limiting examples. Such "tubular" elements can also be hollow, solid, or a combination thereof. The exemplary drawings are for purposes of illustration only and the dimensions, positions, order and relative sizes reflected in the drawings attached hereto can vary. FIG.1illustrates one exemplary separable device in the form of a training relay baton1, with a partial portion of the baton cut away to show the inside of the baton1. The relay baton1includes a set of tubular bodies. In the example shown, the relay baton1includes a first tubular body10and a second tubular body20. Any number of tubular bodies can be included in the relay baton1, including three or more tubular bodies. The first and second tubular bodies10,20can be formed of any suitable material, including aluminum, plastic, carbon fiber, or the like, or combinations thereof. The first tubular body10extends between a first distal end11and a first coupling end12. The first tubular body10also defines a first outer surface13and a first inner surface14. The second tubular body20extends between a second distal end21and a second coupling end22and defines a second outer surface23and a second inner surface24.
While the first distal end11and second distal end21are shown as having a rolled finish, this is for illustrative purposes only. The first and second distal ends11,21can have any geometry or profile, including flat edges, bevels, curves, or the like. When assembled, the second tubular body20can at least partially surround the first tubular body10such that the first tubular body10is at least partially received within the second tubular body20. In one non-limiting example, the bodies10,20can be coupled by a press-fit or friction-fit mechanism between the first inner surface14and the second outer surface23at the respective first and second coupling ends12,22. Additionally or alternatively, the tubular bodies10,20can be coupled by any suitable mechanical fastener or chemical fastener, including pins, bolts, screws, latch and catch mechanisms, adhesives, or the like, in non-limiting examples. Turning toFIG.2, a side cross-sectional view of the first tubular body10is shown. A first internal width15can be defined within the first tubular body10. The first internal width15can be variable. In one example, the first internal width15can be constant for a portion of the length of the tubular body10and transition to a decrease in a direction toward the first coupling end12. In other examples, the first internal width15can be constant or increasing within the first tubular body10. The first outer surface13of the first tubular body10can also include at least one tapered region. In the example shown, the first outer surface13includes a first tapered region16and a second tapered region17. The first tapered region16can extend fully to the first coupling end12in one example. The first tapered region16can also abut the second tapered region17, though this need not be the case. Optionally, the first inner surface14can have a tapered or angled geometry. The first and second tapered regions16,17can collectively form an overall tapered region18along the first outer surface13. It is contemplated that the first and second tapered regions16,17can have differing slopes. In this manner the first outer surface13can include a non-continuous taper. A first wall thickness19can be defined between the first outer surface13and first inner surface14. In the illustrated example, the first wall thickness19is variable along the first coupling end12. It is contemplated that the first tapered region16or the second tapered region17can be at least partially formed by a varying wall thickness19. Additionally or alternatively, the first tapered region16or the second tapered region17can be at least partially formed by a contour or angle of the first inner surface14, or a constant wall thickness19, or combinations thereof. The first tubular body10can have any suitable dimension, sizing, or relative proportion. In non-limiting examples, the first wall thickness19can be between 1 mm and 3 mm, a length of the first coupling end12can be between 10-30% an overall length of the first tubular body10, and the first internal width15can be between 20 mm and 40 mm. FIG.3illustrates a side cross-sectional view of the second tubular body20. A second internal width25can be defined within the second tubular body20. The second internal width25can be variable. In one example, the second internal width25can be constant for a portion of the length of the tubular body20and transition to a decrease in a direction toward the second coupling end22. In other examples, the second internal width25can be constant or increasing within the second tubular body20. 
The second inner surface24of the second tubular body20can also include at least one tapered region. In the example shown, a third tapered region26can be defined along the second inner surface24. The third tapered region26can be located at the second coupling end22as shown. A second wall thickness27can be defined between the second outer surface23and second inner surface24. In the illustrated example, the second wall thickness27decreases in a direction toward the second coupling end22. In this manner, the third tapered region26can be at least partially formed by a decreasing wall thickness27. Additionally or alternatively, the third tapered region26can be at least partially formed by a contour or angle of the second inner surface24, or a constant wall thickness27, or a variable wall thickness27, or combinations thereof. The second tubular body20can have any suitable dimension, sizing, or relative proportion. In non-limiting examples, the second wall thickness27can be between 1 mm and 3 mm, or a length of the second coupling end22can be between 10-30% of an overall length of the second tubular body20, or the second internal width25can be between 20 mm and 40 mm. Turning toFIG.4, the assembled relay baton1is illustrated in cross-section at the first and second coupling ends12,22. The sizes or thicknesses of the walls of the first and second tubular bodies10,20are exaggerated for visual clarity. When assembled, the first tubular body10can be coaxial with the second tubular body20. The third tapered region26of the second tubular body20can radially overlie at least one tapered region of the first tubular body10. In the example shown, the third tapered region26radially overlies both the first tapered region16and the second tapered region17, though this need not be the case. The first tapered region16can define a first angle31with respect to a longitudinal axis40extending through the relay baton1. The second tapered region17can define a second angle32with respect to the axis40. In the example shown, the first angle31differs from the second angle32. More specifically, the first angle31can be positive and the second angle32can be negative with respect to the axis40. The second angle32can also be greater in magnitude than the first angle31. In addition, the overall tapered region18can define an overall angle34with respect to the axis40. The overall angle34can result from the combination of the first angle31and the second angle32. In addition, the third tapered region26can define a third angle33with respect to the longitudinal axis40. The first outer surface13of the first tubular body10can abut the second inner surface24of the second tubular body20when assembled. The third tapered region26can align with the overall tapered region18. Put another way, the third angle33formed by the second tubular body20can be equal to the overall angle34formed by the first tubular body10. In addition, the first outer surface13can form discrete points of contact with the second inner surface24. A gap50can be formed between the first outer surface13and the second inner surface24. In the example shown illustrating one possible implementation, the first tapered region16contacts or abuts the second inner surface24at a first point of contact41, and the second tapered region17contacts or abuts the second inner surface24at a second point of contact42. The gap50can be formed by the relative positioning of the first, second, and third tapered regions16,17,26.
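The statement above that the overall angle results from the combination of the first and second angles can be read geometrically: over each tapered region the outer radius changes by that region's axial length times the tangent of its angle, and the overall angle is then the angle of the straight line spanning both regions. The sketch below illustrates only that one reading; the lengths and angles used are placeholder values, not dimensions from this disclosure.

import math

def overall_taper_angle(l1_mm, a1_deg, l2_mm, a2_deg):
    """Angle (degrees) of the chord spanning two conical taper regions.

    l1_mm, l2_mm: axial lengths of the first and second tapered regions.
    a1_deg, a2_deg: taper angles relative to the longitudinal axis, signed
    so that a positive angle reduces the radius and a negative angle
    increases it (matching the sign convention described above).
    """
    radial_change = (l1_mm * math.tan(math.radians(a1_deg))
                     + l2_mm * math.tan(math.radians(a2_deg)))
    return math.degrees(math.atan2(radial_change, l1_mm + l2_mm))

# Placeholder values: a 20 mm region at +3 degrees followed by a
# 10 mm region at -5 degrees yields a shallow net (overall) taper.
print(round(overall_taper_angle(20.0, 3.0, 10.0, -5.0), 3))

Under this reading, matching the third angle of the second tubular body to the overall angle, as described above, is consistent with contact being limited to the two discrete points while the gap is preserved between them.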
More specifically, the gap50can be formed by the first tapered region16being directed away from the first inner surface14and the second tapered region17being directed toward the first inner surface14. The gap50can extend at least between the first and second points of contact41,42. A breakaway interface60for the relay baton1can be at least partially defined by the first and second points of contact41,42and gap50. For example, friction at the first and second points of contact41,42between the first and second coupling ends12,22can hold the ends12,22together in assembly while being spatially limited to the first point of contact41and second point of contact42. A perturbation, rotation, or relative movement of the first tubular body10compared to the second tubular body20can cause the first and second tubular bodies10,20to separate. The breakaway interface60can provide for ease of separation of the bodies10,20while still allowing sufficient coupling to use the assembled relay baton1as a singular unit. In this manner, the non-constant taper of the first tubular body10abutting or radially overlying a constant taper of the second tubular body20can form the breakaway interface60. In addition, the gap50extending at least between the first and second points of contact41,42can at least partially define the breakaway interface60. Additionally or alternatively, the breakaway interface60can include multiple gaps between the first tubular body10and the second tubular body20. In such a case, the first outer surface13can form multiple, discrete points of contact with the second inner surface24, thereby forming multiple gaps therebetween. Any number of gaps can be provided. In one non-limiting example of operation, a first user can hold or grasp the relay baton1near one end of the baton1, such as the first tubular body10. The first user can perform a practice or training relay race in which the baton1is held by the first user while running toward a stationary second user. The first user can hold out the baton1while grasping the first tubular body10. During a handoff operation, the second user can grasp the second tubular body20and begin running while the first user stops. The breakaway interface60between the first and second tubular bodies10,20can provide for ease in separation of the tubular bodies10,20during the handoff operation. After the handoff is completed, the first and second users can analyze elements of the training race such as grip location, speed, coordination or the like based on separation of the relay baton1into its multiple elements. Additionally or alternatively, the first outer surface of the first tubular body can have a curved, ridged, or sinusoidal geometric profile forming a breakaway interface between the first and second tubular bodies having multiple gaps. Additionally or alternatively, either or both of the first tubular body and the second tubular body can be solid. In one example, the second coupling end can have a solid form and be provided with a slot configured to receive the first coupling end. In another example, the second coupling end can be hollow with a remainder of the second tubular body being solid. In still another example, the first tubular body or the second tubular body can have both hollow and solid interior portions. Such examples can provide for customized weight or balancing of the first and second tubular bodies. Additionally or alternatively, three or more tubular bodies as described herein can be coupled together to form a separable component.
In such a case, aspects of the disclosure can provide for one or multiple breakaway interfaces between any or all of the tubular bodies forming the assembled component. In one example, two tubular bodies can be rigidly secured together and coupled to a third tubular body by way of a breakaway interface. In another example, multiple tubular bodies can each be connected to one another by way of a breakaway interface, wherein an applied force or other perturbation at some location along the assembled component can be made visible or otherwise indicated by way of separation between adjacent tubular bodies at that location. Aspects of the disclosure provide for a releasable coupling between assembled bodies by way of reduced friction between such assembled bodies, including for use in a relay baton for training or analysis purposes. In the context of a relay baton, the reduction in surface contact provides for a breakaway interface and allows for improved, specific focus on handoff technique during training compared to traditional relay batons. Such a separable component can also be utilized in a variety of environments, including other physical modeling, training, or simulation environments where ease in separation and assembly improves process efficiencies. Many other possible aspects and configurations in addition to that shown in the above figures are contemplated by the present disclosure. To the extent not already described, the different features and structures of the various aspects can be used in combination with each other as desired. That one feature is not illustrated in all of the aspects is not meant to be construed that it is not included, but is done for brevity of description. Thus, the various features of the different aspects can be mixed and matched as desired to form new aspects of the disclosure, whether or not the new aspects are expressly described. All combinations or permutations of features described herein are covered by this disclosure. This written description uses examples to disclose aspects of the disclosure, including the best mode, and also to enable any person skilled in the art to practice the aspects of the disclosure, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the disclosure is defined by the claims, and can include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal languages of the claims.
16,787
11857859
DETAILED DESCRIPTION Referring toFIGS.1and1A, an adaptive basketball shooting device10according to one implementation includes an outer frame12and, within the frame, a conveyor belt14configured to be actuated by a user to retrieve a ball from a surface on which the wheelchair is positioned. The frame12is mounted on wheels16, which support the weight of the device and allow smooth movement in any direction, and includes a platform17that is configured to be readily attached to a user's wheelchair. The wheels16minimize stress on the wheelchair and maintain fluid motion while moving across the court. The wheels also prevent the chair from leaning to one side or tipping, keeping the occupant safe, level and comfortable, and make it easier to attach the device10to a variety of chairs and transport the device when it is not mounted on a wheelchair. In the embodiment shown the device includes four wheels which all swivel and are made of a non-marking material to prevent scuffing of a court. As will be discussed with reference toFIGS.2-2B, a portion of the frame12defines a track that controls vertical motion of the ball by the conveyor belt14until the ball reaches a position where it can be engaged by a shooter wheel18. The shooter wheel18, in cooperation with an upper portion of the frame, is configured to eject the ball upward from the device when actuated by the user. A control panel20(FIG.1A) is provided to allow the user to control operation of the conveyor belt and shooter wheel to shoot a basket. The conveyor belt, shooter wheel and controller may be powered by a portable power supply (not shown), e.g., a battery, that may be mounted on the platform17or otherwise attached to the frame12or, alternatively, by the power supply of a powered wheelchair. The various components of the device and the manner in which the device is used will be discussed in detail below. FIGS.2-2Billustrate, from various angles, the frame12that is used in the shooting device shown inFIGS.1-1A. Frame12is formed by two outer plates24and26, which are connected by a plurality of connecting rods28, and which together define a lower transport section30and an upper shooting section32. Interposed between the outer plates and held in predetermined lateral positions by the rods28are a pair of inner rails34and36. These rails act as guide rails, as will be discussed below, and have lower sections34A,36A (FIG.2) that correspond to the lower transport section of the frame, and upper arcuate sections34B,36B that correspond to the upper shooting section of the frame. The lower sections34A,36A include arcuate surfaces37A,39A (best seen inFIGS.2and5) that initiate contact between the ball and the lowermost surface of the conveyor belt14and help to guide the ball from the surface into contact with the conveyor belt. Referring toFIG.5, the distance D1from the lowermost surface of the wheel48that drives the conveyor belt14to the lower edge of the arcuate surfaces37A,39A is selected so that the conveyor belt14will contact the ball on the ground or floor with sufficient pressure to draw the ball up into the frame. For example, D1may be from about 7.5 to 9.5 inches. Frame12also includes a pair of parallel opposed support plates38,40that support the shooter wheel, the drive system for the shooter wheel (shown inFIG.4and described below), and the wheels that drive and position the conveyor belt (shown inFIG.3and described below). 
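The distance D1 determines how firmly the belt engages a ball resting on the floor. Reading D1 as the gap at the pickup point between the belt and the lower edges of the arcuate guide surfaces, and assuming the approximately 9.5 inch basketball diameter given later in the description, a rough interference check is sketched below; it ignores ball inflation and belt deflection.

BALL_DIAMETER_IN = 9.5          # approximate standard basketball diameter (from the description)

def pickup_interference(d1_in, ball_diameter_in=BALL_DIAMETER_IN):
    """Nominal squeeze (inches) between the belt and a ball sitting on the floor."""
    return ball_diameter_in - d1_in

for d1 in (7.5, 8.5, 9.5):      # spanning the stated D1 range, in inches
    print(f"D1 = {d1} in -> interference ~ {pickup_interference(d1):.1f} in")

At the low end of the D1 range the ball is squeezed by roughly 2 inches, while at the high end the belt only just reaches the ball, so the lower end of the range gives a firmer initial grip under this simplified reading.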
As shown inFIG.5, the spacing between the support plates38,40and the opposed inner rails34,36is selected so that the distance D2between the conveyor belt surface and the opposed surfaces of lower sections34A,36A that contact the ball will be substantially equal to the distance D3between the surface of the shooter wheel18and the opposed surfaces of upper sections34B,36B that contact the ball. If the device is to be used with a standard-sized basketball, D2and D3are between about 7.5 and 9 inches, for example between 8.25 inches and 8.5 inches. This spacing distance, which is less than the diameter of a standard-sized basketball (approximately 9.5 inches depending on inflation), ensures that sufficient pressure is applied between the conveyor belt and the inner rails34,36to cause the ball to travel vertically within the frame. Distance D2is also selected to apply sufficient pressure to the ball to hold the ball in a desired vertical positioning within the frame when the conveyor belt14is switched off during upward movement of the ball. This allows the user to use the conveyor to pick up the ball at a first location and then drive to a second location to shoot the ball with the ball securely held in place within the frame. Referring toFIG.2B, the spacing S1between rails34,36and the spacing S2between outer plates24and26are also important. Spacing S1is selected so that the rails will act as a track to guide the ball and is generally from about 3 to 5 inches for a standard-sized ball. Spacing S2is selected to be approximately equal to the diameter of the ball (e.g., from about 9.5 inches to 11 inches), so that the ball will stay positioned against the conveyor belt and not have excessive lateral movement during its vertical travel through the frame. The upper sections34B,36B define a shooter track that, in cooperation with the rotational force of the spinning shooter wheel18, ejects the ball from the device. Referring toFIG.5, the radius of curvature R of the inner surface of upper sections34B,36B, and the height H1of the upper sections combine to define the angle at which the ball will be ejected from the device (the release angle). The radius of curvature R may be, for example, from about 10 to 14 inches from the center of the shooter wheel, for example from about 11.5 to 13.5 inches. The height H1may be, for example, from about 16 to 26 inches. The release angle RA is generally selected to be from about 40 to 80 degrees, for example from about 45 to 60 degrees. The overall height of the device, from the bottom of the frame to the highest point on the upper sections (height H2), is typically from about 16 to 36 inches. This combination of dimensions allows a user to shoot a basketball from a wide range of distances from the target, in some implementations from 2 to 100 feet away from the target, depending on the speed of the shooter wheel as selected by the user. Platform17, best seen inFIGS.1A and2B, is used to attach the device10to a user's wheelchair by removing the armrest of the wheelchair and passing the armrest supports through openings42and then replacing the armrest prior to use. The dimensions and configuration of platform17can be modified to adapt the device for attachment to various wheelchair models, or other attachment methods may be used. Platform17may also be used to support a battery (not shown) if one is needed to power the conveyor belt and shooter wheel. FIG.3provides a detailed view of the conveyor belt system44.
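The stated release angles and the 2 to 100 foot shooting range can be related by simple projectile kinematics. The sketch below estimates the launch speed needed to carry a given horizontal distance at a given release angle; it is only an approximation, since it neglects air drag, ball spin, and the height difference between the release point and the rim.

import math

G_FT_S2 = 32.2   # gravitational acceleration in ft/s^2

def launch_speed_ft_s(distance_ft, release_angle_deg):
    """Speed needed to carry a projectile distance_ft on level ground (no drag)."""
    return math.sqrt(distance_ft * G_FT_S2 / math.sin(math.radians(2 * release_angle_deg)))

for distance in (2, 15, 50, 100):            # feet, spanning the stated shooting range
    v = launch_speed_ft_s(distance, 55)      # 55 degrees, within the 45-60 degree example range
    print(f"{distance:3d} ft shot at 55 deg: ~{v:4.1f} ft/s launch speed")

Under this approximation, shot distance for a fixed release angle is then controlled primarily through the user-selected shooter wheel speed, as described below.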
Conveyor belt system44includes the conveyor belt14discussed above, which may for example be made of polyurethane, a belt-driven lower wheel48and a non-driven upper wheel50. Wheels48and50are mounted between support plates38and40discussed above. In some implementations, the material of the conveyor belt surface (the surface that contacts the ball) has a relatively high coefficient of friction to prevent slippage of the ball as it is conveyed upwards. However, in some implementations this is not necessary, for example if sufficient force is applied to the ball so that slippage is minimized. The lower wheel48is driven by drive belt52, which in turn is driven by a motor54. Motor54, which may be, for example, a 12V electric motor, is configured to be actuated by the user, as will be discussed below, and to run the conveyor belt at a speed of from about 20 to 40 ft/min. This belt speed range can be accomplished, for example, by having the motor spin at about 90 to 110 RPM and the drive roller spin at about 50 to 60 RPM. In the implementation shown, the drive belt52is tensioned by a spring tensioning assembly56, however this can be accomplished by other belt tensioning techniques. The conveyor belt may be, for example, about 1 to 4 inches wide. If the belt is wider, the edges of the belt will not contact the ball, whereas if the belt is narrower it may not create enough friction to lift the ball. A positioning roller58, best seen inFIG.1, is mounted between support plates38,40just below the upper wheel50. This positioning roller pushes the belt outward, towards the opposed surfaces of the rails34,36near the transition between the lower sections of the rails and the upper arcuate sections, pushing the ball up into contact with the shooter wheel and shooter track. In some implementations, the positioning roller deflects the belt by about 0.25 to 0.75 inch. The contact length of the conveyor belt14is selected to lift the ball from the floor and convey it vertically to a point where the ball contacts the shooter wheel18. The length of belt52can be, e.g., about 24 to 36 inches, for example from about 30-32 inches. The contact length of the belt with the ball, i.e., the distance from the top of the upper roller to the bottom of the lower roller, can be, for example, about 14 to 18 inches. The contact length is generally selected to allow enough room for the ball to be held in the lower track until it is lifted into the shooter wheel. The shooter wheel system60is shown in detail inFIG.4. Shooter wheel system60includes the shooter wheel18, a shaft62on which the shooter wheel is mounted, a belt64to drive the shooter wheel, and a pair of motors66,68to drive the belt64. The motors may be, for example, 12V electric motors. The speed of the motors is adjustable by the user, allowing the user to shoot the ball from the device at a desired velocity. For example, the speed of the motors can be adjusted between about 100 and 4000 RPM, e.g., from about 1000 and 3500 RPM. The shooter wheel18has a tire that is configured to grip the ball during shooting. The better the grip, the more efficient the shooter will be and the less chance there will be that slippage between the ball and tire will occur during shooting. The grip provided by the tire is dependent on the material of the tire, which is preferably relatively soft and tacky, and the tire pressure. Preferably the tire is inflated to a relatively low pressure, e.g., between 5 and 20 psi. 
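The belt speed, motor speed, and drive roller speed quoted above are tied together by the drive roller circumference (belt speed equals roller RPM times pi times roller diameter), and the same relation gives the shooter wheel's surface speed from its RPM. The sketch below is only a consistency check: the drive roller diameter is not stated in the description and is back-calculated here, and the 9 inch shooter wheel diameter is an assumed mid-range value from the 6 to 12 inch range given in the following paragraph.

import math

def belt_speed_ft_min(roller_rpm, roller_diameter_in):
    """Linear belt speed for a drive roller of the given diameter (no slip assumed)."""
    return roller_rpm * math.pi * roller_diameter_in / 12.0

def implied_roller_diameter_in(target_belt_speed_ft_min, roller_rpm):
    """Drive roller diameter implied by a target belt speed and roller RPM."""
    return 12.0 * target_belt_speed_ft_min / (math.pi * roller_rpm)

def wheel_surface_speed_ft_s(wheel_rpm, wheel_diameter_in):
    """Tangential surface speed of the shooter wheel."""
    return wheel_rpm / 60.0 * math.pi * wheel_diameter_in / 12.0

# Mid-range values quoted above: ~30 ft/min belt speed at ~55 RPM roller speed.
print(round(implied_roller_diameter_in(30.0, 55.0), 2))   # roughly a 2.1 in drive roller
print(round(belt_speed_ft_min(55.0, 2.0), 1))             # ~28.8 ft/min with a 2.0 in roller
print(round(wheel_surface_speed_ft_s(2000, 9.0), 1))      # ~78.5 ft/s at 2000 RPM, 9 in wheel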
The tire pressure is important because it affects the pressure between the ball and the inner rails. With that being said, the contact between the shooter wheel and rails is affected by both the air pressure of the tire and the air pressure of the ball. The shooter wheel diameter is important because it affects the amount of time when the ball is directly contacting the wheel. In some implementations the shooter wheel is from about 6 to 12 inches in diameter. If it is too small, there will not be enough contact. If too large, it will become bulky and add more weight than necessary. The shooter wheel diameter also affects the motors and how strong they need to be in order to spin the wheel fast enough. As discussed above with reference toFIG.1A, the device10includes a control panel20. Control panel20is in electrical communication with a controller (not shown) which sends signals to the motors discussed above. The controls on the control panel are configured to be easily used by a user with limited mobility and motor control. The controls preferably include a switch that allows the user to actuate the conveyor belt system, a switch that allows the user to actuate the shooter wheel system, and a knob that allows the user to adjust the speed of the shooter wheel. In use, the user first turns on the conveyor belt system just long enough to pick the ball up off the floor and feed it into the frame, and then shuts off the conveyor belt system, at which point the slight interference fit between the ball and frame/belt will hold the ball in the desired vertical position. The user then drives his or her wheelchair to the desired shooting position relative to the basket. When in position, the user turns on the switch to actuate the shooter wheel system, uses the knob to adjust the shooter wheel speed, and finally re-actuates the conveyor belt system to feed the ball into contact with the shooter wheel. The device will then eject the ball from the shooter track towards the basket. This sequence of steps provides the user with the satisfaction of utilizing skill in shooting the basket despite the user's limited mobility and motor control. OTHER EMBODIMENTS A number of embodiments have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. For example, while in the device shown inFIGS.1-5the release angle of the device is fixed, in some implementations the device is configured to allow a user to adjust the release angle. For example, as shown inFIG.6an attachment100can be added to the device. Attachment100includes two rail extensions102,104that serve to adjustably increase the length of the inner rails34B,36B of the upper shooting section32. The rail extensions are connected by a connecting member106for stability. Rail extensions102,104include arcuate slots108,110through which the rail extensions are slidably mounted on the rods28, allowing the attachment100to slide relative to the upper shooting section32in order to lengthen/shorten the arc that determines the release angle of the ball. The movement of the attachment is preferably controlled by an electrical actuator or a motor mounted on the back side of the rails (not shown) to allow adjustment to be accomplished by the user. As the attachment is retracted, the release angle will increase, allowing for closer shots, and conversely as it is extended, the release angle will decrease, allowing for longer shots. 
In such implementations, an additional actuator (e.g., a switch or dial) is provided on the control panel20, allowing the user to adjust the position of the attachment relative to the frame, for example between discrete settings like close/middle/far or with continuous variability. Additionally, while in the implementation described above, pressure is applied to the ball during conveying as a result of the spacing between the conveyor belt surface and the guide rails, in some implementations pressure can be applied by spring-loading the conveyor belt such that the conveyor belt surface is biased towards the ball. The springs would allow for tension adjustment to ensure the ball will not slip while in the lower track. Spring-tensioning would also eliminate the need for the positioning roller, because the spring bias would press the ball into contact with the shooter wheel. The springs may also be configured to allow vertical adjustment of the bottom of the conveyor belt in order to be able to adjust the distance between the bottom of the conveyor belt and the ground to facilitate picking the ball up. If desired, the wheels16can be configured to allow vertical adjustment of the spacing between the bottom of the conveyor belt (and the adjacent rails) and the ground in order to facilitate picking the ball up. Accordingly, other embodiments are within the scope of the following claims.
11857860
DETAILED DESCRIPTION Referring to the drawings, wherein like reference numerals are used to identify like or identical components in the various views,FIG.1schematically illustrates a system10that may be used to provide a user12with an enhanced golf experience before, during, and after a round. In particular, the system10may provide electronic scorecard functionality, may provide in-round tips and yardages, may aggregate and summarize data relating to a golfer's ability, may provide customized pro tips for improving the golfer's game, and may fully integrate with social media and/or other local golfers to share accomplishments and/or challenge community records. The system10includes a centralized server14that is in at least periodic, bidirectional data communication with one or more portable computing devices16(i.e., “client devices16”) via a network18such as the Internet.FIG.1schematically illustrates a system10that includes three portable computing devices16a,16b,16c, with each device being operated by a different respective user12a,12b,12c. It should be appreciated, however, that this is purely illustrative, and the system10may actually operate with a nearly unbounded number of devices16and users12. As generally illustrated inFIG.2, the server14may include one or more processors20that are configured to execute specialized software22to aggregate user data and facilitate a desirable user experience via the client devices16. More specifically, the software22run by the server14may be operative to construct and/or maintain one or more databases, stored within non-volatile memory24, that contain user account data and preferences26, golf course data28, and user play data30. The server14may include a network interface32, and a means for administrative access and management such as a terminal, or direct remote login (e.g., via the network interface32). In one configuration, the portable computing (“client”) device16may be a “smart phone”-style cellular telephone (“smart phone”), or a device with similar mobile data processing and display functionality. As generally illustrated inFIG.3, each client device16may include a processor40in communication with non-volatile memory42, a user interface44, a GPS receiver46, and a wireless radio48that enables two-way communication between the device16and the network18(e.g., the Internet or a cellular telephone network). In one configuration, the user interface44may be, for example, a capacitive touch screen display that includes both a visual display50and a touch-based input digitizer52. The visual display50may be a liquid crystal display (LCD), a light emitting diode display (LED), an organic light emitting diode display (OLED) and/or any similar style display/monitor that can receive a visual data stream from the processor40and display it in a visual manner to the user12. In both the server14and the client device16, each processor20,40may be embodied as one or more distinct data processing devices, each having one or more microcontrollers or central processing units (CPU), read only memory (ROM), random access memory (RAM), electrically-erasable programmable read only memory (EEPROM), a high-speed clock, input/output (I/O) circuitry, and/or any other circuitry that may be required to perform the functions described herein. 
Additionally, the non-volatile memory24,42may include one or more magnetic or solid-state hard drives, solid-state flash memory, or any other similar form of long-term, non-volatile memory that may be used to store program data, user data, course data, software application algorithms, and the like. The present system10may merge aspects of social media with the game of golf in a manner that enables a user to challenge and compete with friends, share course tips and accomplishments, and gain an introspective view of one's own game by comparing his/her own statistical performance with aggregated performance data from golfers across the network18with a similar ability level. Referring toFIG.2, to facilitate the social media aspects of the present system10, the server14is configured to maintain account data26identifying the existence and preferences of each of the plurality of users12. More specifically, the account data26identifies each user12by an account record53, which details the user's preferences54, and identifies other users56that are connected with that user12to define a social network58. In addition to account data26, the server14may maintain golf course data28that includes a listing of all courses nationwide, with each course record providing, for example, a geocoded location for the course, one or more course attributes (cost, slope, rating, etc), scorecard data (yardages, par, handicap, hole tips), and geocoded hole-by-hole data/locations. Finally, the server may maintain user play data30that includes stored play/round data that is indexed to both user account data26and to golf course data28, as well as being date stamped. FIG.4schematically illustrates a method60that may be performed in whole or in part by the system10, or in conjunction with the system14, at the direction of a user12. While the method60generally illustrates several aspects of the system10, it should be understood that each aspect may have its own standalone utility, independent of the other described aspects. The method60generally begins at62, when a user12interfaces with the portable computing device16to search for and/or select a golf course that he or she intends to play. As shown schematically inFIG.5, the searching functionality80may employ a multifactor approach to provide the user12with a prioritized listing of courses (i.e., the “result set82”) that may be displayed via the display50, and that attempt to anticipate a desired course selection and/or suggest courses that may be desirable to the user12. The course searching may generally be performed at the server14via a search engine84that is in digital communication with one or more of the user account data and preferences26, the golf course data28, the user play data30, and the client device16(i.e., via the network18). Prior to the user12entering any search terms or keywords86, the search engine84may generate an initial result set82according to one or more of the following: past courses played90; player preferences92(e.g., cost, availability, favorites); location of golf courses94; current user location96(as determined by the GPS receiver46); and any user-provided ranking preferences/biasing98(e.g., instructions to sort courses by distance). 
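Purely as an illustration of how the pre-search factors listed above might be combined into a single ranking, the following sketch scores each course and sorts the result set. The weight values, the dictionary field names (play_counts, cost_range, lat, lon, name) and the flat-earth distance approximation are assumptions made for this example; they are not taken from the described system.

```python
import math

def course_score(course, user, user_loc, keyword=None,
                 w_history=2.0, w_pref=1.0, w_distance=1.5, w_keyword=3.0):
    """Weighted relevance score for one course; all weights are placeholder values."""
    score = 0.0
    # Courses the user has played before (past courses played 90) rank higher.
    score += w_history * user.get("play_counts", {}).get(course["id"], 0)
    # Preference match (player preferences 92), e.g. cost within the desired range.
    lo, hi = user.get("cost_range", (0, float("inf")))
    if lo <= course.get("cost", 0) <= hi:
        score += w_pref
    # Nearby courses rank higher (course location 94 vs. current user location 96).
    dlat = math.radians(course["lat"] - user_loc[0])
    dlon = math.radians(course["lon"] - user_loc[1]) * math.cos(math.radians(user_loc[0]))
    dist_km = 6371.0 * math.hypot(dlat, dlon)
    score += w_distance / (1.0 + dist_km)
    # Simple substring test standing in for the fuzzy keyword match.
    if keyword and keyword.lower() in course["name"].lower():
        score += w_keyword
    return score

def search_courses(courses, user, user_loc, keyword=None):
    """Return the result set ordered by descending relevance."""
    return sorted(courses, key=lambda c: course_score(c, user, user_loc, keyword), reverse=True)
```

A fuzzy string match would replace the simple substring test once keywords are entered, as described next.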
In one example of a pre-search, the search engine84may use past courses played90to weight courses that the user has repeatedly played (and/or courses with similar attributes to those repeatedly played) higher than those which have been either infrequently played or are markedly different from courses that are typically played. Likewise, in an example, current location96and player preferences92, such as desired cost range, may be used together with the geolocated golf course data94to further weight the result set82that is passed to the user12. By using this multi-factored/weighted search approach, the user12may initially be presented with a listing of courses that have the highest potential relevance (e.g., by location and/or preference) at that moment and for the user's current location. Following any initial presentation of weighted search results82, the user12may enter one or more keywords86that may be used in a fuzzy-logic searching algorithm to adjust the weighting of courses returned by the search engine84via the result set82. Using this approach, for example, following entry of the keyword86, the search engine84may give a stronger preference to course names or locations matching or resembling the entered keyword86. Following the entry of one or more keywords86, the search engine84may update the search results82provided to the client device16. Referring again toFIG.4, after a user12searches for, and selects a golf course that he or she wishes to play at62, the course data record relating to the selected course may be transmitted to the user's client device16. The user12may then be prompted to configure the round and select one or more opponents at64. Configuring the round may include selecting the number of holes to be played100and tee position/difficulty102, such as shown inFIG.6. When selecting the one or more opponents, the user12may have the option of locally entering the names and data for one or more real-time playing partners (e.g., via an “add” button, such as shown at104), linking with one or more networked opponents or playing partners56via the server14, or selecting one or more virtual opponents or previously recorded rounds. In an embodiment where the user12desires to link with one or more networked opponents via the server14(i.e., after the round is initialized), the user12may interface with the server14to locate and select a digital account or profile of a linked user56within the current user's social network58or local community. Selecting the account or profile, the user12may cause a digital message/invitation to be emailed or pushed to a device16associated with the linked user56. The invitation may include a link or virtual button that, when clicked, confirms the linked user's intention to join the round and auto populates electronic scorecard info within client devices16belonging to one or both of the user12and linked opponent56. In a scenario where a desired playing partner is not part of a user's social network58, an option to join the social network58may be provided prior to establishing a networked round. Once a networked opponent has confirmed, the server14may maintain a record of the group composition to facilitate score sharing among the players, enable a virtual leaderboard, and/or to coordinate one or more games. In addition to playing against one or more live opponents (i.e. 
either locally entered or linked through a network interface), the system10may present the user12with the ability to play against one or more preselected virtual opponents or previously recorded rounds (i.e., “challenges”106), such as generally illustrated inFIG.7. The system10may construct and present a list108of potential challenges106that are deemed to be the most relevant to the user12based on the user's ability, chosen course, and/or social network. Potential challenges106may include the best score achieved on the course by the user12, members of the user's social networks, and/or all users of the system10for a specified period of time (e.g., within 3 days, 1 month, 1 year, or all-time). Referring again toFIG.4, once the round is initialized and all competitors are entered into the scorecard (i.e., either locally or via the network), the user12may begin golfing. During the round, the system10may display real-time distances110between the user's location112and one or more virtual targets114or locations116on the course, such as shown inFIG.8. In one configuration, the virtual target114is overlaid onto a geocoded satellite image118of a given hole that is downloaded into the client device16. The virtual target114may be dynamically repositionable by the user12such as by touching the display device50/digitizer52with a finger, and dragging the target114across the screen. As the target114is moved, the distance110between the user's location112and the target114, as well as the distance between the target114and a location116on the course (e.g. the green/hole), may be continuously recomputed and displayed. In addition to providing real-time yardage information, the system10may also enable a user12to provide/receive crowd-sourced hole tips on a hole-by-hole basis. Hole tips may provide useful commentary on how to most strategically play the hole in a message board format, and may include tips on identifying targets to aim for, lies that provide easier approach shots to the green, ideal distances, or other useful information that the golfing community/social network58sees fit to share. As a user approaches a tee box, or manually indexes to the next hole, the hole tips may either automatically display, or may display if a commentary menu120is selected via the display50/digitizer52. In one configuration, available tips may be separated between pro tips and community tips. The system10may weight each received tip according to different classifiers, such as the ability level of the comment submitter, whether or not the submitter is a registered teaching professional, the recency of the tip, and the number of people who found the tip useful (e.g., through views and/or upvotes). Comments may be entered and aggregated at the server14for every hole in the database. Upon request by the user12, a listing of hole tips may then be displayed, where higher weighted (i.e., more reputable and/or recent) tips are closer to the top. Additionally, the server14may include functionality for a user with administrative rights to modify or delete one or more of the tips if they are deemed inappropriate or misleading. Referring again toFIG.4, following each hole, or at the completion of a round, the user12may input his or her scores on a hole-by-hole basis at68. In general, the data entry may include total strokes, number of putts, driving accuracy, and/or penalty strokes taken. 
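Returning to the crowd-sourced hole tips described above, the snippet below is a minimal sketch of how the listed classifiers (submitter ability, teaching-professional status, recency, and upvotes) might be folded into a single ranking weight. The field names, coefficients, and the exponential recency decay are illustrative assumptions only, not the system's actual formula.

```python
from datetime import datetime

def tip_weight(tip, now=None, w_pro=2.0, w_ability=1.0, w_votes=0.1, half_life_days=180.0):
    """Illustrative ranking weight for one crowd-sourced hole tip; coefficients are placeholders."""
    now = now or datetime.now()
    weight = 1.0
    if tip.get("is_teaching_pro"):
        weight += w_pro                                      # registered teaching professionals rank higher
    weight += w_ability / (1.0 + max(tip.get("submitter_handicap", 36), 0))  # stronger submitters rank higher
    weight += w_votes * tip.get("upvotes", 0)                # community approval (views/upvotes)
    age_days = (now - tip["submitted_at"]).total_seconds() / 86400.0         # naive datetimes assumed
    weight *= 0.5 ** (age_days / half_life_days)             # exponential decay favors recent tips
    return weight

def ranked_tips(tips):
    """Most reputable and most recent tips first."""
    return sorted(tips, key=tip_weight, reverse=True)

# Example usage with two hypothetical tips.
tips = [
    {"text": "Favor the left side off the tee", "is_teaching_pro": True,
     "submitter_handicap": 2, "upvotes": 14, "submitted_at": datetime(2023, 5, 1)},
    {"text": "Club up into the wind", "is_teaching_pro": False,
     "submitter_handicap": 18, "upvotes": 40, "submitted_at": datetime(2023, 9, 1)},
]
print([t["text"] for t in ranked_tips(tips)])
```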
At the completion of the round, the system10may use the score data entered at68to compute one or more ability metrics that may include the number of fairways hit, greens in regulation, scrambling percentage, number of putts per hole/round, % one putts, etc. These statistics may be displayed for the user12in the moment, and/or passed to the server14where they may be analyzed/aggregated at70as user play data30. In one configuration, when aggregating the data at70, the server14may also compute one or more statistical distributions and/or rankings for each of the determined ability metrics. In one embodiment, rather than generating the statistical distributions across golfers of all ability levels, the distributions may, instead, be separately computed for golfers of different average scores or handicaps. Said another way, the distributions may be “binned” based on ability. Once the distributions are generated, a user's individual play data30may be compared with the relevant distributions to determine where each ability metric falls relative to others with the same or similar average or handicap. These comparisons may then be visualized via the display50at72. FIG.9illustrates a manner of visualizing/displaying the relative performance of a user12compared with others of similar ability. In particular,FIG.9illustrates a bubble plot130that has four quadrants, each displaying a different one of four ability metrics: driving accuracy132, greens in regulation134, scrambling percentage136, and putts per round138. In each quadrant, each bubble140represents a completed round of golf, with the bubbles of more recently played rounds being larger and/or closer to a 45 degree diagonal142than more historical rounds. The radial positioning144of the respective bubbles (relative to an origin146) illustrates where the computed metric falls within the network-wide statistical distribution for golfers having similar averages148or handicaps. In general, the dotted circle150represents the mean, and bubbles within the circle150represent user performances for that metric that are better than average. With continued reference toFIG.9, the bubble plot130provides a quick manner of visualizing a user's consistency across different aspects of his/her game by examining the tightness of the bubble groupings along a radial dimension/spread. Likewise, by looking at the bubble plot130as a whole, the user12can quickly identify specific aspects of his/her game that need practice and/or further improvement. Referring again toFIG.4, once a golfer's consistency and relative ability metrics are understood, the system10may be configured to provide the user12with tailored game improvement tips that target deficient areas as may be generally illustrated via the bubble plot130(at74). These game improvement tips may include workouts, drills and/or pointers that may be performed at home, on the driving range, or on the course. In one embodiment, the determination of the one or more game improvement tips may be based on a comparison between the golfer's ability metrics and the statistical distributions of golfers within a desired average or handicap range. More specifically, the system10may provide feedback on which aspects of the user's game may need improvement to achieve a desired score range. This may operate by comparing the user's current metrics against averages from the desired score band. 
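As a hedged sketch of the “binned” comparison just described, the functions below compute where one round's metric falls within an ability bin and map that percentile onto a bubble radius so the mean lands on the dotted circle. The specific mapping and the example numbers are assumptions made for illustration, not the system's actual formula.

```python
def percentile_in_bin(value, bin_values, higher_is_better=True):
    """Fraction of rounds in the same ability bin that this round's metric meets or beats."""
    if not bin_values:
        return 0.5
    if higher_is_better:
        beaten = sum(1 for v in bin_values if value >= v)
    else:                                  # e.g. putts per round, where lower is better
        beaten = sum(1 for v in bin_values if value <= v)
    return beaten / len(bin_values)

def radial_position(value, bin_values, mean_radius=1.0, higher_is_better=True):
    """Map a metric to a bubble radius so the 50th percentile lands on the dotted 'mean' circle
    and better-than-average rounds fall inside it (closer to the origin)."""
    p = percentile_in_bin(value, bin_values, higher_is_better)
    return mean_radius * 2.0 * (1.0 - p)

# Example: a round with 9 fairways hit, compared against a bin of similar-handicap golfers.
bin_fairways = [5, 6, 7, 7, 8, 8, 9, 10, 11, 12]
print(radial_position(9, bin_fairways))    # < 1.0, i.e. inside the mean circle / better than average
```

The same kind of comparison against a chosen score band is what drives the targeted game improvement tips discussed next.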
The system may identify the top one or two lowest or worst performing metrics relative to the new average, and can then provide targeted training tips, exercises, or drills to help improve the golfer's performance. In this manner, the training tips may highlight only those aspects that would best aid in reducing the user's score. In a further extension of the present design, instead of simply relying on raw scores, the radial positioning144of each bubble may be normalized according to the difficulty (i.e., slope and rating) of the course and the tee location that was played to give rise to the bubble. In this manner, comparatively poor performances that are caused by playing a more difficult course may be adjusted to provide a more direct comparison with the user's performance on comparatively less challenging courses. Referring again toFIG.7, in addition to selecting one or more challenges106that consist of previously recorded rounds (i.e. rounds that were actually played by others), the system10may also enable a user12to compete against one or more purely virtual opponents as “challenges.” In one embodiment, a virtual opponent may be a simulation that is derived from the statistical distributions used to construct the bubble plots. In this manner, the user12may initially specify the average or handicap of a virtual golfer that he or she wishes to challenge (alternatively, the system10may pre-select a virtual golfer with a similar or marginally better average/handicap). The system10may then simulate the performance of that virtual golfer on a hole-by-hole basis for the chosen course. This simulation may use probability distributions similar to those used for constructing the bubble plots130to generate hole-by-hole scores according to a probabilistic model. Examples of ability metrics that may be used in the model include metrics such as fairways hit, scrambling percentage, greens in regulation, total putts, and/or scores relative to par for the given hole difficulty, length, and/or par. As with any of the challenges106, the scores of the virtual playing partner would be populated into an electronic scorecard of the user12after the completion of each hole. In addition to maintaining the actual course layout data, the server14may further maintain one or more virtual leaderboards or rankings on a course-by-course basis (e.g., via the golf course data28). These virtual leaderboards may be indexed to the user account data26to provide identities of those who hold top spots, and may be indexed to user play data30to reference the scores and date when the round was played. In this manner, the leaderboards may be capable of being filtered by recency of play (e.g., past week, month, year, all-time), and a user's actual round performance may be ranked either in raw terms or as a percentile against performances within the given date filter. Additionally, following the completion of any round, the server14may examine existing leaderboards and/or recompute all available challenges so that the listing may be quickly accessed at the start of a new round. Referring back to the social media aspect of the present system10, following the completion of a challenge or accomplishment of a particular achievement (e.g., streak of birdies, bogey or better, par or better, hit all fairways, play on a number of different courses or rounds, etc.), the user12may push a notification out to his/her social network58, where the notification may be viewed, for example, on an active news feed. 
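Returning to the virtual opponents described above, the sketch below draws hole-by-hole scores from placeholder probability tables; in practice those tables would be derived from the same ability-binned distributions, and none of the probabilities shown here come from the described system.

```python
import random

def simulate_virtual_round(hole_pars, rel_par_dist, rng=None):
    """Draw hole-by-hole scores for a virtual opponent.

    rel_par_dist maps a hole's par to (score-relative-to-par, probability) pairs; in practice
    these would come from the ability-binned distributions used for the bubble plots.
    """
    rng = rng or random.Random()
    scores = []
    for par in hole_pars:
        outcomes, probs = zip(*rel_par_dist[par])
        scores.append(par + rng.choices(outcomes, weights=probs, k=1)[0])
    return scores

# Placeholder distributions for a mid-handicap virtual opponent (probabilities are illustrative only).
dist = {
    3: [(-1, 0.05), (0, 0.40), (1, 0.40), (2, 0.15)],
    4: [(-1, 0.05), (0, 0.35), (1, 0.40), (2, 0.20)],
    5: [(-1, 0.05), (0, 0.30), (1, 0.45), (2, 0.20)],
}
print(simulate_virtual_round([4, 3, 5, 4, 4, 3, 4, 5, 4], dist))
```

Results of such simulated challenges could feed the same leaderboards and shared notifications described above.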
Additionally, a user12may enter or tag one or more pieces of equipment, which may also be shared with the network58. In a further embodiment, rather than a user12having to manually enter score data, the system10may be configured to automatically track and log a user's performance. To accomplish this, the system10may attempt to record the position and occurrence of each shot using, for example, the GPS receiver46and some means of user input, such as the digitizer52or accelerometer associated with either the client device16or a linked smart-watch. More specifically, when the user12addresses his or her ball to take a stroke, the user12may provide some means of input to indicate to the client device16that a shot is about to commence. This input may include, for example, tapping a screen of the smart phone or smart watch or shaking the smart phone/smart watch in a particular manner or for a particular duration, which may be detected by a motion tracking accelerometer. Once the client device16receives the indication from the user12that a shot is about to commence, it may poll and log the GPS location data at that moment. The client device16may then analyze the GPS data to determine the relative location of the user12, and may use the indication of an imminent stroke to increment the user's score. In one embodiment, the client device16may directly compare the GPS location with one or more geocoded course boundaries (e.g., edge of the fairway or border of the green), which may be downloaded from the server course data28, to better understand the user's lie and/or which hole to assign the incremented stroke to. In an embodiment where the course data28does not contain specific boundary data, the GPS position may be initially located within a downloaded image of the hole, such as shown inFIG.8. The client device16may then use image analysis techniques to determine and extract boundaries between fairway and rough, or between the green, rough, and/or fairway. Once the boundaries are extracted from the image, the user's location may be further analyzed to determine the lie and/or which hole to assign the incremented stroke to. In a further embodiment, the client device16and/or smart watch may provide the user12with an ability to indicate which club is being used prior to marking the shot at address. Upon marking the next shot, the client device16may determine a distance and/or accuracy metric for the club used in the previous shot (i.e., by determining the distance between the two recorded GPS locations and/or by comparing the second GPS location with a line drawn down the center of the fairway). The system10may aggregate the determined shot statistics on a club-by-club basis and compute one or more statistical distributions for distance and/or accuracy (i.e., “club statistics”). These club statistics may then be provided to the user12to illustrate certain hitting tendencies, or may be used to provide customized pro-tips (e.g., drills or techniques to reduce shot distributions/scatter), or to suggest product improvements (i.e., to gap-fit existing clubs, suggest more accommodating products, or custom fit new clubs). In general, the present system10leverages the existence of a broad network of users to provide a given user with an enhanced golf experience and the ability to provide customized game improvement tips based on comparisons between the user's personal ability metrics and statistical distributions that are constructed across all users of a similar average or handicap. 
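As a rough sketch of the shot logging and “club statistics” just described, the code below computes the distance between two marked GPS fixes with the standard haversine formula and aggregates a per-club mean and spread. The coordinates, club labels, and choice of haversine are assumptions made for this example rather than details taken from the described system.

```python
import math
import statistics

def haversine_yards(p1, p2):
    """Great-circle distance between two (lat, lon) GPS fixes, in yards."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    meters = 2 * 6371000.0 * math.asin(math.sqrt(a))
    return meters * 1.0936

def log_shot(shot_log, club, address_fix, next_fix):
    """Record the distance covered by the club used between two marked shot locations."""
    shot_log.setdefault(club, []).append(haversine_yards(address_fix, next_fix))

def club_statistics(shot_log):
    """Per-club mean distance and spread (the 'club statistics' described above)."""
    return {club: (statistics.mean(d), statistics.pstdev(d))
            for club, d in shot_log.items() if d}

# Example: one marked 7-iron shot and the following marked location.
log = {}
log_shot(log, "7i", (42.3601, -71.0589), (42.3614, -71.0589))
print(club_statistics(log))
```

Accuracy relative to a line drawn down the center of the fairway could be accumulated for each club in the same way.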
“A,” “an,” “the,” “at least one,” and “one or more” are used interchangeably to indicate that at least one of the item is present; a plurality of such items may be present unless the context clearly indicates otherwise. The terms “comprises,” “comprising,” “including,” and “having,” are inclusive and therefore specify the presence of stated items, but do not preclude the presence of other items. As used in this specification, the term “or” includes any and all combinations of one or more of the listed items and can properly be read as “and/or” even if not explicitly stated as such. When the terms first, second, third, etc. are used to differentiate various items from each other, these designations are merely for convenience and do not limit the items.
11857861
DETAILED DESCRIPTION The inventors have recognized and appreciated designs for an athletic training system, including apparatus and software, that aids athletes in enhancing their physical performance by incorporating a neuropsychological model for cognitive brain training in conjunction with physical exercise. In some embodiments, the apparatus and/or software may be based on cognitive brain training through tasks that have been shown to activate the area of the brain associated with mental fatigue known as the anterior cingulate cortex (ACC) found within the prefrontal cortex. In some embodiments, the neuropsychological cognitive tasks that are used include the Stroop Task, Psychomotor Vigilance Task (PVT), Go/No Go Task, Continuous Performance Task (CPT), Stop Signal Task (SST) and/or other similar tasks. One or more such tasks, which require a continued level of focus and inhibitory control, creating a mentally fatigued state in the athlete, may be performed in conjunction with physical exercise in order to create adaptation and improve resilience to mental fatigue with continued practice by the athlete. In contrast to known research set-ups, an athletic training enhancement system as described herein may be practical and commercially viable as it does not require an athlete to assume unnatural positions in order to interact with computer input and output devices while performing the physical task. Rather, in accordance with some embodiments, an ergonomic input device may be used for cognitive tasks in conjunction with physical exercise across a plurality of different sports without compromising range of motion, eye-hand coordination or athletic form. Techniques as described herein are amenable to implementation so as to be easily portable or extensible to different sports and physical movement modalities. In some embodiments, the disclosed techniques may be extended to sports that require both a free range of motion and eye-hand coordination, such as cycling, strength training, rowing, swimming, running, rugby and basketball. In some embodiments, a simple and portable user interface device, such as a button or other sensor that detects movement of a portion of a user's body, may interface with a computer executing software that processes inputs and generates outputs to implement an athletic training system. The user interface may be integrated with a support structure so that it may be worn by a user or attached to a piece of athletic equipment. A button, for example, may be attached to a strap that a user may hold or that may be mounted to equipment, such as a bicycle handlebar. Alternatively or additionally, a sensor may be integrated into an item worn by a user, such as a glove or other piece of clothing or a wrist band. In some embodiments, a training enhancement system may alternatively or additionally provide user stimulus based on motivational techniques and cognitive recovery protocols, which may also be used in conjunction with physical training. In some embodiments, a training enhancement system may perform a cognitive fatigue assessment to help the athlete calibrate their level of daily training activity. I. Computing Systems The systems and methods described herein rely on a variety of computer systems, networks and/or digital devices for operation. In order to fully appreciate how the system operates, an understanding of suitable computing devices and systems is useful. 
The computing devices, systems and methods disclosed herein are enabled as a result of application via a suitable computing device (including without limitation mobile devices such as smartphones and tablets). In at least some configurations, a user executes a browser on a computer to view digital content items on a display associated with the computer. Digital content may be stored or generated on the computer or may be accessed from a remote location. For example, a computer can obtain content by connecting to a front end server via a network, which is typically the Internet, but can also be any network, including but not limited to a mobile, wired or wireless network, a private network, or a virtual or ad hoc private network. As will be understood, very large numbers (e.g., millions) of users are supported and can be in communication with the website at any time. The user may utilize a variety of different computing devices. Examples of user devices include, but are not limited to, personal computers, digital assistants, personal digital assistants, cellular phones, mobile phones, smart phones, tablets or laptop computers. The browser can include any application that allows users to access web pages on the World Wide Web. Suitable applications include, but are not limited to, Chrome®, Brave®, Firefox®, Microsoft Edge®, Apple® Safari or any application capable of or adaptable to allowing access to web pages on the World Wide Web. Primarily, a user may download an app, e.g., onto the user's portable computing device, in order to perform brain training and mental recovery tasks on the user's handheld device or other user computing device. A computer may have one or more processors that may execute computer-executable instructions stored in non-transitory computer-readable storage media, such as volatile or non-volatile memory. A computer may have one or more input devices, such as a keypad or touch screen for receiving tactile input. The computer may have a sound input, such as a microphone, for receiving audible input, such as speech that may be recognized as commands. The computer alternatively or additionally may have a camera to receive input in visual form. Further, the computer may have interfaces, such as a wireless interface, USB port or other I/O port, that may be connected to sensors or other input devices. For example, one or more sensors, such as a pulse sensor, sweat sensor or other sensor that provides an output indicative of physical activity or exertion, may be wirelessly coupled to a computer. A computer may have one or more output devices, such as a display screen or speaker. The input and output devices may be integrated into one physical unit or may be coupled to a unit via wires or wireless connections. These components, integrated into or coupled to a computer, may be accessed by programming of the athletic training system to provide output to or collect input from a user of the system as described further herein. II. Cognitive Brain Training Described herein is a training system for athletes and other users with both an apparatus and software-based methodology for cognitive brain training to be done in conjunction with physical exercise. The system may have one or more components that interact with a user to reduce the effects of mental and physical fatigue and improve overall athletic performance. These components may drive interaction with the user both before, during and after a training session. 
During a physical training session, the system may guide the user in performing cognitive tasks that train the user's brain to resist cognitive fatigue. The system may also collect inputs about the user's physical exertion and performance as well as cognitive fatigue, for adapting guidance provided on physical exertion or adapting cognitive training tasks. The system may also render motivational content to the user. Before a physical training session, the system may collect input from the user, including on phrases that the user considers motivational. Inputs may also be collected for calibration of the system. After a training session, the system may collect inputs indicative of user cognitive or physical fatigue, including through self-assessment inputs, and may output cognitive and physical metrics associated with the training session. The following is a detailed description of an exemplary embodiment of such an athletic training system and its use by the athlete inclusive of all of the components described herein. First, an athlete turns on or enables the input device to be used during cognitive training inFIGS.1A-D,2,3. In the case of a tactile-based input apparatus embodiment inFIGS.1A-D, the athlete may attach pressure sensitive buttons102and straps100ato a training machine such as on the handlebars of an indoor bicycle trainer116inFIG.1Cby attaching the tactile apparatus with a clip114and strap100aor by attaching the tactile-based input apparatus to their hands118inFIG.1Dwith pressure sensitive buttons102and straps100b. The tactile-based input apparatus inFIG.1Bis made up of a large button surface area cap104, a waterproof top enclosure106that covers the printed circuit board (PCB)108, a battery110and a waterproof bottom enclosure for the PCB112, and a strap100afor attaching the button to a training machine. InFIG.4the tactile-based button118then sends wireless signals130to a portable computing device such as a smartphone132or desktop computer running a custom application. In additional embodiments, such as the gesture-based input apparatus version120inFIG.2, the athlete attaches the gesture device120to the hands with motion sensors122integrated into a glove, strap or other hand-held device. InFIG.4the gesture-based apparatus120then sends wireless signals130that are interpreted by software running on a portable computing device such as a smartphone132or desktop computer. In the case of the voice-based input version of the embodiment inFIG.3, the athlete uses voice commands124or other spoken inputs that are interpreted by the software126running on a portable computing device such as a desktop computer or smartphone and ensures that the portable computing device's microphone is enabled128. After either the tactile button (FIGS.1A-D), the gesture (FIG.2) or the voice-based (FIG.3) input device is enabled and (where applicable) connected to the portable computing device, the athlete selects a workout inFIG.5from one of the choices available134from within the software application and the workout begins. At the start of the workout, the athlete may be asked to perform a calibration test (FIG.6) that records the athlete's perceived level of effort. Perceived effort may be represented using a point scale system136that measures the athlete's rating of perceived exertion (RPE) at different physical output levels such as “17—Very Hard”138where the athlete exerts physical effort to meet that perceived level of effort indicated on the scale136. 
As the athlete completes the calibration test, standard physiological measures from this test are saved into lookup tables for further analysis (FIG.13) such as power measured in watts for the functional threshold power (FTP) lookup table182and heart rate measured in beats per minute for the lactate threshold heart rate (LTHR) lookup table186. In this way, the user's perceived physical effort may be correlated with measured values before, during and after the workout in order to track cognitive and physiological performance over time. Additionally, these calibrated measures may be used during a training session to adapt the level of difficulty automatically for various cognitive tasks based on the user's perceived level of effort. For example, if the user's rating of perceived exertion becomes reduced, even with the same amount of physical and cognitive stimuli as prior workouts, this may indicate a positive adaptation to the cognitive tasks, and the task difficulty may automatically increase or decrease in length, complexity or other stimuli depending upon the training goal or workout selected by the user. During the workout inFIG.7, the athlete is presented with different cognitive training interfaces based on their training goals, featuring various neuropsychological tasks that target specific areas of the brain and brain pathways helpful for overcoming cognitive fatigue and improving athletic performance. For instance, the neuropsychological task known as the Stroop Task140is an established task for measuring response inhibition and requires the user to have the ability to overcome automatic tendencies in order to respond correctly to each task. For example, in a Stroop task the user will be presented with a color word (e.g., “red”, “green” or other colors) that is presented in one of multiple ink colors (e.g., green, red or other colors). Users are instructed to respond based upon the ink color of the word, not the identity of the word itself. When the color and the word are congruent (e.g., “red” in red ink), the natural tendency to read the word facilitates performance, resulting in fast and accurate responses. When the color and the word are incongruent (e.g., “red” in green ink), the strong, natural tendency to read the word must be overcome to respond to the correct ink color. Similarly, the Stop Signal Task (SST) is also an established task for measuring response inhibition and consists of a “go stimulus” such as a series of left or right arrows that users are instructed to respond quickly to every time they are displayed on the cognitive testing interface. On a subset of the trials, the go stimulus is followed, after a variable delay, by a “stop signal” such as an audible beep or upward pointing arrow, to which users are instructed to inhibit their response. In other neuropsychological tasks such as the Psychomotor Vigilance Task (PVT), Go/No Go Task, Continuous Performance Task (CPT), users must maintain sustained attention to a specific set of stimuli such as identifying certain objects that appear and disappear on the cognitive testing interface as quickly as possible, which measures the user's reaction time, alertness, level of cognitive fatigue and decision-making ability. These different neuropsychological tasks are performed in conjunction with physical exercise in order to improve cognitive and physical performance over time. In some embodiments, the difficulty of the cognitive tasks may be adapted during a training session. 
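To make the Stroop description above concrete, here is a minimal, hypothetical trial generator and scorer; the color list, the p_incongruent parameter, and the use of a monotonic clock are assumptions made for illustration rather than details of the described system.

```python
import random
import time

COLORS = ["RED", "GREEN", "BLUE", "PURPLE"]

def make_stroop_trial(p_incongruent=0.5, rng=random):
    """Return (word, ink_color); the correct response is always the ink color."""
    word = rng.choice(COLORS)
    if rng.random() < p_incongruent:
        ink = rng.choice([c for c in COLORS if c != word])   # incongruent: word and ink differ
    else:
        ink = word                                           # congruent: word and ink match
    return word, ink

def score_response(ink, response, started_at):
    """Correctness and reaction time (seconds) for one trial."""
    return response == ink, time.monotonic() - started_at

# Example trial loop for a fixed number of prompts.
for _ in range(3):
    word, ink = make_stroop_trial(p_incongruent=0.6)
    t0 = time.monotonic()
    print(word, ink, score_response(ink, "RED", t0))   # "RED" stands in for the athlete's answer
```

The proportion of incongruent trials is one simple knob that the difficulty adaptation discussed next could turn.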
For example, the level of difficulty of the cognitive task may be increased by increasing the level of complexity of the task questions, reducing the amount of time allowed for each question and/or increasing a target score needed to successfully complete a given cognitive task. In some embodiments, cognitive difficulty may be adapted based on a user's perceived level of effort, which may be determined from the calibrated measures of physical exertion. For example, as a user increases their physical exertion such that their perceived level of exertion increases, the cognitive difficulty of the tasks may be increased. In some embodiments, a control function relating perceived level of effort to cognitive difficulty may be linear. In some embodiments, the level of cognitive difficulty may increase stepwise as various levels of perceived effort are reached, but there may nonetheless be a general trend that the level of cognitive difficulty increases in relation to perceived exertion. In other embodiments, the control function may be non-linear or may be linear over a range of perceived exertion. Moreover, the control function may be based on parameters in addition to perceived level of effort. Training goals input by a user may be used in the function. For a user that has specified a higher goal, for example, the increase in cognitive difficulty may be greater for each unit of increase in perceived exertion. Alternatively or additionally, time may be a parameter. For example, the duration of the planned workout may impact the amount of increase in cognitive difficulty, with more increase for shorter workouts or where there is a shorter time remaining in the planned workout. As an example of another parameter that may impact the control function, the user's sensed cognitive fatigue may be used in setting the level of cognitive difficulty. As the user's cognitive fatigue increases, the level of cognitive difficulty may be increased at a slower rate or may be decreased in some scenarios. Further, in some embodiments, the level of cognitive difficulty may also be calibrated based on measurements taken before, during or after an exercise session. As described herein, the system may prompt a user to provide inputs serving as an assessment. That assessment may include a perceived level of cognitive difficulty. During or after presenting one or more cognitive tasks to the user, the system may prompt the user to provide an assessment of perceived difficulty of the task. This assessment may be performed under different conditions to provide different levels of mental challenge, such that the variations in the task may be equated to a perceived level of difficulty for the user. Upon determining, during a training session, a desired level of cognitive difficulty, the appropriate task and conditions of that task corresponding to that level of perceived cognitive difficulty may be selected. These tasks may be configured to be performed by a user with a simple input device. For example, the athlete may tap the tactile buttons when using an input device as pictured inFIGS.1A-D, make gestures when using an input device as pictured inFIG.2, or speak voice commands when using an input device as pictured inFIG.3to input answers to cognitive task prompts. The prompts may be questions as indicated inFIG.7. 
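The following is a minimal sketch of one possible control function of the kind described above, assuming the Borg 6-20 RPE scale consistent with the "17—Very Hard" level mentioned earlier; the linear gain, time-remaining boost, fatigue back-off, and every coefficient are placeholders rather than values from the described system.

```python
def cognitive_difficulty(rpe, baseline=1.0, gain=0.15, goal_factor=1.0,
                         minutes_remaining=30.0, fatigue=0.0,
                         min_level=1.0, max_level=10.0):
    """Map perceived exertion (Borg 6-20 RPE assumed) and context to a difficulty level.

    Mirrors the behaviors described above: difficulty generally rises with RPE, rises faster
    for more ambitious goals, pushes harder when little workout time remains, and backs off
    as reported cognitive fatigue grows. All numbers here are illustrative placeholders.
    """
    level = baseline + gain * goal_factor * max(rpe - 6, 0)   # roughly linear in RPE above rest
    level += 0.5 / max(minutes_remaining / 30.0, 0.1)         # push harder when time is short
    level -= 0.5 * fatigue                                    # ease off as fatigue grows
    return max(min_level, min(max_level, level))

# Example: RPE 17 ("Very Hard"), ambitious goal, 10 minutes left, moderate fatigue.
print(cognitive_difficulty(17, goal_factor=1.5, minutes_remaining=10.0, fatigue=1.0))
```

The tap, gesture, or voice answers described above are what would feed both this difficulty logic and the answer scoring.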
These responses may be received and processed by a custom application through a series of coded messages transmitted wirelessly130(FIG.4) or by voice inputs124(FIG.3) which are then interpreted by the software150(FIG.8),126(FIG.3) in order to be translated into correct and incorrect answers for the cognitive tasks. The coded messages may take the form of alphanumeric values or phrases that correspond to answers to cognitive questions such as “R1” and “L1”150(FIG.8) or “right” and “left”126(FIG.3) that can be interpreted by the software on the portable computing device to mean “Go Right” input for “R1” or “right” and the “Go Left” input for “L1” or “left” which also correspond to answer buttons on the left142and the right146side of the cognitive testing interface inFIG.7. While the athlete is performing cognitive tasks they are also given prompts by the software, which may be provided through a system output device such as a display144(FIG.7) and audio and visual prompts that appear in order to notify the athlete when their thresholds are above or below target physiological output goals such as maintaining a specific heart rate or maintaining a specific power output measured in watts. The prompts may be presented in a format that a user may observe while performing a physical task. The notifications may, for example, be large colored areas or simple graphical symbols, such as progress bars or dials. The notifications may be presented through a display on a portable device that is mounted in a location that the user can observe while performing physical tasks. In the example ofFIG.8, a portable computing device, such as a smartphone is mounted on the handlebars of a bicycle used for training. The smartphone may execute the software that generates notifications and processes responses to them. In some embodiments, the portable electronic device may also serve as an input device, as a user may provide input through a touch interface of the display. However, it is not a requirement that the portable computer device be in the user's field of view as in some embodiments, notifications may be provided in other ways, such as audibly, through vibration of the portable computing device, or wirelessly to a speaker or other output device. In addition to the display and audio and visual alerts, the physiological target goals may also be represented visually in the form of a real-time progress bar152(FIG.9) that is integrated into the cognitive task questions140(FIG.7) so that the athlete can maintain focus on both their physiological target goals as well as the cognitive tasks at the same time. For example, in the case of the Stroop cognitive task the progress bar will be attached to the bottom of the primary color word that appears on the screen e.g. “PURPLE”152. In other cognitive tasks the progress bar may be adapted to be attached to various shapes or symbols appearing at different locations of the cognitive testing screen so that the athlete can easily keep track of their physiological target goals while still focusing on the cognitive task questions. The progress bar152(FIG.9) visually represents the user's current physiological output percentage compared against their target goals. For example, at rest the progress bar is “empty” with no highlight color on any portion of the bar154showing only a gray background on the bar which indicates that there is no current physiological output being generated by the athlete. 
When the progress bar is extended to 50% of the allowable space by a highlighted color on the bar156, this indicates to the athlete that their current output is only 50% of their target physiological goal. As the athlete continues to increase their physiological output in order to match the target goal, the highlighted color portion of the progress bar will continue to extend in length until it reaches 100% of the allowable space158, indicating that the athlete has met the target goal and should maintain their current physiological output level in order to ensure that the progress bar remains fully extended (FIG.9). If the athlete exceeds the target goal (over 100%), the progress bar will highlight in a different color on the far right edge of the bar160, indicating that the athlete should reduce their physiological output in order to achieve the target goal of 100% (FIG.9). Upon the completion of the workout (FIG.10), the athlete is asked to answer a series of quantitative and qualitative questions to self-rate their overall performance, including their rating of perceived exertion (RPE) for the workout162and several psychological questions164related to how mentally and physically demanding the workout was for them. The cognitive training software then uses a series of metrics, formulas and algorithms to combine the athlete's self-rated metrics162,164(FIG.10) with the real-time cognitive and physiological output metrics148(FIG.7) to provide reports that summarize the athlete's performance for each workout (FIG.11). FIG.11shows the end-of-workout report that includes the athlete's workout assessment166, cognitive metrics168, physical metrics170and workout intervals172. The sections inFIG.11display critical cognitive and physical metrics from the workout. Cognitive metrics may be computed based on user responses received during cognitive tasks, such as total score168measured by the total number of correct answers during all cognitive tasks, reaction time168measured by the average length of time to respond to each of the cognitive questions correctly, accuracy168measured by the percentage of correct answers per interval and overall, answer rate (RCS)168measured by the athlete's total correct answers (per workout) divided by the sum of their reaction times, and lapses168measured by counting the total number of slower than average responses to the brain training tasks. These cognitive metrics may be used to adapt the level of difficulty of cognitive tasks during subsequent workouts by automatically increasing or decreasing the level of complexity of the task questions, increasing or reducing the amount of time allowed for each question and increasing or decreasing the target score needed to successfully complete a given cognitive task. For example, if the athlete's answer rate (RCS) is consistently better than their baseline percentage for more than a predefined number of prior workouts, then the difficulty level of the athlete's cognitive tasks in their next workout will be increased in order to ensure that they are receiving the right amount of cognitive stimuli to continually improve. Additionally, within the workout assessment166, the cognitive metrics may be used to recommend additional training or recovery sessions based on the athlete's performance. 
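A minimal sketch of how the cognitive metrics defined above might be computed from logged responses follows; the (correct, reaction-time) tuple format and the "slower than average" lapse threshold are assumptions made for the example.

```python
def cognitive_metrics(responses):
    """Summarize one workout from a list of (correct: bool, reaction_time_s: float) responses,
    following the metric definitions above; the lapse threshold is simply 'slower than average'."""
    if not responses:
        return {}
    times = [rt for _, rt in responses]
    correct_times = [rt for ok, rt in responses if ok]
    total_score = len(correct_times)                              # total correct answers
    accuracy_pct = 100.0 * total_score / len(responses)           # accuracy (AC)
    mean_rt = sum(times) / len(times)
    reaction_time = (sum(correct_times) / total_score) if total_score else None  # avg RT on correct answers
    rcs = total_score / sum(times)                                # answer rate: correct / sum of RTs
    lapses = sum(1 for rt in times if rt > mean_rt)               # slower-than-average responses
    return {"total_score": total_score, "accuracy_pct": accuracy_pct,
            "reaction_time_s": reaction_time, "rcs": rcs, "lapses": lapses}

# Example: five prompts, four answered correctly.
print(cognitive_metrics([(True, 0.8), (True, 0.7), (False, 1.4), (True, 0.9), (True, 0.6)]))
```

Metrics like these are what drive the workout assessment and the training or recovery recommendations described next.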
For example, if the athlete's perception gap score is significantly lower in terms of performance from their baseline percentage in a given workout then the workout assessment may include a recommendation to temporarily discontinue cognitive training and instead increase the number of cognitive recovery sessions in order to rest and recover before resuming cognitive training. Physical metrics may be computed based on sensor inputs received during a training session, such as heart rate (average)170measured by average beats per minute, heart rate variability (HRV)170measured by the time variance in between each heartbeat, power (average)170measured by the average watts per workout. Combination cognitive and physical metrics may be provided, such as rate of perceived exertion (RPE)170as computed from inputs provided during a self-assessment at the end of the workout and Perception Gap (P-GAP)168computed by comparing the athlete's self-assessment inputs from the end of the workout162,164(FIG.10) with their cognitive168and physical metrics170(FIG.11). FIG.12shows the athlete's cumulative report for all workouts over time that includes a chart of their self-rated vs. physical performance over time174, a summary of their cognitive metrics for all workouts176, a list of their top 3 mantras178and their top 5 best workouts180of all time. Both the end of workout report (FIG.11) and the summary of all workouts over time (FIG.12) utilize metrics, formulas and algorithms based on a series of lookup tables (FIG.13). For example, The Perception Gap (P-Gap) metric which is used to chart the athlete's mental endurance in174(FIG.12) and their workout assessment166(FIG.11) uses lookup tables inFIG.13to compare their subjective rate of perceived exertion (RPE) that they record at the end of their workout162(FIG.10) with their expected RPE based on physiological output metrics recorded during the workout such as their average power recorded in watts182(FIG.13) or average heart rate recorded in beats per minute186. For example, if the athlete's subjective RPE is 12 and their average power for the workout is 151 watts then the perception gap algorithm first determines the athlete's expected RPE, by matching their average power from the workout with the closest matching value in the lookup table184(FIG.13). In this case, the athlete's average power most closely matched an average FTP % of 55%, equivalent to an average power of 150 watts which corresponds to an expected RPE value of 9. Lastly, to determine the perception gap value the athlete's self-rated RPE of 12 is subtracted from their expected RPE of 9 generating a perception gap score of −3. In other words, the athlete's subjective rate of perceived exertion (RPE) was inflated by 3 points above what should be expected based on their physiological training output measured in average power indicating that the athlete had a low level of resistance to cognitive fatigue during training. Another metric used to measure cognitive performance is called Reaction Time (RT) which is the time measured in seconds that it takes the athlete to respond correctly to a given cognitive task question. When a cognitive task question is generated, a date object is created. Every time an athlete answers a question, a time interval measuring the difference between the date/time of when the question was asked and when it was answered is saved in an array. At the end of the interval, the average values from this array are calculated and saved. 
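One way the reaction-time bookkeeping just described could be organized is sketched below, assuming a monotonic clock stands in for the date objects; the class and method names are illustrative only.

```python
import time

class IntervalTimer:
    """Bookkeeping for reaction times, mirroring the date-object/array approach described here."""
    def __init__(self):
        self.current = []                 # reaction times for the interval in progress
        self.interval_averages = []

    def question_shown(self):
        self._asked_at = time.monotonic() # the 'date object' created with each question

    def question_answered(self):
        self.current.append(time.monotonic() - self._asked_at)

    def end_interval(self):
        avg = sum(self.current) / len(self.current) if self.current else 0.0
        self.interval_averages.append(avg)   # saved at the end of each interval
        self.current = []

    def workout_average(self):
        """Average of the per-interval averages, skipping intervals with no recorded answers."""
        nonzero = [a for a in self.interval_averages if a > 0]
        return sum(nonzero) / len(nonzero) if nonzero else 0.0
```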
Another metric used to measure cognitive performance is called Reaction Time (RT), which is the time measured in seconds that it takes the athlete to respond correctly to a given cognitive task question. When a cognitive task question is generated, a date object is created. Every time an athlete answers a question, a time interval measuring the difference between the date/time of when the question was asked and when it was answered is saved in an array. At the end of the interval, the average values from this array are calculated and saved. At the end of the workout, the average response time is calculated for all of the intervals by iterating through intervals, adding the sum of the response times (only if the interval average is greater than 0), and dividing by the total number of these intervals. Yet another metric used to measure cognitive performance is Accuracy (AC), which is the percentage of correct answers to cognitive questions compared to the total number of questions for a given interval or workout. Every time an athlete answers a question, the software determines if the answer was correct or incorrect and saves the total correct and total incorrect for the current interval. At the end of the interval, the correct answers are added together and divided by the total number of answers, then multiplied by 100 to create the accuracy percentage score (AC). At the end of the workout, the average accuracy is calculated for all of the intervals by iterating through intervals, adding the sum of the accuracy scores (only if the interval average is greater than 0), and dividing by the total number of these intervals.

III. Motivational Self-Talk

Another feature supported within the custom software application is the integration of self-talk mantras188(FIG.14) that are designed to provide psychological-based encouragement at specific intervals during the workout. InFIG.15the self-talk mantra feature can be configured and personalized by the athlete with specific mantras190that are created by the athlete by pressing on the "+" symbol194in the top right corner of the screen, entering the mantra with the keyboard of a smartphone or a computer, and then selecting the mantra with the checkbox192that is on the same line directly to the left of the mantra in order to enable it within the feature. The self-talk mantras are also captured and correlated with real-time metrics and cognitive and physiological performance metrics, formulas and algorithms in order to identify the efficacy of each mantra in terms of helping to improve the motivation and performance of the athlete. The top three mantras are then displayed on the cumulative report for all workouts over time178(FIG.12). Additionally, the top performing mantras are adapted within the software to display at a higher frequency during the most difficult stages of the workout to help improve the athlete's cognitive and physical performance. For example, if the athlete is underperforming within a complex cognitive task or physically demanding target goal, the software will briefly interrupt the workout in order to display a specific mantra that in prior workouts has been correlated with better performance. After the mantra is displayed, the software will further score the mantra's efficacy in terms of its impact in improving performance within a short time period after it is displayed.

IV. Cognitive Recovery

At various times during or after brain training the athlete may engage with different combinations of cognitive recovery and motivation protocols (FIGS.16A and16B). In the case of using cognitive recovery during training, an athlete may use one or more of the recovery protocols during the rest period between training intervals or as preparation for competition as part of the warm up or warm down process during training. In the case of using cognitive recovery after a brain training session, an athlete may use one or more of the recovery protocols as a form of recuperation after a difficult brain training workout.
The recovery protocols can be selected from the recovery category screen206(FIG.16A) and include recovery protocol options such as guided breathing196, visualization198, binaural beats200, subliminal priming202and self-talk mantras204. The recovery and motivation software combines these different recovery protocols into a single interface208(FIG.16B) which is capable of playing each protocol in a sequence one after another based on a predetermined pattern for each recovery session. The recovery and motivation software then uses a series of metrics, formulas and algorithms to combine the athlete's self-rated metrics with the real-time cognitive and physiological output metrics to provide reports (FIG.17) that summarize the athlete's level of recovery during each session and over time, as well as a chart showing the proportions of each recovery protocol featured in the completed recovery session210. For example, in order to calculate the athlete's subjective self-rated level of relaxation found within the "Recovery Assessment" portion of the report212, the athlete is asked at both the start and end of each cognitive recovery session to rate their current level of relaxation on a scale of 1-10, where 1 equals "not relaxed" and 10 equals "extremely relaxed". The percent change is then calculated between the athlete's self-rating at the start and end of the session by dividing the absolute value of the difference between the two numbers by the average of those two numbers, then multiplying the result by 100 to yield the percent difference, e.g. "Your self-rated level of relaxation improved by 85%". For the section of the report labeled "Recovery Summary"214, various physiological metrics are provided to show the level of physical recovery, including heart rate, heart rate range and heart rate variability (HRV). Heart rate is calculated by recording the athlete's heart rate in beats per minute (BPM) using an external heart rate monitor or strap that is paired with the recovery software, and then by calculating the average heart rate as the sum of all heart rate values divided by the total number of values. Heart rate range is calculated by an algorithm that scans all of the individual heart rate values and sorts them from the lowest to the highest, then takes the first and last values to represent the heart rate range, e.g. the lowest heart rate value compared to the highest heart rate value. Heart rate variability (HRV) is calculated by an algorithm which first measures the time interval between heart beats in milliseconds, then calculates each successive time difference between heartbeats in milliseconds, then squares each of the values, then averages the result, then calculates the square root of the total result, then applies a natural logarithm and lastly applies a scale factor to the logarithm in order to create a 0-150 point scale to be displayed in the recovery summary report. An interval summary is also provided within the "Recovery Intervals" section of the report216, which lists metrics for each interval such as the total number of seconds, total number of sets or cycles of the given protocol, the heart rate (average) for each interval, and the HRV for each interval. All of the metrics provided in the recovery report (FIG.17) are compared against a baseline average for each individual metric, and the positive or negative percent change of each measure is factored into the software's evaluation of the effectiveness of the recovery session for the athlete.
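The relaxation percent change and the HRV calculation described above can be sketched as follows. The scale factor used to map the log-transformed value onto the 0-150 display range is an assumed placeholder, since the disclosure does not specify its exact value.

```python
# Sketch of the recovery-report calculations described above.
# The HRV scale factor is an assumed placeholder value.
import math

def relaxation_percent_change(start_rating, end_rating):
    """Percent difference between start and end self-rated relaxation (1-10 scale)."""
    return abs(end_rating - start_rating) / ((start_rating + end_rating) / 2) * 100

def hrv_score(beat_intervals_ms, scale_factor=20.0):
    """RMSSD-style HRV: successive differences squared, averaged, square-rooted,
    natural log applied, then scaled to an approximate 0-150 display range."""
    diffs = [b - a for a, b in zip(beat_intervals_ms, beat_intervals_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return min(150.0, max(0.0, math.log(rmssd) * scale_factor))

print(round(relaxation_percent_change(4, 10)))        # e.g. 86 ("improved by ~85%")
print(round(hrv_score([820, 810, 845, 830, 815]), 1)) # scaled HRV value
```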
V. Cognitive Fatigue Assessment

At various times during or after brain training the athlete may complete a cognitive fatigue self-assessment test (FIGS.18A and18B) in order to understand their current level of mental fatigue when compared to their baseline. The software will continuously adapt to the results of the cognitive assessments completed by the athlete; for example, if the athlete completes a cognitive fatigue assessment with a result indicating that there has been a decline in their cognitive performance, then the software will adapt to recommend an increase in the frequency of cognitive recovery sessions and a decrease in the number of cognitive brain training workouts. As the athlete's cognitive assessment scores improve, the software will increase the recommendation to add more cognitive brain training workouts in order to optimize the volume of cognitive stimuli for athletic performance. The cognitive fatigue self-assessment test works by providing output that guides the user through a cognitive testing protocol, such as is illustrated on user interface218(FIG.18A), for example a short reaction time or Go/No Go cognitive task, combined with psychological-based questions220(FIG.18B) and physiological measures such as average heart rate and heart rate variability (HRV). For example, as part of a cognitive fatigue assessment, an athlete may complete a short cognitive test such as a simple reaction time test, as illustrated on user interface218, where the athlete presses one of the tactile-based input apparatus buttons every time they see any stimulus such as a predetermined shape or set of alphanumeric characters. After completing the short cognitive test, they will also be asked several psychological questions220such as how rested they feel220, their level of readiness to perform athletic training220, their current level of stress220and current level of frustration220. In this example, user inputs representing answers to psychological questions will be acquired with software rendering a sliding input scale such as a visual analog scale220with simple tick marks indicating levels of gradation from very low to very high. Alternatively or additionally, the fatigue assessment system may measure the athlete's physiological metrics such as their average heart rate, heart rate variability (HRV) and other related physiological measures for the duration of the test. All of these data points may then be used to compare against the athlete's baseline average from previous tests to provide an overall cognitive fatigue score along with a cognitive training and recovery recommendation, so that the athlete can assess their current state of readiness to perform a training workout or compete in a competitive event. The recommendations provided as part of the cognitive assessment based on the overall cognitive fatigue score may also be used by the software to adjust the level of difficulty of the cognitive tasks by increasing or decreasing the level of complexity of the task questions, increasing or reducing the amount of time allowed for each question and increasing or decreasing the target score needed to successfully complete a given cognitive task. The software may also adjust the default recommendations for cognitive recovery protocols based on the cognitive fatigue assessment score by increasing or decreasing the default recovery session length and automatically prioritizing certain recovery protocols based on the athlete's needs.
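The disclosure does not give an explicit formula for combining these data points into an overall cognitive fatigue score, so the sketch below simply compares each measure against its baseline average and aggregates the normalized changes; the weighting, sign conventions and example values are assumptions for illustration only.

```python
# Illustrative sketch only: the aggregation weights and sign conventions
# are assumptions, not the patented scoring method.
def fatigue_score(current, baseline, weights=None):
    """Aggregate percent deviations from baseline into a single fatigue score.
    Increases in reaction time, stress and frustration, and decreases in
    accuracy, readiness and HRV, all push the score upward (more fatigued)."""
    # +1 means "higher than baseline indicates more fatigue"
    direction = {"reaction_time": +1, "stress": +1, "frustration": +1,
                 "accuracy": -1, "readiness": -1, "hrv": -1}
    weights = weights or {k: 1.0 for k in direction}
    score = 0.0
    for key, sign in direction.items():
        if key in current and baseline.get(key):
            pct_change = (current[key] - baseline[key]) / baseline[key] * 100
            score += weights[key] * sign * pct_change
    return score / len(direction)

baseline = {"reaction_time": 0.42, "accuracy": 92, "hrv": 65, "readiness": 7, "stress": 3, "frustration": 2}
current  = {"reaction_time": 0.51, "accuracy": 85, "hrv": 54, "readiness": 5, "stress": 6, "frustration": 4}
print(round(fatigue_score(current, baseline), 1))  # higher value -> recommend more recovery
```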
VI. Flowchart of Software Operations

FIG.19shows a select sequence of operational steps describing how the software of an athletic training system works. First, the system/application is turned on222. Next, the system checks for data updates from the cloud service224, and any cloud data updates are synchronized with the local database226. The system scans for compatible wireless brain training and biometric devices such as a power meter or heart rate monitor228, and the system pairs with compatible wireless devices230. The system processes coded messages sent from wireless brain training and biometric devices in real time232. The system logic determines if coded messages sent from the wireless brain training device(s) represent correct or incorrect answers to the cognitive task questions for the duration of the workout or recovery session234. The system then processes and stores all results in the local and cloud database236, performs final calculations at the end of the workout or recovery session238, and lastly generates final reports that are saved to the local and cloud databases240. Additional alternative embodiments of an athletic training system could be created by eliminating all external input devices and relying solely on the built-in sensors and input systems found on a portable computing device such as a smartphone. Such a solution would rely on sensors built into the computing device such as accelerometers, gyroscopes and/or capacitive touch screens to provide manual and automated input methods for answering cognitive test questions. For example, an athlete may tap on or tilt the screen of a remote computing device in a specific way in order to respond to cognitive test questions during training. In this example, the movement or taps on the screen could be interpreted by the software running on the remote computing device by accessing its sensor data and translating it to the corresponding correct or incorrect answers during cognitive testing. The built-in sensors on the remote computing device may also be used to receive and interpret actions made external to the computing device itself as a method for answering cognitive test questions. For example, the athlete may double tap on the handlebars of their bicycle trainer with their fingers while the portable computing device is mounted to the handlebars. In this example, a double tap on the handlebars by the athlete could be sensed by the accelerometer and gyroscope on the portable computing device and interpreted by the custom software that is part of the athletic training system as representing correct or incorrect answers to cognitive test questions during training. An athletic training system may also be integrated into other training or psychological-based software and hardware to further extend its capabilities or accessibility to athletes for specific sports. For instances where software for guiding a user through cognitive tasks, physical training and/or other actions as described above is integrated into other software or hardware systems, the input methods for answering cognitive test questions during training may change in order to adapt to the parent software and/or hardware being used by the athlete. The athletic training system described herein could also be adapted as a tool for cognitive therapy for patients suffering from cognitive deficits and disorders such as Parkinson's, ADHD, PTSD, OCD and Autism Spectrum Disorder, where inhibitory control and cognitive function have been compromised.
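As a rough illustration of the built-in sensor input method described above, the sketch below flags a double tap from a stream of accelerometer magnitude samples; the sampling rate, threshold and timing window are assumptions rather than values taken from the disclosure.

```python
# Sketch of double-tap detection from accelerometer magnitudes.
# Sampling rate, threshold and timing window are assumed for illustration.
def detect_double_tap(magnitudes, sample_rate_hz=100, threshold=2.5,
                      min_gap_s=0.08, max_gap_s=0.40):
    """Return True if two acceleration spikes above `threshold` (in g) occur
    between `min_gap_s` and `max_gap_s` seconds apart."""
    spike_times = []
    for i, g in enumerate(magnitudes):
        if g > threshold:
            t = i / sample_rate_hz
            # ignore samples belonging to the same spike
            if not spike_times or t - spike_times[-1] > min_gap_s:
                spike_times.append(t)
    return any(min_gap_s <= b - a <= max_gap_s
               for a, b in zip(spike_times, spike_times[1:]))

# A double tap on the handlebars might register as two short spikes ~0.2 s apart,
# which the training software could interpret as the athlete's answer to a question.
samples = [1.0] * 20 + [3.1] + [1.0] * 19 + [3.4] + [1.0] * 20
print(detect_double_tap(samples))  # True
```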
The embodiments above are intended to be illustrative and not limiting. Additional embodiments are within the claims. In addition, although an athletic training system has been described with reference to particular embodiments, those skilled in the art will recognize that changes can be made in form and detail without departing from the spirit and scope of the invention. Thus, the scope of the embodiments should be determined by the appended claims and their legal equivalents, rather than by the examples given.

Example Embodiments

Techniques as described herein may be applied in a method for assessing an athlete's level of cognitive fatigue. The method may comprise: receiving through an interface user responses as a user is guided to perform cognitive tasks; assessing level of cognitive and physical stress based on one or more user inputs in response to prompts presented to the user, the user responses and physiological measurements; assessing the user's cognitive fatigue and outputting a summary of the athlete's cognitive fatigue.

DRAWINGS—REFERENCE NUMERALS

100a bicycle strap for tactile button
100b hand strap for tactile button
102 tactile button
104 button cap
106 waterproof PCB enclosure (top)
108 printed circuit board (PCB)
110 battery
112 waterproof PCB enclosure (bottom)
114 clip for bicycle strap for tactile button
116 bicycle tactile button and strap
118 hand tactile button and strap
120 gesture-based input apparatus
122 motion sensors for gesture-based glove
124 voice commands from user sent to software
126 software interpreting voice commands
128 smartphone computer microphone
130 software interpreting wireless signals
132 smartphone computer receiving Bluetooth wireless signals
134 selection of brain training workouts
136 15 point scale for rating of perceived exertion (RPE)
138 example of level of effort value that the athlete is challenged to produce
140 example of cognitive task called a Stroop Task
142 left answer button
144 heads up display of target physiological output goals
146 right answer button
148 real-time physiological output metrics
150 software interpreting wireless commands
152 progress bar with target physiological output goals
154 progress bar at rest with 0% value
156 progress bar at 50%
158 progress bar at 100%
160 progress bar at above 100%
162 quantitative rating of perceived exertion (RPE) question
164 qualitative psychological questions
166 workout assessment
168 cognitive metrics
170 physical metrics
172 workout intervals
174 chart of self-rated vs. physical performance
176 summary of cognitive metrics for all workouts
178 top mantras
180 top 5 best workouts
182 functional threshold power (FTP) lookup table
184 example of average athlete power and RPE values
186 lactate threshold heart rate (LTHR) lookup table
188 self-talk mantras interface displayed during workout
190 example of self-talk mantra
192 check box enabling specific self-talk mantra
194 plus symbol for adding new self-talk mantras
196 guided breathing recovery example
198 visualization recovery example
200 binaural beats recovery example
202 subliminal priming recovery example
204 self-talk mantras recovery example
206 recovery category selection screen
208 recovery interface showing how self-talk mantras and subliminal priming protocols
210 recovery chart showing the proportions of each recovery protocol
212 recovery assessment
214 recovery summary
216 recovery intervals
218 cognitive testing protocol for fatigue assessment
220 psychological questions for fatigue assessment
222 system/application is turned on
224 system checks for data updates from cloud service
226 cloud data updates are synchronized with local database
228 system scans for compatible wireless devices
230 system pairs with compatible wireless devices
232 system processes coded messages sent from wireless devices
234 system logic determines correct and incorrect answers
236 system processes and stores all results in database
238 system performs final calculations at the end of the workout or recovery session
240 system generates final reports that are saved to the local and cloud databases.
45,618
11857862
DETAILED DESCRIPTION As used in this document, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. Unless defined otherwise, all technical and scientific terms used herein have the same meanings as commonly understood by one of ordinary skill in the art. As used in this document, the term “comprising” (or “comprises”) means “including (or includes), but not limited to.” Additional terms that are relevant to this disclosure will be defined at the end of this Detailed Description section. The game of tennis has evolved from a country estate pastime to one of the most watched and competitive sports of our time (Wilson, E. Love Game. Chicago. The University of Chicago Press. 2016). On the surface, tennis is a statistics heavy sport. However, one of the most influential aspects of the game, a player's ground stroke “heaviness” remains nothing but an observation, rudimentarily and unscientifically assessed. The world's current top men's player, arguably Rafael Nadal, is universally praised for the “heaviness” of his forehand; how his opponents feel the “weight” and “heft” of that shot as it pushes the opponent back and pressures him to hit off balance. It has been reported that his groundstroke speed is approximately average for the men's tour, but the combination of his speed and movement on the ball, known as top spin, produces an impact greater than the sum of its parts. (See Nadal v Medvedev: a tale of two groundstrokes|AO (ausopen.com), published Jan. 30, 2022.) There are other examples in the opposite direction. For example, it has been reported that Madison Keys strikes the ball hard and flat (her average speed is higher than many on the men's tour) but with insignificant amounts of top spin relative to other top players. (See https://womenstennisblog.com/2022/02/07/most-powerful-womens-forehands/, published Feb. 7, 2022.) The aforementioned players are extreme examples of the concept that a ground stroke is neither considered to be “heavy” if it is hit very slowly with a significant amount of top spin nor is it considered to be “heavy” if it is hit flat (i.e., with minimal rotation) but at a high speed. The world is increasingly becoming real-time data dependent, including in sports. Thus, this document describes a method and system for quantifying “heaviness” of a ground stroke. Quantification of “heaviness” in tennis has significant utility, more so than the traditionally utilized markers of miles per hour (MPH) and rotations per minute (RPM) of a hit tennis ball. The quantification is a novel, cohesive real-time assessment of “heaviness” of a ground stroke. The methods and systems described in this document may be used to assess the absolute heaviness of a given shot, and/or relative heaviness values when inter- or intra-player comparisons are the goal. As illustrated inFIG.1, a heaviness assessment system for a tennis groundstroke includes a court side sensor machine101positioned near the sideline, and outside the boundary, of a tennis court102, in a location such as that where one may locate a courtside radar gun. The size of the device is such that an adult can move it easily between courts. The court side sensor machine101includes various sensors that are aimed at the court102so that the sensors can detect and measure various parameters of a ball104when a player103hits the ball on the court. 
As will be described below, the one or more sensors may be positioned to detect an area of a tennis court that begins on or before the baseline111of the court and extends through and beyond the service line112of the court. The court side sensor machine includes or is communicatively connected to a computer105that processes data captured by the sensors and can output measured heaviness values on a display, via an audio output, or via transmission to a player's electronic device. An example of a player's electronic device109is a portable or wearable device such as a smartphone, smart watch, or augmented reality headset that is carried or worn by the player103. FIG.2is a block diagram that illustrates example features of a heaviness assessment system250. As noted above, the system includes a court side sensor machine101with various sensors positioned to capture data that measures motion of a tennis ball. The sensors may include, for example, a camera201, a radar system203, a lidar system203, a microphone or other audio sensor207and/or other sensors that capture data about an environment toward which the sensors are positioned. In some embodiments, if the tennis ball is equipped with various sensors and a signal transmitter, the court side system also may include a receiver that can receive signals from the ball's transmitter. For example, the ball may include a gyroscope, accelerometer, and/or inertial measurement unit (IMU) that is capable of sensing movement and/or rotation of the ball. Examples of such sensors are commercially available at the time of this filing from vendors such as WitMotion Shenzhen Co. Ltd. The ball also may include a radio-frequency (RF), near-field communication (such as Bluetooth), short-range communication, or other transmitter that transmits signals captured by the sensors to the court side sensor machine101or other device that is external to the ball. Optionally, the ball may include a processor that processes the data captured by the sensors so that the transmitter sends the processor's output of such processing to the court side sensor machine101. In addition, in some embodiments the court side sensor machine101may be in communication with an external system that captures data about movement and/or rotation of the ball. If so, the court side sensor machine101may receive the data in the form of transmitted messages, or through a direct interface with the external system such as an application programming interface (API). The court side sensor machine101also will include a processor204for processing sensed data and a transmitter for transmitting data or post-processing results to an external device. The processor204and transmitter205may be part of a computing device that is integral with the court side sensor machine. Alternatively, the system250may include an additional computing device105that includes a transceiver211that will receive data from the court side sensor machine. The additional computing device will include a memory213with programming instructions that, when executed by a processor212, will cause the processor212to analyze the sensor data to measure the linear speed and rotational speed of the ball. The processor212will then use these speed measurements to calculate the heaviness measurement. The processor can output the measured heaviness values on a display214, via an audio output215such as a speaker or audio port, or via transmission to an external device using the transceiver211.
The integral computing device of the court side machine101, the external computing device109, and/or the player's electronic device may include a software application that causes the device to provide a user interface via which the player or another user may enter information that the system will use in its calculations, as well as via which the player or another user may view results. For example, referring toFIG.3, at301the user interface may prompt the user to select the court surface (clay, grass, or hardcourt) and (at302) whether the session is indoors or outdoors. In some embodiments, the system's memory or an external memory may include a database of player profiles, which may store measured data or heaviness value calculations from previous sessions for that player. If so, at303the system may prompt the user to create a profile for, or select a profile of, the individual player that the heaviness assessment system will evaluate in the next session. Optionally, at304the user may also be prompted to choose either match mode or practice mode for the session, and (at305) also to identify the player's opponent for that session. (These data points are for informational purposes, and they may be stored in the player's profile and/or included in a session report when the session is ended and data captured or generated in the session is reported to the player.) Finally, at306the user may use the user interface to activate the session and begin to monitor the court for ball movement. At any time before, during or after the session, the system may prompt the user to indicate whether or not to store the data307, and also to provide an identifier of a device or address to which the user would like the system to send its captured and generated data about the session308. The ordering of the actions shown inFIG.3is only intended to be an example; the steps shown may be performed in any order. In addition, some steps may be omitted and other steps may be added. When used to assess a player's swing during a session, the sensors of the court side system will be positioned, and/or the sensors of the ball will be configured, to capture aspects of the ball's movement in an area through which the ball will travel after the swing. For example, to assess a player's groundstroke, the sensors may be positioned to detect an area that begins on or before the baseline of the court and extends through and beyond the service line of the court. Thus, the sensors can capture and record parameters associated with the ball when it lands in this area. Referring toFIG.4, as a player hits a ground stroke from the baseline, the sensors of the system detect the ball, and at401the processor will use the image data from the sensors (such as radar data and/or a video sequence) to measure the parameters of the ball's movement such as linear speed (in units such as MPH) and rotational speed (in units such as RPM) in real time. To do this, the system may include or have access to a trained object detection model, such as a convolutional neural network (CNN) that has been trained to process information about a scene, detect tennis balls and distinguish tennis balls from other elements of the scene, such as static objects or the ground. Alternatively, or in addition, if the sensor is a camera, the system may use an image processing algorithm such as an edge detection algorithm or a motion detection algorithm to detect and recognize the ball and its movement.
Then, the system may process the video frame by frame, measure the distance that the ball moves and/or the amount that the ball rotates in each consecutive frame, and calculate the velocities using the frame rate of the camera. For example, if the frame rate is 25 frames per second and the ball moves 140 cm per frame, the resulting linear velocity of the ball will be: 140 cm/frame × 25 frames/sec × (1 m/100 cm) = 35 m/sec. Other methods may be used to detect the ball and measure its linear and rotational velocity. For example, the sensors of the system may include microphones or other audio sensors that capture sounds associated with the ball being hit by the player's racket and/or the ball bouncing off of the court. At402, the system will then perform a calculation to determine a heaviness value for the swing based on the physics of translational and rotational kinetic energy, moment of inertia of the ball, the mass of the ball, court surface, and whether the court is indoor or outdoor. The calculations may be based on the physics of translational and rotational kinetic energy as appropriate, while accounting for the court surface (see Brody, Phys. Teach 1984; 22:494-497; Brody H. Phys Teach 2005; 43:503-5; see also Brody, Cross, and Lindsey, The Physics and Technology of Tennis, Racquet Tech Publishing, 2004). For example, the system may use a calculation such as:
Estimated heaviness value = {[(weighting coefficient translational) × (KET)] + [(weighting coefficient rotational) × (KER)]} × (court surface adjustment factor), where:
translational kinetic energy (KET) = ½ × (ball mass bmass) × (linear velocity VL)²;
rotational kinetic energy (KER) = ½ × (moment of inertia) × (angular velocity VA)²;
VL is the ball's linear velocity as measured by the sensors (in m/s);
VA is the ball's angular velocity as measured by the sensors (in rad/s);
bmass is the expected mass of the ball (such as 0.0577 kg, +/−6.07%);
the weighting coefficient translational is, for example, 0.8 (range: 0.5-1.1);
the weighting coefficient rotational is, for example, 1.2 (range: 0.9-2);
the court surface adjustment factor depends on the type of court surface (examples: hard court=1.0 or any value from 0.85 to 1.2; clay court=1.1 or any value from 0.9-1.3; grass court=0.9 or any value from 0.7-1.1); and
the moment of inertia is 0.000032 kg·m².
If different units of measure are used, different weighting coefficients may be used in the calculation. Variations of this calculation are also included in the scope of the invention. The process of steps401and402may be repeated for each ball-to-racquet contact that the sensors detect. Ultimately, at403the heaviness value or values will be displayed on an output display, audibly presented, or otherwise presented to a user. The heaviness value may be output as an absolute number without units, or with a unit label that may be predefined or arbitrarily selected. Heaviness of a player's groundstroke can be measured as a multiple axis rotational amplitude change (i.e. RPM increase) evident on the continuous real-time monitoring by the IMU or other sensors, and coupled with the multiple axis forward acceleration data that correlates with a given amplitude change. Within the continuous real-time data measurement, the independent factors of air friction, humidity, temperature, altitude and court surface are implicitly controlled/adjusted for in their impact on the dependent variable of heaviness.
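A minimal sketch of the example heaviness calculation above follows, using the example coefficients listed in the disclosure. The function and variable names are illustrative, and applying the court surface factor to the weighted sum is one reasonable reading of the formula rather than a definitive implementation.

```python
# Sketch of the example heaviness calculation described above.
# Uses the example coefficients from the text; names are illustrative only.
import math

BALL_MASS_KG = 0.0577          # expected ball mass
MOMENT_OF_INERTIA = 0.000032   # kg*m^2
SURFACE_FACTOR = {"hard": 1.0, "clay": 1.1, "grass": 0.9}

def linear_velocity_from_frames(cm_per_frame, frame_rate_fps):
    """E.g. 140 cm/frame at 25 fps -> 35 m/s."""
    return cm_per_frame * frame_rate_fps / 100.0

def heaviness(linear_velocity_ms, rotational_speed_rpm, surface="hard",
              w_translational=0.8, w_rotational=1.2):
    angular_velocity = rotational_speed_rpm * 2 * math.pi / 60  # rad/s
    ke_translational = 0.5 * BALL_MASS_KG * linear_velocity_ms ** 2
    ke_rotational = 0.5 * MOMENT_OF_INERTIA * angular_velocity ** 2
    weighted = w_translational * ke_translational + w_rotational * ke_rotational
    return weighted * SURFACE_FACTOR[surface]

v = linear_velocity_from_frames(140, 25)   # 35.0 m/s
print(round(heaviness(v, rotational_speed_rpm=2400, surface="clay"), 2))
```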
Each heaviness factor calculation has its own implicit variation that the system may capture and expressly or implicitly include in the heaviness factor function, which can account for the Magnus Effect. The heaviness factor can be captured in the instant before the groundstroke ball strikes the opposing player's racquet. The instant before the opposing player's racquet strikes the ball may be evident in continuous real-time data, as the strike of the ball will create an amplitude and acceleration 'jump' which will visually be demarcated on the output screen as peaks and troughs of amplitude (x axis is time, y axis is amplitude with acceleration overlaid, for example). The output heaviness value will be an estimate for the "heaviness" of the ball based on an approximation of joules of energy. The heaviness value will be assessed for the portion of the player's stroke that occurs in the interval after the ball makes contact with the opponent's side of the court (beyond the service line) and prior to collision with the opposing player's racquet. The real-time heaviness value estimate will be presented to the user as described above. For example, the value may be displayed on an output dashboard on a display of the device. The device may have a WiFi and/or BlueTooth transmitter that can deliver data to an external device (such as a smartphone, a tablet, another external computer, a cloud-based system, a press box data analysis system, etc.). The data may be available in raw and compiled form for stroke analysis within session and between sessions. The system may prompt the user to select a type of stroke before or after recording (such as forehand or backhand), or the system may use a trained artificial intelligence model to analyze video of the stroke and classify the detected stroke type. If the system knows the stroke type through any of these or other methods, the system may separate the resulting data by stroke type. Metrics available for multiple strokes of a player may include, but are not limited to, average or mean linear speed (such as in MPH), average or mean rotational speed (such as in RPM), average heaviness value with Standard Deviation (SD), highest heaviness value, and/or median heaviness value with Interquartile Range (IQR). The data may be used at the discretion of the player, coach, or sport commentator to synthesize match strategies, compare players, compare within-player improvement, and as a training tool.FIG.4contains additional disclosure of the steps and elements listed above. Optionally, at404if multiple heaviness values are captured over a period of time during which the system also captured video of the player's swings, the system may associate each calculated heaviness value with a time stamp. Then, when outputting the heaviness value, the system may use the time stamps to synchronize each heaviness value with the corresponding time-stamps of various frames of the video and present the heaviness values for each stroke along with the corresponding frames of the video for that stroke.
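The synchronization of heaviness values with video frames described at404can be sketched as a simple nearest-frame lookup by timestamp; the frame rate and data layout here are assumptions for illustration.

```python
# Sketch of pairing each time-stamped heaviness value with the nearest video frame.
# Frame rate and data layout are assumed for illustration.
def frame_index_for_timestamp(t_seconds, frame_rate_fps=25.0):
    return round(t_seconds * frame_rate_fps)

def synchronize(heaviness_values, frame_rate_fps=25.0):
    """heaviness_values: list of (timestamp_seconds, heaviness) tuples.
    Returns (frame_index, heaviness) pairs for overlaying on the video."""
    return [(frame_index_for_timestamp(t, frame_rate_fps), h) for t, h in heaviness_values]

strokes = [(3.2, 31.7), (9.8, 28.4), (15.1, 35.2)]
print(synchronize(strokes))  # [(80, 31.7), (245, 28.4), (378, 35.2)]
```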
Various elements of the systems described above can be implemented, for example, using one or more computer systems, such as computer system800shown inFIG.5. Computer system800can be any computer capable of performing the functions described in this document, such as those of the court side machine101, the external computing device105, or the player's electronic device109. Computer system800includes one or more processors (also called central processing units, or CPUs), such as a processor804. Processor804is connected to a communication infrastructure or bus802. Optionally, one or more of the processors804may each be a graphics processing unit (GPU). In various embodiments, a GPU is a processor that is a specialized electronic circuit designed to process mathematically intensive applications. The GPU may have a parallel structure that is efficient for parallel processing of large blocks of data, such as mathematically intensive data common to computer graphics applications, images, videos, etc. Computer system800also includes user input/output device(s)816, such as monitors, keyboards, pointing devices, etc., that communicate with communication infrastructure802through user input/output interface(s)808. Computer system800also includes a main or primary memory806, such as random access memory (RAM). Main memory806may include one or more levels of cache. Main memory806has stored therein control logic (i.e., computer software) and/or data. Computer system800may also include one or more secondary storage devices or memory810. Secondary memory810may include, for example, a hard disk drive812and/or a removable storage device or drive814. Removable storage drive814may be an external hard drive, a universal serial bus (USB) drive, a memory card such as a compact flash card or secure digital memory, a floppy disk drive, a magnetic tape drive, a compact disc drive, an optical storage device, a tape backup device, and/or any other storage device/drive. Removable storage drive814may interact with a removable storage unit818. Removable storage unit818includes a computer usable or readable storage device having stored thereon computer software (control logic) and/or data. Removable storage unit818may be an external hard drive, a universal serial bus (USB) drive, a memory card such as a compact flash card or secure digital memory, a floppy disk, a magnetic tape, a compact disc, a DVD, an optical storage disk, and/or any other computer data storage device. Removable storage drive814reads from and/or writes to removable storage unit818in a well-known manner. According to an example embodiment, secondary memory810may include other means, instrumentalities or other approaches for allowing computer programs and/or other instructions and/or data to be accessed by computer system800. Such means, instrumentalities or other approaches may include, for example, a removable storage unit822and an interface820. Examples of the removable storage unit822and the interface820may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM or PROM) and associated socket, a memory stick and USB port, a memory card and associated memory card slot, and/or any other removable storage unit and associated interface. Computer system800may further include a communication or network interface824. Communication interface824enables computer system800to communicate and interact with any combination of remote devices, remote networks, remote entities, etc. (individually and collectively referenced by reference number828). For example, communication interface824may allow computer system800to communicate with remote devices828over communications path826, which may be wired and/or wireless, and which may include any combination of LANs, WANs, the Internet, etc.
Control logic and/or data may be transmitted to and from computer system800via communication path826. In some embodiments, a tangible, non-transitory apparatus or article of manufacture comprising a tangible, non-transitory computer useable or readable medium having control logic (software) stored thereon is also referred to in this document as a computer program product or program storage device. This includes, but is not limited to, computer system800, main memory806, secondary memory810, and removable storage units818and822, as well as tangible articles of manufacture embodying any combination of the foregoing. Such control logic, when executed by one or more data processing devices (such as computer system800), causes such data processing devices to operate as described in this document. Based on the teachings contained in this disclosure, it will be apparent to persons skilled in the relevant art(s) how to make and use embodiments of this disclosure using data processing devices, computer systems and/or computer architectures other than that shown inFIG.5. In particular, embodiments can operate with software, hardware, and/or operating system implementations other than those described in this document. Terminology that is relevant to this disclosure includes: An “electronic device” or a “computing device” refers to a device or system that includes a processor and memory. Each device may have its own processor and/or memory, or the processor and/or memory may be shared with other devices as in a virtual machine or container arrangement. The memory will contain or receive programming instructions that, when executed by the processor, cause the electronic device to perform one or more operations according to the programming instructions. Examples of electronic devices include personal computers, servers, mainframes, virtual machines, containers, gaming systems, televisions, digital home assistants and mobile electronic devices such as smartphones, fitness tracking devices, wearable virtual reality devices, Internet-connected wearables such as smart watches and smart eyewear, personal digital assistants, cameras, tablet computers, laptop computers, media players and the like. Electronic devices also may include appliances and other devices that can communicate in an Internet-of-things arrangement, such as smart thermostats, refrigerators, connected light bulbs and other devices. Electronic devices also may include components of vehicles such as dashboard entertainment and navigation systems, as well as on-board vehicle diagnostic and operation systems. In a client-server arrangement, the client device and the server are electronic devices, in which the server contains instructions and/or data that the client device accesses via one or more communications links in one or more communications networks. In a virtual machine arrangement, a server may be an electronic device, and each virtual machine or container also may be considered an electronic device. In the discussion above, a client device, server device, virtual machine or container may be referred to simply as a “device” for brevity. Additional elements that may be included in electronic devices are discussed above in the context ofFIG.5. The terms “processor” and “processing device” refer to a hardware component of an electronic device that is configured to execute programming instructions. 
Except where specifically stated otherwise, the singular terms "processor" and "processing device" are intended to include both single-processing device embodiments and embodiments in which multiple processing devices together or collectively perform a process. The terms "memory," "memory device," "computer-readable medium," "data store," "data storage facility" and the like each refer to a non-transitory device on which computer-readable data, programming instructions or both are stored. Except where specifically stated otherwise, the terms "memory," "memory device," "computer-readable medium," "data store," "data storage facility" and the like are intended to include single device embodiments, embodiments in which multiple memory devices together or collectively store a set of data or instructions, as well as individual sectors within such devices. A computer program product is a memory device with programming instructions stored on it. In this document, when terms such as "first" and "second" are used to modify a noun, such use is simply intended to distinguish one item from another, and is not intended to require a sequential order unless specifically stated. The term "approximately," when used in connection with a numeric value, is intended to include values that are close to, but not exactly, the number. For example, in some embodiments, the term "approximately" may include values that are within +/−10 percent of the value. In this document, the term "connected", when referring to two physical structures, means that the two physical structures touch each other. Devices that are connected may be secured to each other, or they may simply touch each other and not be secured. In this document, the terms "communication link" and "communication path" mean a wired or wireless path via which a first device sends communication signals to and/or receives communication signals from one or more other devices. The term "communicatively connected", when referring to two or more devices or systems, means that a communication path exists between the two components. The terms "transmitter", "receiver", and "transceiver" refer to equipment via which devices may communicate along a communication path. The path may be wired, wireless, or a combination of the two. The path may be a direct path, or an indirect path through one or more intermediary components. The network may include or be configured to include any now or hereafter known communication networks such as, without limitation, a BLUETOOTH® communication network, a Z-Wave® communication network, a wireless fidelity (Wi-Fi) communication network, a ZigBee communication network, a HomePlug communication network, a Power-line Communication (PLC) communication network, a message queue telemetry transport (MQTT) communication network, a MTConnect communication network, a cellular network, a constrained application protocol (CoAP) communication network, a representative state transfer application protocol interface (REST API) communication network, an extensible messaging and presence protocol (XMPP) communication network, a cellular communications network, any similar communication networks, or any combination thereof for sending and receiving data.
As such, the network may be configured to implement wireless or wired communication through cellular networks, WiFi, BlueTooth, Zigbee, RFID, BlueTooth low energy, NFC, IEEE 802.11, IEEE 802.15, IEEE 802.16, Z-Wave, Home Plug, global system for mobile (GSM), general packet radio service (GPRS), enhanced data rates for GSM evolution (EDGE), code division multiple access (CDMA), universal mobile telecommunications system (UMTS), long-term evolution (LTE), LTE-advanced (LTE-A), MQTT, MTConnect, CoAP, REST API, XMPP, or another suitable wired and/or wireless communication method. The network may include one or more switches and/or routers, including wireless routers that connect the wireless communication channels with other wired networks (e.g., the Internet). The data communicated in the network may include data communicated via short messaging service (SMS), multimedia messaging service (MMS), hypertext transfer protocol (HTTP), direct data connection, wireless application protocol (WAP), e-mail, smart energy profile (SEP), ECHONET Lite, OpenADR, MTConnect protocol, or any other protocol. When used in this document, terms such as “top” and “bottom,” “upper” and “lower”, or “front” and “rear,” are not intended to have absolute orientations but are instead intended to describe relative positions of various components with respect to each other. For example, a first component may be an “upper” component and a second component may be a “lower” component when a device of which the components are a part is oriented in a first direction. The relative orientations of the components may be reversed, or the components may be on the same plane, if the orientation of the structure that contains the components is changed. The claims are intended to include all orientations of a device containing such components. The features and functions described above, as well as alternatives, may be combined into many other different systems or applications. Various alternatives, modifications, variations or improvements may be made by those skilled in the art, each of which is also intended to be encompassed by the disclosed embodiments.
29,550
11857863
DETAILED DESCRIPTION Skiing and snowshoeing provide wonderful activities for getting exercise and visiting the outdoors. Skiing and snowshoeing provide similar benefits in that participants get exercise and are able to enjoy the outdoors. Disclosed embodiments provide individuals with the benefits of both snowshoes and skis in an efficient and easy to use apparatus. Additionally, disclosed embodiments provide advantages for climbing hills and descending hills that in many cases may surpass the performance of snowshoes and/or skis. Attention is now directed to the Figures, which illustrate various perspective and cross-sectional views of an example apparatus, referred to herein as a shoeski100, that can provide the benefits of both skis and snowshoes. More specifically, as discussed herein, the shoeski100can be arranged and used in a ski mode and one or more snowshoe modes. In the ski mode, the shoeski100allows a user to ski (e.g., slide on or over snow) on the shoeski100. In contrast, the shoeski100allows a user to snowshoe (e.g., walk on top of snow) when the shoeski100is in a snowshoe mode. While the Figures illustrate a single shoeski100, it will be appreciated that a user may use a pair of shoeskis100when skiing and/or snowshoeing. Each shoeski100of a pair may be substantially identical and/or mirror images of one another. As such, a pair of shoeskis100may have a shoeski100for a user's left foot and a shoeski100for the user's right foot. FIGS.1A and1Billustrate top and bottom perspective views of the shoeski100in the ski mode. In the illustrated embodiment, the shoeski100includes a ski102. In the illustrated embodiment, the ski102has a relatively narrow, elongate configuration. The length and width of the ski102may vary from one embodiment to another. The ski102also includes a top surface104and a bottom surface106. As can be seen inFIG.1B, the bottom surface106is generally smooth and planar. In the illustrated embodiment, the ski102also includes opposing ends108,110. One or both of the opposing ends108,110may be curved upwards towards the top surface104of the ski102. The upwardly curved opposing ends108and/or110may facilitate movement of the ski102over the snow as a user skis or snowshoes on the shoeski100. As can be seen inFIGS.1A and1B, an aperture112is formed in the ski102. The aperture112extends through the ski102between the top and bottom surfaces104,106thereof. In the illustrated embodiment, the aperture112has a generally rectangular shape. The illustrated shape of the aperture112is merely exemplary and the aperture112may have other shapes in other embodiments. Nevertheless, the aperture112may have a line of symmetry that extends laterally across the aperture112. Half of the aperture112may be disposed on one side of the line of symmetry (towards the end108) and another half of the aperture112may be disposed on a second side of the line of symmetry (towards the end110). The two halves of the aperture112may be mirror images of one another. In the illustrated embodiment, the shoeski100also includes one or more frame elements114mounted on the top surface104of the ski102. The frame element(s)114may extend at least partially around the sides of the aperture112. A centerboard116is mounted at least partially within the aperture112. The centerboard116may be connected to the ski102and/or the frame element(s)114by way of one or more connection elements. For instance, the centerboard116may be connected to the ski102and/or the frame element(s)114via one or more pivot pins118. 
The one or more pivot pins118may extend through the ski102and/or the frame element(s)114and into the centerboard116. The pivot pin(s)118may be disposed along the line of symmetry of the aperture112. Such a connection may enable the centerboard116to selectively pivot relative to the ski102, as will be discussed in greater detail below. As will be discussed below in connection withFIGS.2A and2B, the centerboard116may also be connected to the ski102and/or the frame element(s)114with one or more locking components to limit or prevent pivoting of the centerboard116relative to the ski102. Similar to the aperture112, the centerboard116may have a line of symmetry that extends laterally thereacross. The line of symmetry of the centerboard may be aligned with or parallel to the line of symmetry of the aperture112. Additionally, the outer size and shape of the centerboard116may be similar or identical to the aperture112. As can be seen inFIGS.1A and1B, the centerboard116has an aperture120therein. Similar to the aperture112, the aperture120extends through the centerboard between opposing sides thereof. As can be seen inFIGS.1A and1B, a binding plate122can be selectively mounted or otherwise disposed within the aperture120. The binding plate122may include or be configured to have a boot binding connected thereto. The boot binding may facilitate the connection of a user's boot to the shoeski100. In some embodiments, the boot binding may be configured to secure a snow boot to the shoeski100. In other embodiments, the boot binding may be configured to secure a cross-country ski boot, a downhill ski boot, a snow boot, or other types of footwear to the shoeski100. As can be seen inFIG.1B, when the shoeski100is in the ski mode, the bottom surfaces of the ski102, the centerboard116, and the binding plate122cooperate to form a generally smooth, planar bottom surface of the shoeski100. That is, each of the ski102, the centerboard116, and the binding plate122has a smooth surface that faces the same direction, and these surfaces cooperate with one another to form the generally smooth, planar surface of the shoeski100when in the ski mode. The generally smooth, planar surface of the shoeski100can be configured to allow a user to ski on or over snow with limited resistance. With continued reference toFIGS.1A and1B, attention is now directed toFIGS.2A and2B, which illustrate one example embodiment of locking components that can be used in the shoeski100to limit or prevent the centerboard116from pivoting relative to the ski102.FIGS.2A and2Billustrate partial cross-sectional views of the shoeski100, withFIG.2Aillustrating the locking components in a locked configuration andFIG.2Billustrating the locking components in an unlocked configuration. In the illustrated embodiment, the centerboard116includes or has mounted thereon a first locking cap124and a second locking cap126. The first and second locking caps124,126are disposed at opposing ends of the centerboard116. The first and second locking caps124,126include receptacles128,130, respectively. The shoeski100also includes first and second locking clips132,134. In the illustrated embodiment, the first and second locking clips132,134are mounted at least partially within the frame element(s)114; however, the first and second locking clips132,134could be mounted at least partially within the ski102. The first and second locking clips132,134can be selectively moved between locked positions (FIG.2A) and unlocked positions (FIG.2B).
As can be seen inFIG.2A, the first and second locking clips132,134extend into the receptacles128,130, respectively, in the first and second locking caps124,126when the first and second locking clips132,134are in the locked position. With the first and second locking clips132,134in the locked position, the centerboard116is locked in place relative to the ski102, thereby preventing the centerboard116from pivoting (about the pivot pin(s)118) relative to the ski102, or vice versa. In contrast, as shown inFIG.2B, the first and second locking clips132,134are retracted from or do not extend into the receptacles128,130, respectively, in the first and second locking caps124,126. In this configuration, the first and second locking clips132,134are in an unlocked position. With the first and second locking clips132,134in the unlocked position, the centerboard116is free to pivot (about the pivot pin(s)118) relative to the ski102. The pivoting of the centerboard116will be discussed in greater detail below. The first and second locking clips132,134may be connected to tabs136,138, respectively. In the illustrated embodiment, the tabs136,138are disposed on top of the frame element(s)114. The tabs136,138are movable towards and away from the centerboard116. When the tabs136,138are moved towards the centerboard116, the first and second locking clips132,134are moved to the locked positions. Conversely, when the tabs136,138are moved away from the centerboard116, the first and second locking clips132,134are moved to the unlocked positions. In some embodiments, the first and second locking clips132,134may be biased to the locked positions (or to the unlocked positions). For instance, a biasing member (e.g., spring) may be positioned adjacent to each of the first and second locking clips132,134and may bias the first and second locking clips132,134toward the locked position. In other embodiments, one or more retention elements may be included to maintain the first and second locking clips132,134in the locked and/or unlocked positions until a predetermined force is applied to move the first and second locking clips132,134to the other position. It will be appreciated that the number, type, and placement of the disclosed locking components used to secure the centerboard116to the ski102are merely exemplary. One or more than two locking components may be used. Additionally, the placement and type of such locking component(s) may vary from one embodiment to another. Attention is now directed toFIGS.3A and3B, which illustrate example securing mechanisms that may be employed to selectively connect the binding plate122to the centerboard116. In the illustrated embodiment, the aperture120in the centerboard116includes a raised boss140and an end of the binding plate122includes a corresponding or mating recess142. The binding plate122may be inserted into the aperture120of the centerboard116such that the boss140is disposed within or mates with the recess142in the binding plate122(as can be seen inFIGS.2A and2B). The boss140and recess142may cooperate to at least partially secure the binding plate122to the centerboard116. The centerboard116and the binding plate122may also include other securing mechanisms to further secure the components together. For instance, the centerboard116may include one or more receptacles144(e.g., that open into the aperture120in the centerboard116) and the binding plate122may include one or more binding plate clips146. 
Similar to the locking clips132,134, the binding plate clips146may be selectively moved between locked and unlocked positions. When in the unlocked position, the binding plate clips146may be retracted into the binding plate122. In contrast, when in the locked position (as shown inFIG.3B), the binding plate clips146may extend at least partially out of the binding plate122(e.g., out of a side surface thereof). The binding plate clips146may be configured to extend into the receptacles144in the centerboard116to secure the binding plate122to the centerboard116. As can be seen inFIGS.3A and3B, the binding plate clips146may include tabs148that extend out of the top surface of the binding plate122. A user may engage the tabs148to move the binding plate clips146between the unlocked and/or locked positions. Similar to the first and second locking clips132,134, the binding plate clips146may be biased (e.g., via a spring) towards the locked or unlocked position. Additional retention elements may be included to selectively maintain the binding plate clips146in the locked or unlocked position unless a predetermined force is applied thereto. The binding plate122may be selectively secured to the centerboard116by engaging the boss140and the recess142and then pivoting the other end of the binding plate122into the aperture120in the centerboard116. Once the binding plate122is positioned within the aperture120, the binding plate clips146may be engaged with the receptacles144in the centerboard116. To remove the binding plate122from the centerboard116, the reverse process can be followed. Attention is now directed toFIG.4, which illustrates the shoeski100in a first snowshoe mode. As can be seen, the ends of the centerboard116have been disconnected from the ski102(e.g., by moving the first and second locking clips132,134to the unlocked positions). Additionally, the centerboard116has been pivoted or rotated about the pivot pin(s)118compared to the ski mode shown inFIGS.1A and1B. In the illustrated embodiment, the centerboard116is illustrated as having been pivoted or rotated about 150°. However, the centerboard116can be rotated more or less than 150°. As can be seen, the now primarily downwardly facing surface of the centerboard116(i.e., the surface of the centerboard116that generally faces in the same direction as the bottom surface106of the ski102) includes a plurality of traction elements150. In the illustrated embodiment, the traction elements150include spikes disposed around the perimeter of the centerboard116. In addition to pivoting the centerboard116, the binding plate122has been remounted to the centerboard116. In particular, the binding plate122has been mounted to the centerboard116so that the boot bindings will be disposed on the side of the centerboard116opposite to the traction elements150. Furthermore, with the centerboard116pivoted as shown, the binding plate122mounts to the centerboard116facing in the opposite direction compared to the ski mode shown inFIGS.1A and1B. In particular, in the ski mode, the binding plate122is mounted so that the end108and a longer portion of the ski102are in front of the user. In contrast, in the snowshoe mode, the binding plate122is mounted so that the end110and a shorter portion of the ski102are in front of the user. When the shoeski100is used in the illustrated snowshoe mode, the centerboard116and connected binding plate122can freely pivot about the pivot pin(s)118, thereby enabling the user to use a snowshoe or walking gait. 
Additionally, the downwardly facing traction elements150can extend into the snow or ground to provide traction, thereby enabling a user to climb hills, etc. In some embodiments, it is desirable to limit the pivoting range of the centerboard116and connected binding plate122relative to the ski102. For instance, it may be desirable to prevent the end110of the ski102from pivoting below the now front ends of the centerboard116and binding plate122. If the end110of the ski102gets caught in snow or below something else, the ski102may try to pivot so that the end108thereof swings up towards the user. To prevent this, the second locking clip134may be moved to the locked position so that it extends into the aperture112of the ski102, as shown inFIG.4. In this configuration, the second locking clip134does not engage with either of the receptacles128,130of the centerboard116as in the ski mode. However, if the end110of the ski102tries to pivot too far down, the second locking clip134will engage the centerboard116and prevent further rotation of the ski102relative to the centerboard116. FIGS.5A and5Billustrate the shoeski100in a second snowshoe configuration. This snowshoe configuration is similar to that ofFIG.4. In contrast toFIG.4, however, the centerboard116has been rotated about the pivot pin(s)118by 180° compared to the ski mode shown inFIGS.1A and1B. That is, the smooth surface of the centerboard116that faced the same direction as the bottom surface106of the ski102in the ski mode, now faces in the opposite direction from the bottom surface106. As a result, the surface of the centerboard116that includes the traction elements150now faces in the same direction as the bottom surface106of the ski102. Also, unlikeFIG.4, the centerboard116has been secured to the ski102in a manner to prevent relative pivoting or rotation therebetween. In particular, the first and second locking clips132,134have been engaged with the second and first receptacles130,128, respectively. Thus, similar to the ski mode, the centerboard116and ski102are connected together to prevent rotation therebetween. However, in the illustrated snowshoe mode, the traction elements150face the same direction as the bottom surface106of the ski102to provide traction with the ground. Attention is now directed toFIG.6, which illustrates a partial cross-sectional view of an example embodiment of a binding plate160. Except as otherwise described, the binding plate160may be substantially the same or similar to the binding plate122. The binding plate160may be mounted to or removed from the centerboard116in the same or similar manner as the binding plate122. In contrast to the binding plate122, the binding plate160includes a release mechanism162. The release mechanism162includes a recess block164. The recess block164includes a recess166that can engage the boss140on the centerboard116in a manner similar to that of the recess142in the binding plate122. The release mechanism162also includes a spring block167, one or more biasing members168, and an adjustment mechanism170. The one or more biasing members168may be disposed between the recess block164and the spring block167. The one or more biasing members168may bias or urge the recess block164away from the spring block167and towards the boss140on the centerboard116. The one or more biasing members168may take a variety of forms, including coil springs. The position of the spring block167may be selectively adjusted using the adjustment mechanism170. 
The adjustment mechanism170may include one or more bolts disposed between a main body portion of the binding plate160and the spring block167. Rotation of the one or more bolts may move the spring block167towards or away from the recess block164. Movement of the spring block167towards the recess block164may increase the biasing force applied by the one or more biasing members168to the recess block164. Conversely, movement of the spring block167away from the recess block164may decrease the biasing force applied by the one or more biasing members168to the recess block164. The release mechanism162may facilitate the release or disconnection of the binding plate160from the centerboard116. For instance, if the user were to fall, the ski102were to get caught on something, or a similar event were to occur, it may be desirable for the binding plate160to disconnect from the centerboard116without requiring intentional action by the user (e.g., moving the tabs148to disengage the binding plate clips146from the receptacles144in the centerboard116). More specifically, the forces from such an event may overcome the biasing force of the biasing members168(e.g., thereby compressing or flexing the biasing members168), which would allow the recess block164to move or pivot away from the boss140and allow the binding plate160to disconnect from the centerboard116. As noted above, the adjustment mechanism170may allow for adjustments to be made to the biasing force applied by the biasing members168. As the biasing force is reduced, the binding plate160can be released from the centerboard116with less force. In contrast, as the biasing force is increased, more force is necessary to release the binding plate160from the centerboard116. Attention is now directed toFIG.7, which illustrates a cross-sectional view of another embodiment of a centerboard116. The centerboard116ofFIG.7may be the same as or similar to the other centerboards116discussed herein. In the embodiment ofFIG.7, the centerboard116also includes one or more spring-loaded rods172connected thereto. The spring-loaded rods172(or a portion thereof) may be positioned in an undeployed state (shown in solid lines) or a deployed state (shown in dashed lines). As shown, in the undeployed state, the spring-loaded rods172(or a portion thereof) may be pivoted, folded, or otherwise retracted into or flush with a portion of the shoeski100. As also shown, in the deployed state, the one or more spring-loaded rods172(or a portion thereof) may be pivoted, folded, or otherwise extended from or out of the shoeski100. In the undeployed state, the spring-loaded rods172may not inhibit the functioning of the shoeski100. That is, the spring-loaded rods172may not inhibit the shoeski100from sliding over snow. In contrast, when in the deployed state, the spring-loaded rods172may help to restrict the shoeski100from sliding over the snow. In some embodiments, the spring-loaded rods172may be biased towards the deployed state. In some embodiments, such as that shown inFIG.7, the spring-loaded rods172may be connected to the centerboard116(e.g., near the raised boss140). When the binding plate122is connected to the centerboard116, the binding plate122(or a portion thereof, such as the recess142) may engage the spring-loaded rods172and move the spring-loaded rods172from the deployed state to the undeployed state. 
Conversely, when the binding plate122is disconnected from the centerboard116, the binding plate122may disengage from the spring-loaded rods172and allow the spring-loaded rods172to move to the deployed state. In the deployed state, the spring-loaded rods172may raise the ski102partially off of the snow or otherwise interact with the snow to limit or prevent the shoeski100from sliding over the snow. For instance, if a user falls and the binding plate122becomes disconnected from the centerboard116, the spring-loaded rods172may prevent the rest of the shoeski100from sliding away from the user. Disclosed embodiments can be made from carbon fiber or a plastic material but are not limited to these materials. Disclosed embodiments can be made from any material in the industry that fits the application. Disclosed embodiments can be made by injection molding but are not limited to injection molding. They can be made by any industry-standard process that allows them to function. The thickness, width, and length may vary based upon the end-user's size and weight. In light of the disclosure herein, it will be appreciated that an apparatus for traveling across snow may include a ski, a centerboard, and a binding plate. The ski may have opposing ends, a top surface, a bottom surface, and an aperture extending therethrough between the top and bottom surfaces. The centerboard may be pivotally connected to the ski and disposed at least partially within the aperture in the ski. The binding plate may be configured to have a boot binding connected thereto. In some embodiments, one or both of the opposing ends of the ski comprise curved tips. In some embodiments, the apparatus also includes one or more locking mechanisms configured to selectively limit or prevent the centerboard from pivoting relative to the ski. In some embodiments, the one or more locking mechanisms comprise one or more locking clips mounted on the ski and one or more associated receptacles in the centerboard, the one or more locking clips being selectively insertable into or removable from the one or more associated receptacles to prevent or allow the centerboard to pivot relative to the ski. In some embodiments, the centerboard includes a first side having a generally smooth, planar surface. In some embodiments, the centerboard includes a second side having one or more traction elements, the second side being opposite to the first side. In some embodiments, the binding plate is selectively connectable to and removable from the centerboard. In some embodiments, the binding plate is connected to either a first side of the centerboard or a second side of the centerboard. In some embodiments, the centerboard comprises an aperture extending therethrough, the binding plate being selectively mountable within the aperture in the centerboard. In some embodiments, the aperture in the centerboard comprises a raised boss and the binding plate comprises a corresponding recess. In some embodiments, the binding plate comprises one or more locking clips and the centerboard comprises one or more receptacles for selectively receiving the one or more locking clips to connect the binding plate to the centerboard. In one example embodiment, an apparatus for traveling across snow includes a ski having a top surface, a bottom surface, and an aperture extending therethrough between the top and bottom surfaces. 
The apparatus also includes a centerboard connected to the ski and disposed at least partially within the aperture in the ski, the centerboard and the ski being selectively reconfigurable between a ski mode and at least one snowshoe mode. In some embodiments, the centerboard is pivotally mounted within the aperture in the ski. In some embodiments, the apparatus also includes a binding plate configured to have a boot binding connected thereto. In some embodiments, the centerboard comprises an aperture therethrough, the binding plate being selectively mountable within the aperture in the centerboard. In some embodiments, the centerboard comprises a first side having a generally planar surface and an opposing second side having one or more traction elements thereon. In some embodiments, the binding plate is selectively mountable in the aperture of the centerboard such that a boot binding connected to the binding plate can be disposed on either the first side or the second side of the centerboard. In another example embodiment, a method for traveling across snow includes providing an apparatus that can be used as either a ski or a snowshoe, selectively configuring the apparatus into a ski mode, and selectively reconfiguring the apparatus into a snowshoe mode. In some embodiments, selectively configuring the apparatus into a ski mode comprises arranging two or more elements of the apparatus to form a generally smooth bottom surface. In some embodiments, selectively reconfiguring the apparatus into a snowshoe mode comprises allowing a first component of the apparatus to pivot relative to a second component, the first component having a first side with a smooth surface and a second side with one or more traction elements, the second component having a bottom surface that is smooth. The present invention may be embodied in other specific forms without departing from its spirit or characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
26,557
11857864
DETAILED DESCRIPTION OF SOME EMBODIMENTS In the following description, various aspects of the disclosure will be described. For the purpose of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the different aspects of the disclosure. However, it will also be apparent to one skilled in the art that the disclosure may be practiced without specific details being presented herein. Furthermore, well-known features may be omitted or simplified in order not to obscure the disclosure. In order to avoid undue clutter from having too many reference numbers and lead lines on a particular drawing, some components will be introduced via one or more drawings and not explicitly identified in every subsequent drawing that contains that component. Throughout the figures of the drawings, different superscripts for the same reference numerals are used to denote different embodiments of the same elements. Embodiments of the disclosed devices and systems may include any combination of different embodiments of the same elements. Specifically, any reference to an element without a superscript may refer to any alternative embodiment of the same element denoted with a superscript. Reference is now made toFIGS.1-4.FIG.1shows a user wearing a mobility enhancement system100, according to some embodiments.FIG.2Ashows a schematic plan view of a motorized walking enhancement platform102, according to some embodiments.FIG.2Bshows a schematic plan view of the motorized walking enhancement platform102ofFIG.2A, hiding some components thereof and emphasizing components of a drive assembly124, for clarity.FIG.2Cshows a schematic plan view of the motorized walking enhancement platform102ofFIG.2A, hiding some components thereof and emphasizing components of pneumatic/hydraulic sub-systems, such as a pneumatic/hydraulic lever height regulator270and a pneumatic/hydraulic braking system194, both sharing a common actuator182.FIG.2Dshows a schematic plan view of the motorized walking enhancement platform102ofFIG.2A, hiding some components thereof and emphasizing components of a pneumatic/hydraulic braking system194, for clarity.FIG.2Eshows a schematic plan view of the motorized walking enhancement platform102ofFIG.2A, hiding some components thereof and emphasizing components of a pneumatic/hydraulic lever height regulator270, for clarity. FIG.3shows a view in perspective of a motorized walking enhancement platform102, according to some embodiments.FIGS.4A and4Bshow side views of a motorized walking enhancement platform102with lever height regulators in various states, according to some embodiments. The terms "including" and/or "having", as used herein, are defined as comprising (i.e., open language). As shown inFIG.1, a motorized walking system100includes two motorized walking enhancement platforms102, each of which is attachable to one of the legs of a user (e.g., a walker), when walking over a relatively flat ground20. A first motorized walking platform102can be worn on the left foot of the user, and a second motorized walking platform102can be worn on the right foot of the user. Hereinafter, a single motorized walking platform102will be described, simply for ease of discussion and illustration. However, the features to be described for the single motorized walking platform102may be applied to the left motorized walking platform as well as the right motorized walking platform. 
As further shown inFIGS.2A-3, each motorized walking platform102comprises a base frame104on which the shoe of the user may be positioned. The motorized walking platform102may be attached to the shoe of the user by various attachment means, such as at least one adjustable foot strap118. In some embodiments, the foot strap118comprises a lateral strap section117and a longitudinal strap section119, so as to support the walker's shoe in both sideways and frontal directions. The foot strap118can be adjustable to accommodate different sizes and types of shoes, different user preferences for tightness, and the like. Additional straps, such as an ankle strap120and a shin/calf strap188, can be utilized to enhance the platform's102stability and attachment to the user's leg. The base frame104extends between a front frame portion110and a rear frame portion112, and comprises an upper frame surface106facing upward, toward the torso of the user, and a lower frame surface108facing downward, toward the ground. The upper frame surface106and the lower frame surface108are not necessarily flat, and may each include curved or otherwise uneven portions that may be designed to accept various components or articles thereon. Generally, the upper frame surface106is designed to accept the sole of a user's shoe, while the lower frame surface108may include various attachments to mechanical and/or electrical components of the motorized walking enhancement platform102. The term "lower", as used herein, refers to a side of a device or a component of a device facing the ground20. The term "upper", as used herein, refers to a direction facing away from the ground20, for example toward the torso of a user wearing the motorized walking system100. The term "flat", as used herein, refers to a surface that is without significant projections or depressions. The base frame104may be formed of a relatively solid material, which can be uniformly formed from one component, made for example from metallic or relatively rigid polymeric materials, or from several parts rigidly attached to each other to form together a substantially stiff frame structure. The motorized walking platform102further comprises a drive assembly124attached to the base frame104, configured to enable assisted rolling of the base frame104during standard walking movement of the user. As further emphasized inFIG.2B, the drive assembly comprises at least two pairs of primary wheels130, such as a pair of front primary wheels130aand a pair of rear primary wheels130b. The primary wheels130are configured to be powered so as to rotate about their lateral axes128, when the motorized walking platform102is turned on to an assisted rolling state, as will be elaborated below. The drive assembly124comprises at least two lateral sub-assemblies126, wherein each lateral sub-assembly126comprises a corresponding pair of primary wheels130affixed to both sides of an axle132extending laterally therebetween, such that the primary wheels130are configured to rotate along with the axle132. Specifically, a front lateral sub-assembly126acomprises a front axle132aextending along a front lateral axis128a, coupled to the front primary wheels130aat both sides thereof. Similarly, a rear lateral sub-assembly126bcomprises a rear axle132bextending along a rear lateral axis128b, coupled to the rear primary wheels130bat both sides thereof. 
The drive assembly124further comprises a drive line136, longitudinally disposed along a longitudinal axis138between the front lateral sub-assembly126aand the rear lateral sub-assembly126b. The drive line136comprises a motor140positioned between the front and rear axles132aand132b, respectively, a pair of speed reduction units144positioned on both sides of the motor140, and a pair of longitudinal shaft members146extending between each one of the speed reduction units144and a corresponding axle132. The term "longitudinal", as used herein, refers to a direction, orientation, or measurement that is parallel to the longitudinal axis138. When expressed in relation to a direction of walking or movement of a user (e.g., a walker), the term "longitudinal" refers to a direction that is parallel to the longitudinal axis138when all of the primary wheels130of the motorized walking enhancement platform102are in contact with the ground20. The term "lateral", as used herein, refers to a direction, orientation, or measurement that is perpendicular to the longitudinal axis138, and is parallel to the lateral axes128. The forward direction26represents the direction of advancement along the longitudinal direction. The motor140comprises a motor shaft142protruding longitudinally from both sides of the motor140. Each side of the motor shaft142is coupled to a speed reduction unit144. For example, a front portion of the motor shaft142is coupled to a front speed reduction unit144a, and a front longitudinal shaft member146ais coupled to the opposite side of the front speed reduction unit144a, extending toward the front axle132a. Similarly, a rear portion of the motor shaft142is coupled to a rear speed reduction unit144b, and a rear longitudinal shaft member146bis coupled to the opposite side of the rear speed reduction unit144b, extending toward the rear axle132b. The motor shaft142and the longitudinal shaft members146may be arranged longitudinally, either coaxially along or in parallel to the longitudinal axis138. Since both lateral sub-assemblies126are coupled to the same drive line136, the motor140is configured to simultaneously drive the front and rear primary wheels130a,130bat the same speed at any instant. In some implementations, the motor140is a brushless DC (BLDC) motor. The motor140may further be a slotted or slotless BLDC motor. Each speed reduction unit144can include, in some implementations, one or more planetary gear arrangements or other suitable gear reducer assembly arrangements linking the motor to the corresponding lateral sub-assembly126. In still other implementations, gearing arrangements other than planetary reduction gear assemblies could be employed within the speed reduction units144, such as a harmonic gear arrangement. Advantageously, a planetary or harmonic gear arrangement may provide additional torque. Preferably, the drive assembly124is driven by a small and relatively lightweight motor, such as an efficient BLDC motor using speed reduction units144to create the high torque required to start the mobility enhancement system100smoothly under the load of a user wearing the platforms102. BLDC motors have higher torque and power densities than brushed motors, yielding more torque and power in a smaller and lighter package. This significantly reduces the size of the motor compared to utilization of brushed DC motors. As shown, the motor140may be positioned on the lower frame surface108, between the front axle132aand the rear axle132b. 
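To make the role of the speed reduction units144concrete, the short calculation below estimates the torque available at an axle132and the resulting tractive force at a primary wheel130for a hypothetical small BLDC motor. The motor torque, reduction ratio, gear efficiency, wheel radius, and motor speed are illustrative assumptions only; none of these values is given in this disclosure.

# Illustrative calculation only; all numeric values are assumptions, not taken from this disclosure.
motor_torque_nm = 0.25    # continuous torque of a small BLDC motor (assumed)
reduction_ratio = 50.0    # overall ratio of a planetary or harmonic speed reduction unit (assumed)
gear_efficiency = 0.85    # mechanical efficiency of the gearing (assumed)
wheel_radius_m = 0.05     # radius of a primary wheel (assumed)
motor_speed_rpm = 5000.0  # motor speed (assumed)

axle_torque_nm = motor_torque_nm * reduction_ratio * gear_efficiency
tractive_force_n = axle_torque_nm / wheel_radius_m
wheel_speed_rpm = motor_speed_rpm / reduction_ratio
ground_speed_m_s = wheel_speed_rpm / 60.0 * 2.0 * 3.14159 * wheel_radius_m

print(f"axle torque: {axle_torque_nm:.1f} N*m, tractive force: {tractive_force_n:.0f} N")
print(f"wheel speed: {wheel_speed_rpm:.0f} rpm, ground speed: {ground_speed_m_s:.2f} m/s")

The trade visible in this sketch is the point of the gearing: the same small motor output yields roughly a fifty-fold increase in torque at the axle, at the cost of a fifty-fold reduction in wheel speed.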
The motor load allocated to the rear lateral sub-assembly126bmay be significantly lower than the motor load allocated to the front lateral sub-assembly126aat all phases of a walker's step. Thus, in some embodiments, each of the front speed reduction unit144aand the rear speed reduction unit144b, and/or each of the front non-differential transmission mechanism148aand the rear non-differential transmission mechanism148b, is configured to allow differentiation and optimization of the torque transferred from the motor140to the respective lateral sub-assembly126. According to some embodiments, the motor140comprises a plurality of motor units, such as a plurality of micro BLDC motors, that can be serially mounted on a single motor shaft142, with appropriate speed reduction units144mounted on both sides of the motor shaft142. The drive assembly124further comprises at least two non-differential transmission mechanisms148, configured to translate the rotational movement of the drive line136about its longitudinal axis138, to a rotational movement of the axles132about their lateral axes128. Specifically, a front non-differential transmission mechanism148ais configured to translate rotational movement of the front longitudinal shaft member146ato a rotational movement of the perpendicularly oriented front axle132a, which in turn rotates the front primary wheels130a. Similarly, a rear non-differential transmission mechanism148bis configured to translate rotational movement of the rear longitudinal shaft member146bto a rotational movement of the perpendicularly oriented rear axle132b, which in turn rotates the rear primary wheels130b. The motorized walking platform102further comprises a control circuitry154, configured to control at least the functionality of the motor140, and optionally additional components of the motorized walking platforms102. The control circuitry154can be coupled to the motor140and other components of the motorized walking platforms102via at least one transmission line160, configured to deliver signals between the control circuitry154and such components. The at least one transmission line160may be further configured to deliver power, originating from a power source184, to energize the electric components of the motorized walking enhancement platforms102. According to some embodiments, the control circuitry154comprises a processor (not shown), which may be configured for processing and interpreting sensed signals received from various sensors, as further elaborated below, and configured to control various functionalities of components of the motorized walking enhancement platforms102, via the control circuitry154. According to some embodiments, the processor may include software for interpreting sensed signals. According to some embodiments, the motorized walking platform102further comprises a communication unit156, which comprises a wireless communication component such as a transmitter, a receiver, and/or a transceiver, configured to wirelessly transmit signals to, and/or receive signals from, a remote-control device60. The communication unit156may be provided as an integral part of the control circuitry154, or as a separate component in communication with the control circuitry154, for example via at least one transmission line160. 
According to some embodiments, the motorized walking platform102further comprises an ergonomic rear extension116, which may be formed as a rigid curved vertical extension, extending upward from the rear frame portion112, configured to provide adequate support to the backside of the shoe and the user's foot. The ankle strap120may extend from the rear extension116. According to some embodiments, the motorized walking platform102further comprises an ergonomic leg brace186, which may be coupled to a user's leg via the shin/calf strap188, and may house a power source184therein (seeFIG.4A), such as a battery or a plurality of batteries. The battery184can be a rechargeable battery, and can be coupled to electrical components of the motorized walking platforms102, such as the control circuitry154, the motor140, etc., via a power transmission cable158. In some embodiments, the power source184can include a plurality of batteries. According to some embodiments, the power source184can include replaceable batteries. In some implementations, the power transmission cable158may extend through the rear extension116, in which case the rear extension116can further serve to support and guide the lower portion of the power transmission cable158. Distancing the power source184away from the base frame104, such as by placing it in a leg brace186secured to the shin/calf of the user, advantageously enables reduction of the overall weight carried by the walker's foot. The distribution of weight of each motorized walking enhancement platform102, and the reduced weight carried by, or coupled to, the respective base frame104, provides for more natural and agile user movement and improves stability. While the control circuitry154and/or the communication unit156are illustrated throughout the figures attached to the base frame104, alternative configurations are contemplated, in which either the control circuitry154and/or the communication unit156may be comprised within the leg brace186. In such configurations, the transmission cable158may serve not only as a power transmission cable, but also as a unidirectional or bi-directional signal transmission line. As mentioned hereinabove, the functionality of the mobility enhancement system100can be controlled by a remote-control device60, which can be a handheld device utilized to wirelessly communicate with the communication unit156. An exemplary remote-control device60may be provided as a dedicated hand-held or hand-wearable device for communicating with the communication unit156, or as a commercially available mobile device such as a smartphone, a tablet, a smart watch and the like, which may include software commands for communicating with the communication unit156. The remote-control device60includes at least one wireless communication component (not shown) such as a transmitter, a receiver, and/or a transceiver, configured to wirelessly transmit signals to, and/or receive signals from, the communication unit156. According to some embodiments, the communication unit156and/or the remote-control device60are configured to transmit and/or receive signals to and/or from each other using one or more communication protocols such as Bluetooth, RF, LORA, Zigbee, Z-Wave, Near Field Communication (NFC), or the like. According to some embodiments, the remote-control device60further comprises an input interface61, such as buttons, sliders, a keyboard, an on-screen keyboard, a keypad, a touchpad, a touch-screen and the like. 
The input interface61enables the user to turn on or off the motorized walking enhancement platforms102, as well as to optionally set various personal attributes such as desired rolling speed and the like. Commands from the remote-control device60may be simultaneously transmitted in real-time to both motorized walking enhancement platforms102, for example—received by a communication unit156of each of the walking enhancement platforms102, which may in turn translate to signals sent by the corresponding control circuitries154of both platforms102, to facilitate rotation of the primary wheels130of both platforms102at the same speed. According to some embodiments, the remote-control device60may allow a user to set various commands to operate and control the motorized walking enhancement platforms102, such as turning on or off, setting up the desired rolling speed, the rates of acceleration and/or deceleration, and the like. Preferably, the remote-control device60may be operated in a simplified manner without requiring the user to look at it during operation thereof. Moreover, the user is not required to further manipulate or hold the remote-control device60during walking, as long as no further change in parameters is desired. According to some embodiments, the remote-control device60further comprises a display (not shown), serving as a visual interface configured to display information which may include, for example, alerts, recommendations, and the like. An application of a remote-control device60(e.g., a smartphone app) can include additional features to improve user's experience, such as navigation assistance, route planning, and integration with urban transport services. The application can further provide battery level indication and real-time speed display functionalities. The connectivity of the motor140, via the drive line136, to both lateral sub-assemblies126via the non-differential transmission mechanisms148, ensures that the entire drive assembly124acts as a single uniform drive-train configured to rotate all primary wheels130at the same uniform speed. The non-differential transmission mechanisms148ensure that the primary wheels130are configured to move only in the longitudinal direction, thereby simplifying the structure of the motorized walking enhancement platform102and potentially reducing the overall weight thereof. While the motorized walking enhancement platform102described herein, includes two lateral sub-assemblies126, each provided with a couple of primary wheels130, other implementations may include more than two couples of primary wheels130, as long as the motor140is coupled, directly or indirectly, to all of the lateral sub-assemblies126, and is configured to drive all of the primary wheels130in unison at the same speed. Thus, any change in rotational speed of any one of the lateral sub-assemblies126is immediately reflected to the other lateral sub-assembly126via the drive assembly124. The control circuitry154is configured to detect any change in the rotational speed of any component of the drive assembly124, such as any one of the lateral sub-assemblies126, the longitudinal shaft members146and/or the motor shaft142. Once such deviation in the rotational speed is detected, the control circuitry154is further configured to provide appropriate signals to the motor140so as to counter the detected change and ensure that the drive assembly124reverts back to the desired rotational speed. 
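One way to picture the speed-holding behavior described above is as a fast feedback loop: measure the rotational speed of any component of the drive assembly124, compare it with the speed set via the remote-control device60, and command the motor140to counter the deviation. The sketch below is a minimal, self-contained illustration of such a loop using a proportional-integral rule with a crude simulated wheel; the control law, gains, update period, and all names are assumptions for illustration and are not specified in this disclosure.

# Minimal sketch of closed-loop speed regulation; an assumed PI control law and
# a crude simulated wheel stand in for the real sensor and motor interfaces.
TARGET_SPEED = 1.2            # pre-set rolling speed, arbitrary units (assumed)
KP, KI, DT = 0.8, 0.2, 0.01   # proportional gain, integral gain, update period (assumed)

speed = 0.0      # simulated rotation speed as reported by the speed sensor
integral = 0.0

for _ in range(500):                        # about 5 s of simulated regulation
    error = TARGET_SPEED - speed            # deviation from the desired speed
    integral += error * DT                  # accumulated deviation
    torque_command = KP * error + KI * integral
    speed += 0.05 * torque_command          # crude stand-in for the wheel responding to motor torque
    speed -= 0.002                          # crude stand-in for a skid-force disturbance

print(f"regulated speed: {speed:.2f} (target {TARGET_SPEED})")

The essential property is only that the correction runs much faster than a stride, so the walker perceives a constant rolling speed.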
In this manner, the revolving speed of all primary wheels130of each platform102is controlled to remain constant and uniform between both platforms102, such that the walker's longitudinal balance is maintained, as if walking on a stable planar surface that travels at a constant speed relative to the ground in the direction of walking. The control circuitry154can increase or decrease the amount of power supplied to the motor140, which may affect the speed at which the primary wheels130of the motorized walking platform102rotate. According to some embodiments, the drive assembly124comprises at least one rotation speed sensor190, configured to continuously measure the rotational speed of at least one component of the drive assembly124. According to some embodiments, at least one rotation speed sensor190is coupled to the drive line136. According to some embodiments, at least one rotation speed sensor190is coupled to the motor shaft142, such as the rotation speed sensors190aillustrated on both sides of the motor140inFIG.2A. While two rotation speed sensors190aare illustrated, mounted over or otherwise attached to the motor shaft142on both sides of the motor140, it will be clear that a single rotation speed sensor190amay suffice. Nevertheless, in some implementations, providing more than a single rotation speed sensor may be beneficial for purposes of redundancy. While not specifically illustrated, other rotating components of the drive line136may include at least one rotation speed sensor190, instead of or in addition to the rotation speed sensor190aof the motor shaft142. For example, in some embodiments, at least one rotation speed sensor190can be mounted over or otherwise attached to at least one longitudinal shaft member146, such as the front longitudinal shaft member146aor the rear longitudinal shaft member146b. Moreover, while the rotation speed sensors190aare shown inFIG.2Ato be mounted over or otherwise attached to the portions of the motor shaft142protruding from the motor140, in some embodiments, at least one rotation speed sensor190can be encompassed within the motor140, for example by being mounted over or otherwise attached to a portion of the motor shaft142extending through the motor140. According to some embodiments, at least one lateral sub-assembly126comprises at least one rotation speed sensor190. According to some embodiments, at least one rotation speed sensor190is mounted on or otherwise attached to at least one axle, such as the rotation speed sensors190billustrated on both sides of the front axle132aand the rear axle132binFIG.2A. It will be clear that two rotation speed sensors190aand four rotation speed sensors190bare shown inFIG.2Atogether for purposes of illustration only, and that in most cases, a single or a couple of rotation speed sensors190may suffice. In fact, since any change in the rotational speed of any component of the drive assembly124is reflected in any other component of the drive assembly124, it may be sufficient to place a rotation speed sensor190over or attached to any rotating component of the drive assembly124. Nevertheless, a combination of more than one rotation speed sensor190may be desired for redundancy. The at least one rotation speed sensor190is electronically coupled to the control circuitry154, for example via at least one transmission line160, and is configured to generate a signal commensurate with the rotation speed of the component it is coupled to, which in turn is commensurate with the rotation speed of the primary wheels130. 
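Where the rotation speed sensor190is a pulse-counting encoder on the motor shaft142(encoder implementations are discussed next), the raw output is a count rather than a speed. The sketch below shows one plausible conversion from a pulse count to a wheel-speed estimate; the encoder resolution, sampling window, reduction ratio, and example count are assumptions and are not specified in this disclosure.

# Illustrative conversion of an encoder pulse count into a wheel-speed estimate.
# All numeric values are assumptions, not taken from this disclosure.
PULSES_PER_REV = 1024      # encoder resolution on the motor shaft (assumed)
SAMPLE_WINDOW_S = 0.01     # sampling window (assumed)
REDUCTION_RATIO = 50.0     # motor shaft revolutions per axle revolution (assumed)

pulses_counted = 850       # pulses counted during one window (example value)

shaft_rev_per_s = pulses_counted / (PULSES_PER_REV * SAMPLE_WINDOW_S)
wheel_rev_per_s = shaft_rev_per_s / REDUCTION_RATIO

print(f"motor shaft: {shaft_rev_per_s:.1f} rev/s, primary wheels: {wheel_rev_per_s:.2f} rev/s")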
According to some embodiments, the at least one rotation speed sensor190comprises an absolute encoder or an incremental encoder. According to some embodiments, the at least one rotation speed sensor190comprises an optical encoder. According to some embodiments, the at least one rotation speed sensor190comprises a mechanical encoder. According to some embodiments, the at least one rotation speed sensor190comprises a magnetic encoder. According to some embodiments, the at least one rotation speed sensor190comprises a capacitance encoder. In use, the at least one rotation speed sensor190provides feedback corresponding to the actual momentary rotation speed of the primary wheels130to the control circuitry154. The control circuitry154is configured to compare the actual momentary speed to a predefined threshold that can be set by the remote-control device60. If the actual measured rotational speed is either lower or higher than the predefined threshold, corresponding to the desired pre-set rotation speed, the control circuitry154provides controlling signals configured to readjust the motor's140rotation torque, to revert the rotation speed of the primary wheels130back to the pre-set desired speed. Readjustment of the motor's140speed includes the ability of the control circuitry154to either accelerate or decelerate the motor140. Preferably, the momentary speed is sensed by the at least one rotation speed sensor190and neutralized by the control circuitry154via the motor140at a frequency which is sufficiently fast, so that the rotational motion of the primary wheels130is readjusted on the fly in a manner which is transparent to the walker, thereby ensuring that the walker's longitudinal balance is maintained at all times. According to some embodiments, the time period including the steps of acquiring signals from the at least one speed sensor190, and counterbalancing the rotation torque of the motor140by the control circuitry154so as to counter any potential change in the rolling speed of the primary wheels130, is equal to or lower than 0.05 seconds. According to some embodiments, the time period including the steps of acquiring signals from the at least one speed sensor190, and counterbalancing the rotation torque of the motor140by the control circuitry154so as to counter any potential change in the rolling speed of the primary wheels130, is equal to or lower than 0.01 seconds. According to some embodiments, the motorized walking enhancement platform102further comprises a pair of secondary wheels162. Each secondary wheel162is coupled to the base frame104by a lever164, wherein the vertical position of each secondary wheel162relative to the base frame104is displaceable via a lever height regulator170, as shown inFIG.2E. The term "vertical", as used herein, refers to a direction which is substantially orthogonal to the surface defined by the base frame104, such as the upper frame surface106or the lower frame surface108. Otherwise stated, the term "vertical" refers to a direction orthogonal both to the longitudinal axis138and the lateral axes128. The lever164may be provided as a rigid pivotable arm, attached to the secondary wheel162at a lever free end, and to the base frame104at a lever hinged end166. In some embodiments, the lever hinged end166can be hinged, for example to the lower frame surface108, via hinge180, which can be an H-hinge as illustrated inFIG.2E, or any other type of hinge configured to enable the lever164to pivot about its lever hinged end166. 
According to some embodiments, the lever free end168may be L-shaped, as illustrated inFIG.2E, to extend sideways away from the edge of the base frame104, so as to offset the secondary wheel162attached thereto away from the side-edge of the base frame104. This may ensure that the secondary wheels162do not contact the frame104, for example while being displaced vertically. According to some embodiments, the lever height regulator170may be attached to the base frame104at a height regulator upper connection point176, and to the lever164at a height regulator lower end178. While the position of the height regulator upper connection point176remains immovable relative to the base frame104at all times, the vertical position of the height regulator lower end178may change relative to the height regulator upper connection point176. Since the secondary wheel162is attached to the lever free end168, and since the lever164is attached in turn to the lever height regulator170, any change in the vertical position of the height regulator lower end178translates to a pivotable movement of the lever164about the lever hinged end166, which in turn translates to vertical displacement of the secondary wheel162. According to some embodiments, the lever height regulator170comprises a pneumatic/hydraulic drive unit270, as shown inFIGS.4A-4B. The pneumatic/hydraulic drive unit270can include a pneumatic/hydraulic piston274vertically movable through a pneumatic/hydraulic cylinder272. The pneumatic/hydraulic cylinder272may be attached to the base frame104at the pneumatic/hydraulic cylinder connection point276, which is the equivalent of the height regulator upper connection point176, while the pneumatic/hydraulic piston274may be connected to the lever164at the pneumatic/hydraulic piston lower end278, which is the equivalent of the height regulator lower end178. The term "pneumatic/hydraulic", as used herein for any component or system, means that the component or system can be implemented either as a pneumatic or as a hydraulic component or system. In some embodiments, the motorized walking enhancement platform102further comprises a pair of actuators182, wherein each actuator182, which can be a pneumatic/hydraulic actuator, is coupled to a corresponding lever height regulator170, for example via a pneumatic/hydraulic lever transmission line260, and is configured to control the vertical position of the height regulator lower end178. Each actuator182can be controllably coupled, for example via a pneumatic/hydraulic lever transmission line260, to a corresponding lever height regulator170, such as a pneumatic/hydraulic drive unit270. In some embodiments, each actuator182can further include an actuator sub-controller183, configured to control the operation of the actuator182, for example by diverting the appropriate amount of pneumatic/hydraulic fluid for operating hydraulic/pneumatic pistons attached to the actuator182. The pneumatic/hydraulic lever transmission line260may serve as a conduit for transmitting pneumatic/hydraulic fluid to and from the pneumatic/hydraulic drive unit270. The control circuitry154may be controllably coupled to the actuator182, for example via transmission lines160, to control the functionality of the actuators182, potentially in communication with the actuator sub-controller183, thereby controlling the vertical position of the secondary wheels162. According to some embodiments, the motorized walking enhancement platform102may further comprise a pair of side extensions114extending upward from the base frame104. 
The side extensions114can be either integrally formed with the base frame104, or separately formed and affixed to the sides of the base frame104. In some embodiments, the side extensions114may be aligned with the foot strap118, such that the lateral strap section117may extend therefrom. In some embodiments, the side extensions114may be aligned with the lever height regulators170, and may include openings through which the lever height regulators170, such as the pneumatic/hydraulic drive units270, may extend, thereby protecting them from external obstacles. According to some embodiments, the pneumatic/hydraulic drive unit270is retained in a retracted state (shown inFIG.4A) while the motorized walking enhancement platform102is not in contact with the ground20, and is configured to move the secondary wheels162downward to a lowered state (shown inFIG.4B) when the motorized walking enhancement platform102contacts the ground, bringing the secondary wheels162into contact with the ground20in this state. According to some embodiments, the primary wheels130are disposed on both sides of the base frame104, having a diameter large enough to extend at their uppermost edges upward relative to the upper frame surface106. Advantageously, this configuration provides a lower and wider foothold, thereby enhancing lateral stability of the motorized walking enhancement platform102over the ground20. According to some embodiments, the diameter of the secondary wheels162is smaller than the diameter of the primary wheels130. According to some embodiments, the motorized walking enhancement platform102further comprises at least one pressure sensor192. According to some embodiments, the front lateral sub-assembly126acomprises at least one front pressure sensor192a, and the rear lateral sub-assembly126bcomprises at least one rear pressure sensor192b.FIG.2Ashows an exemplary configuration of two front pressure sensors192acoupled to both sides of the front axle132aor to both front primary wheels130a, and two rear pressure sensors192bcoupled to both sides of the rear axle132bor to both rear primary wheels130b. It will be clear that other configurations are contemplated, such as a single front pressure sensor192acoupled to other portions of the front axle132aor a component of the front non-differential transmission mechanism148a, and a single rear pressure sensor192bcoupled to other portions of the rear axle132bor a component of the rear non-differential transmission mechanism148b. The pressure sensors192are electrically coupled to the control circuitry154, for example via transmission line160, and deliver signals indicating whether the rear primary wheels130band/or front primary wheels130aare in contact with the ground, and/or when they are leaving the ground. The power source184can be used to power at least one component of the motorized walking platforms102, such as the control circuitry154, the motor140, the communication unit156, the at least one rotation speed sensor190, the at least one pressure sensor192, and/or the actuators182. The term "and/or" is inclusive here, meaning "and" as well as "or". For example, "component A and/or component B" encompasses component A, component B, and component A with component B; and such "component A and/or component B" may include other elements as well. According to some embodiments, the secondary wheels162comprise an outer layer which is softer than that of the primary wheels130, thereby acting as a cushion to absorb some of the impacts during walking motion. 
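The pressure sensors192tell the control circuitry154which primary wheels130are loaded, and the actuators182give it a way to raise or lower the secondary wheels162. One plausible way to combine the two, sketched below, is a simple decision rule: lower the secondary wheels162while the foot is loading the platform and retract them once the platform is airborne. The threshold, function name, and return values are illustrative assumptions; this disclosure does not specify the decision logic in this form.

# Illustrative decision rule only; the threshold and names are assumptions.
def secondary_wheel_command(front_pressure, rear_pressure, contact_threshold=5.0):
    """Return 'lower' or 'retract' as a command for the actuators."""
    front_loaded = front_pressure > contact_threshold  # front primary wheels pressed to the ground
    rear_loaded = rear_pressure > contact_threshold    # rear primary wheels pressed to the ground
    if front_loaded or rear_loaded:
        return "lower"    # foot on the ground: bring the secondary wheels down for support
    return "retract"      # platform airborne: pull the secondary wheels back up

print(secondary_wheel_command(front_pressure=0.3, rear_pressure=12.0))  # heel strike -> lower
print(secondary_wheel_command(front_pressure=0.2, rear_pressure=0.1))   # foot in the air -> retract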
In some cases, forward or backward excessive skid forces may be applied at the forward positioning of the leading foot on the ground, for example as the sole strikes the ground following the heel strike, or as the heel rises while the sole is still in contact with the ground. Such excessive skid forces may require excessive motor torques that are prohibitive, given the pivotal weight limit of the motorized walking enhancement platform102. According to some embodiments, the motorized walking enhancement platform102further comprises a pneumatic/hydraulic braking system194(seeFIG.2D), configured to assist in neutralizing the skid forces by the front wheels130awhen the skid forces32are higher than a predefined upper threshold. The braking system194includes a pneumatic/hydraulic braking unit196attached to each of the front primary wheels130a. The pneumatic/hydraulic actuator182can be coupled to the pneumatic/hydraulic braking unit196via a pneumatic/hydraulic braking transmission line261, which may serve as a conduit for transmitting pneumatic/hydraulic fluid to and from the pneumatic/hydraulic braking unit196. The pneumatic/hydraulic braking unit196is configured to apply counter friction forces on the front wheels130a, so as to alleviate the extra torque burden from the motor140. Advantageously, the same pneumatic/hydraulic actuator182is shared by both the pneumatic/hydraulic drive unit270and the pneumatic/hydraulic braking unit196. The actuator sub-controller183may be further utilized to readjust the amount of pneumatic/hydraulic fluid flowing through each of the lever transmission line260and the braking transmission line261, so as to control the functionality of each of the pneumatic/hydraulic drive unit270and the pneumatic/hydraulic braking unit196as required. Reference is now made toFIGS.6A-6E, schematically showing the longitudinal forces acting between the primary wheels130and the ground20during different phases of a stride or gait cycle. The net forward force30schematically represents the forward driving force applied by the primary wheels130on the ground20so as to advance the platform102forward. In a forward walking action shown inFIG.6A, the rear primary wheels130bstrike the underlying ground20, which may result in forward skid forces32materializing between the rear primary wheels130band the ground20. These skid forces, which affect the rolling speed of the rear primary wheels130b(and consequently, any other rotatable component of the drive assembly124), are immediately sensed by the at least one rotation speed sensor190. The signals are delivered to the control circuitry154, which readjusts the rotation of the motor140so as to apply a reaction force34equal to the skid force32in an opposite direction, thereby neutralizing it so that the net forward force remains unchanged, at a frequency high enough so as to avoid any disturbance that can be felt by the walker. At the phase shown inFIG.6A, the pneumatic/hydraulic drive unit270is shown in the retracted state prior to and during first contact of the rear primary wheels130bwith the ground. The at least one rear pressure sensor192bdelivers signals, indicative of the elevated pressure applied thereto by the sole of the foot pressing against the ground20, to the control circuitry154, which in turn controls the actuators182to lower the pneumatic/hydraulic piston274and the secondary wheels162there-along, to the lowered state shown inFIG.6B, during which the secondary wheels162may contact the ground20. 
The lever height regulators170provide consistent mild force that may support the foot's sole, and absorb shock as the secondary wheels162are being positioned on the ground20. Moreover, the rear primary wheels130b, along with the secondary wheels162, together form a rectangular-like support base on the ground, thereby improving stability of the motorized walking enhancement platform102during the heel-strike phase of the gait cycle. As the front portion of the foot is also lowered inFIG.6B, the front primary wheels130aalso land on the ground20, such that all of the primary wheels130are laid on and roll over the ground20in the mid-stance phase shown inFIG.6C. Skid forces32aand32bmay be applied by the front and rear primary wheels130aand130b, respectively. The forces are similarly sensed by the front and rear rotation speed sensors190aand190b, and may in turn be fully or partially neutralized by the front and rear motor reaction forces34aand34b. As shown, the front skid force32bmay be significantly higher than the rear skid force32a, and in excess of a predetermined upper threshold. In such a case, the braking system194also applies a braking system counter force36, which, together with the motor reaction force34b, results in a total neutralizing force38which is opposite in direction and equal in magnitude to the front skid force32bsuch that the net forward force30bremains constant. As the front primary wheels130aare also lowered to contact the ground20, as shown inFIG.6C, the lever164may pivot upward to some extent, enabling the secondary wheels162to retain full contact with the ground20, so that all of the primary and secondary wheels130and162, respectively, may contact the ground20and roll forward. While the primary wheels130are actively rotated by the motor140, the secondary wheels162passively roll over the ground therebetween. The weight of the walker during the positioning of the sole on the ground at the beginning of a step is the source of pneumatic/hydraulic power to operate both the pneumatic/hydraulic drive unit270and the pneumatic/hydraulic braking unit196. For example, 12 kg of the walker's weight may be sufficient to store the required pneumatic/hydraulic power. In some embodiments, the motor140is further configured to provide sensitive fluctuations' counter-force40, for example, via a sensitive motor bracket (not shown), to counter the fluctuations that may originate from the relatively crude braking system194. The control circuitry154is configured to activate the braking system194according to logic and parameters derived from the signal readings of the rotation speed sensors190and the activated counter torque values (i.e., the motor reaction forces34), calculating the timing and progressive pace of application of hydraulic power to the pneumatic/hydraulic braking units196at the front wheels130a. During the push-off phase of the gait cycle shown inFIG.6D, the rear primary wheels130bare lifted up from the ground20as the motorized walking enhancement platform102starts breaking contact with the ground, while the front primary wheels130aare still in contact with the ground20, resulting in rearward skid forces32materializing between the front primary wheels130aand the ground20. The secondary wheels162may remain in a downward state (i.e., in contact with the ground20) while the front primary wheels130aare still pressed against the ground20. 
The front primary wheels130a, along with the secondary wheels162, together form a rectangular-like support base on the ground, thereby improving stability of the motorized walking enhancement platform102during the heel-lift off phase of the gait cycle. The pneumatic/hydraulic drive units270may discharge the accumulated energy therein, so as to produce adjustable assisting lifting force that may further support forward thrust motion at the end of the step. This assisting force may help in reduction and regulation of the counter torque, in terms of amplitude and/or volatility, which is applied to the motor shaft142by the foot's rolling motion, during positioning of the leading foot on the ground (FIGS.6A-6B) and during the forward thrust motion (FIG.6D). Specifically, the assisting force may reduce the maximum torque requirement from the motor140, thereby enabling overall weight reduction. As shown, the skid force32once again may surpass the predetermined upper threshold, in which case the braking system194will again apply a braking system counter force36, which together with the motor reaction force34, results in a neutralizing force38opposite in direction and equal in magnitude to the front skid force32such that the net forward force30remains constant, while the motor fluctuations' counter-force40may alleviate the fluctuations that may arise from the relatively crude braking system194. FIG.6Eshows the foot in the air, while both pairs of primary wheels130are raised above the ground20. In this state, there is an immediate drop of load on the airborne lateral sub-assemblies126, and the control circuitry154is configured to immediately readjust the torque produced by the motor140to a minimal value, keeping all of the airborne primary wheels130rolling forward in unison at a constant speed, while none of them exerts any forces on the ground20. In this state, both the front and rear pressure sensors192aand192b, respectively, indicate this state and the control circuitry154activates the actuators182to raise the secondary wheels162, via the pneumatic/hydraulic drive units270, to the retracted state. The term “skid force”, as used herein, refers to a component force parallel to the ground20of the force transmitted to each motorized walking enhancement platform102by the walker's leg, which can be in a forward direction during the strike of the heel as shown6A-6B, and backward during the final phase of the step, as shown inFIG.6D. The skid force can vary due to a number of factors, such as wind, random body movements, and the like. The component of the force which is perpendicular to the ground20is cancelled by the reaction of the ground, while the skidding force32is compensated by artificially created opposite reaction force34. The drive assembly124, including a longitudinal-centric motor140, with two speed reduction units144a,144bmounted on both sides of the motor140, and two non-differential transmission mechanisms148a,148bconfigured to transmit power from the longitudinally oriented drive line136to the front and the rear transverse driving axles132aand132b, respectively, can automatically allocate all torque produced between both axles132aand132baccording to their instantaneous load demand along the full step or gait cycle. 
For example, all torque may be allocated to the front primary wheels130aduring the forward thrust motion (seeFIG.6D), all torque can be allocated to the rear primary wheels130bduring the heel strike instant (seeFIG.6A), and all torque can be allocated to all primary wheels130according to an adaptive ratio during the backwards movement of the platform with all primary wheels130on the ground (seeFIG.6C). When compared to other walking propulsion solutions known in the art, the above-mentioned configuration advantageously offers the most effective and efficient locomotive solution for motorized-assisted walking with the minimal weight possible. For example, other previously disclosed platform propulsion configurations, which tie different motors, gears and torque transmission components to only a subset of the wheels, cannot be as effective and efficient as the currently disclosed configuration. When all of the maximal torque produced by the motor needs to be allocated only to the front wheels during the forward thrust motion, such previously disclosed configurations leave idle any motor that is coupled only to the rear wheels, or that is otherwise not coupled directly, or in the most efficient manner, to the front wheels. Such inferior configurations turn the idled motors, and all related power-transmission modules that are not propelling the wheels touching the ground in each step, into wasted and unused weight. The currently disclosed configuration, on the other hand, provides a single propulsion unit—in the form of drive assembly124, configured to both produce and deliver, through all of the transmitting components such as speed reduction units144and non-differential transmission mechanisms148, the maximal torque possible per unit weight and per platform dimensions, and to allocate all of the torque with high fidelity and maximum mechanical efficiency to the front or to the rear primary wheels130a,130b, or to both, as is required at each instant of the step or gait cycle. The mobility enhancement system100is dimensioned to be utilized over a ground20having a relatively low slope, but able to overcome height inconsistencies and random obstacles having a vertical height of up to about 1.5 cm, and allow for bridging planar gaps in the pavement surface of about 2.5 cm in width. Advantageously, all of the primary wheels130are configured to roll only along a longitudinal direction, thereby simplifying the structure and minimizing the weight of the mobility enhancement system100, not requiring any complementary components or mechanisms for lateral movement thereof. The contact angle and the skid forces between the foot and the ground20in forward walking motion vary from step to step due to a number of factors, such as the gait phase, the profile of the terrain, the behavior of the walker and so on. The current mechanism ensures that regardless of such factors, the influence of the skid forces32on the rotation speed of the primary wheels130is measured at any moment and countered by reaction forces34so as to maintain a constant rolling speed.
Advantageously, the lowered state of the lever height regulators170enables the secondary wheels162to be in contact with the ground along with the rear primary wheels130band/or the front primary wheels130a, so that a minimum of four contact points with the ground20is maintained also during lowering or raising the foot toward or away from the ground20, thereby significantly enhancing platform102stability in these stages of the gait cycle. Retaining the secondary wheels162in a retracted state when the foot is in the air may advantageously protect them from tangling with other potential environmental obstacles. Advantageously, the braking system194, based on a self-energizing pneumatic/hydraulic system, is of significantly superior power-to-weight ratio relative to that of the electric motor140, and can be offset to a significant extent in terms of absolute weight burden on the entire motorized walking enhancement platform102. Furthermore, the reduced output torque requirement from the drive assembly124may provide additional meaningful advantages, such as improved durability and resiliency of the drive assembly124, and reduced drive assembly124dimensions that allow the primary wheels130to be provided with smaller diameters, thereby lowering the height of the walker's feet above the ground so as to improve the walker's stability, on top of enabling further reduction in the motorized walking enhancement platform's102weight. Reference is now made toFIG.5, showing a side view of a motorized walking enhancement platform102with a spring-type lever height regulator370. According to some embodiments, the lever height regulator170comprises a spring370. The spring370may be attached to the base frame104at the spring upper connection point376, which is the equivalent of the height regulator upper connection point176, and connected to the lever164at spring lower end378, which is the equivalent of the height regulator lower end178. It will be understood that any type of a lever height regulator170may be connected at the height regulator upper connection point176directly to the base frame104, or indirectly via attachment to another component affixed to the base frame104, such as the side extension114. According to some embodiments, a lever height regulator170, such as the spring370, may be displaceable between a free state, in which it may be biased downward (i.e., toward the ground20), such that the secondary wheels162may be positioned vertically lower than the lowermost edge of the primary wheels130, and a pressed state, wherein the lever height regulator170moves vertically upward, pressing the secondary wheels162to full contact with the ground20. Reference is now made toFIGS.7A-7E, showing different states of a motorized walking enhancement platform102equipped with a spring370in different phases of a stride or gait cycle. At the phase shown inFIG.7A, the rear primary wheels130bmake first contact with the ground20. The spring370is shown in the free state, wherein the lever164and the secondary wheels162are biased downward, while the secondary wheels162do not yet reach the ground20itself. Further lowering the front portion of the foot, as shown inFIG.7B, initiates contact of the secondary wheels162with the ground20while the front primary wheels130amay still be offset from the ground20. As the front primary wheels130aare also lowered to contact the ground20, as shown inFIG.7C, all of the primary and secondary wheels130and162, respectively, are in contact with the ground20and roll forward.
When the rear primary wheels130bare lifted upward as shown inFIG.7D, the secondary wheels162may remain in a pressed state (i.e., in contact with the ground20) while the front primary wheels130aare still pressed against the ground20. The spring370may discharge the accumulated energy therein, so as to produce adjustable assisting lifting force that may further support forward thrust motion at the end of the step. This assisting force may help in reduction and regulation of the counter torque, in terms of amplitude and/or volatility, which is applied to the motor shaft142by the foot's rolling motion, during positioning of the leading foot on the ground (FIGS.7A-7B) and during the forward thrust motion (FIG.7D). Specifically, the assisting force may reduce the maximum torque requirement from the motor140, thereby enabling overall weight reduction. When the front primary wheels130aare lifted as well, as shown inFIG.7E, the spring370may extend to the free state. While the spring370may lack the advantage offered by a pneumatic/hydraulic drive unit270, in keeping the secondary wheels162in a retracted state when the foot is in the air, it may provide an alternative advantage by providing a simpler structural configuration, in which actuators and pressure sensors are not required, thereby potentially simplifying structural complexity, lowering costs and lowering the overall weight of the mobility enhancement system100. While the pneumatic/hydraulic drive unit270is shown inFIGS.6A-6Eto be movable from a retracted state when the foot is in the air, to the lowered state in which the secondary wheels162may contact the ground, it will be clear that alternatively, the motorized walking enhancement platform102may be provided with pneumatic/hydraulic drive units270configured to be biased downward in a free state when the foot is in the air, and the pneumatic/hydraulic piston may be movable upward into the pneumatic/hydraulic cylinder272to a pressed state, during which the secondary wheels162may contact the ground20, similar to the states shown for a spring370inFIGS.7A-7E. While pneumatic/hydraulic drive units270and spring370are described herein above, it will be clear that other forms of lever height regulators may be similarly applicable, such as motorized or robotic arms controlled by the control circuitry154. In some embodiments, a motorized walking enhancement platform102provided with a lever height regulator in the form of a spring370(or a motorized arm) can be accompanied by a separate braking system194. These solutions may be inferior to pneumatic/hydraulic drive units270as in such cases, the pneumatic/hydraulic actuator182is not shared by a pneumatic/hydraulic drive unit270. In other embodiments, a motorized walking enhancement platform102provided with a lever height regulator in the form of a spring370(or a motorized arm) may be devoid of a braking system194, which may result in inferior functionality of the mobility enhancement system100due to its inability to properly compensate for extreme magnitudes of skid forces32, as elaborated herein above. Nevertheless, such embodiments may be applicable if the system100is designed in such a manner that excessive skid forces32are not expected to form or to cause an overwhelming problem that cannot be properly compensated by the motor140alone. Reference is now made toFIGS.8A-8B, showing different implementations of non-differential transmission mechanisms148. 
According to some embodiments, the non-differential transmission mechanism148comprises a worm-gear transmission mechanism248, as shown inFIG.8A. The longitudinal shaft member146can include a longitudinal worm gear, which is meshed with a lateral worm gear252of the axle132. According to some embodiments, the non-differential transmission mechanism148comprises a beveled-gear transmission mechanism348, as shown inFIG.8B. The longitudinal shaft member146can include a longitudinal bevel gear350, meshed at one side with a lateral bevel gear352of the axle132. While two exemplary implementations for the non-differential transmission mechanism148are shown inFIGS.8A-8B, it will be clear that other non-differential transmission mechanisms148known in the art for perpendicular transfer of rotational movement are contemplated, including mechanisms that include various bevel gears, helical gears, crown gears, and the like. According to some embodiments, the motorized walking enhancement platform102may decelerate to a full stop, finally locking all primary wheels130and preventing rotational movement thereof. This may be required in cases in which the walker is interested in preventing such rolling motion, for example during step-walking. In such cases, the walker may send a command via the remote-control device60to lock the wheels. The command is sent, for example wirelessly, to both control circuitries154of both motorized walking enhancement platforms102, which decelerate the motor140to a full stop, and further lock the primary wheels130by applying efficient braking mechanisms (not shown) as known in the art. A command to unlock and reactivate the rolling motion of the mobility enhancement system100may be sent in the same manner via the remote-control device60, for example once the walker has reached a relatively flat ground profile. Reference is now made toFIGS.9-10B, showing different types of pneumatic/hydraulic braking units196.FIG.9shows a schematic side view of a pneumatic/hydraulic drum braking unit596. Each of the front primary wheels130amay be provided with a drum131affixed thereto and rotatable therewith. A pneumatic/hydraulic drum braking unit596comprises a bi-directional cylinder573provided with two opposite pneumatic/hydraulic pistons575extending from opposite sides of the cylinder573and radially movable outward in directions50. The pistons575are attached to brake shoes595provided with brake pads or linings533attached thereto and extending radially outward. The brake pads533are spaced away from the edges of the drum131in a relaxed state. When the braking system194is actuated, pressure is applied by air or hydraulic fluid, such as oil, in the radially outward directions60, pushing the pistons575along with the brake shoes595radially outward, pressing the brake pads533against the edges of the drum131. The friction between the brake pads533and the drum131causes the drum131to stop rotating, or alternatively, hinders the rotational movement so as to lower its rotational speed, as a function of the extent to which the brake pads533are pressed against the drum131. FIGS.10A and10Bshow a schematic side view and a partial sectional view of a pneumatic/hydraulic disc braking unit696. Each of the front primary wheels130amay be provided with a disc133affixed thereto and rotatable therewith.
A pneumatic/hydraulic disc braking unit696comprises a caliper assembly698, which includes a bi-directional cylinder673provided with pneumatic/hydraulic pistons675disposed laterally on both sides of the disc133, and laterally movable toward or away from the disc133in directions56. The pistons675are attached to brake pads or linings633, which are spaced away from the sidewalls of the disc133in a relaxed state. When the braking system194is actuated, pressure is applied by air or hydraulic fluid, such as oil, in directions54, pushing the pistons675along with the brake pads633against the sidewalls of the disc133. The friction between the brake pads633and the disc133causes the disc133to stop rotating, or alternatively, hinders the rotational movement so as to lower its rotational speed, as a function of the extent to which the brake pads633are pressed against the disc133. While two braking mechanisms, such as a pneumatic/hydraulic drum braking mechanism596and a pneumatic/hydraulic disc braking mechanism696, are described and illustrated herein, it will be clear that these specific mechanisms are provided for the sake of example only, and that other types of pneumatic or hydraulic braking mechanisms known in the art are contemplated for the braking units196. According to some embodiments, the motorized walking enhancement platform102further comprises a protective housing (not shown) that can be attached to the lower frame surface108and encompass components attached thereto, such as components of the drive assembly124and the control circuitry154, so as to protect such components from being damaged by obstacles in the surrounding environment. According to some embodiments, various components of the motorized walking enhancement platform102are waterproof, configured to withstand at least rainy weather. Advantageously, a mobility enhancement system100designed for rolling while walking, preferably one that is also lightweight and easily controllable, would provide a safe walking environment for walkers regardless of their level of expertise. Advantageously, the structure and configuration of the various components of the drive assembly, including the motor140, the speed reduction units144, the non-differential transmission mechanisms148, the drive line136and axles132, and the primary wheels130, may together provide superior characteristics in terms of maximal torque, accuracy of speed control, platform102stability and traction, and long-term durability, all of which are provided at a minimal weight of the overall platforms102. It is appreciated that various components of the motorized walking enhancement platform102are made of polymeric materials, lightweight metal materials, or combinations thereof. According to some embodiments, the weight of each motorized walking enhancement platform102, excluding components that are not carried by the user's foot, such as the leg brace186and the power source184, is equal to or lower than 2.5 kg, thereby allowing sufficiently comfortable swinging of the motorized walking enhancement platform102at the end of each step up to the beginning of the subsequent step. According to some embodiments, the weight of each motorized walking enhancement platform102, excluding components that are not carried by the user's foot, such as the leg brace186and the power source184, is equal to or lower than 2 kg. The motorized walking enhancement platforms102amplify the movement of the user. This walking movement enhancement is similar to that of walking on an airport moving walkway.
While the user is walking normally, the actual speed of advancement is faster, without expending extra effort. Each of the points of action-and-reaction that underpin the full motion function of the mobility enhancement system100constitutes a contact point of the wheels130with the ground20, wherein all forces, both internal and external, interact and need to be balanced instantaneously, in order to maintain the walker's longitudinal balance and stability, and apply the net forward force30that is required to maintain the predefined constant steady rolling speed of both motorized walking enhancement platforms102. The controllable measurement and instantaneous readjustment mechanism, configured to keep all of the primary wheels130rolling at a constant preset speed at all times, provides a substantially stable movement of the motorized walking enhancement platforms102on the ground20at any instant. The digital control function of the control circuitry154, following signals sensed by the rotation speed sensors190commensurate with incremental changes in the rotation speed of components of the drive assembly124, such as the motor shaft142or the axles132, responds by incrementally restoring the platform's102rolling speed in a proportionally incremental manner, corresponding to the motor's140driving torque, through the motor's140electric drive unit. This enables the walker to maintain natural walking balance without needing to make any particular effort. The overall configuration of the components of the motorized walking enhancement platforms102as described herein above advantageously obviates the use of additional or higher-weight components included in alternative devices known in the art, thereby simplifying usage and optimizing the weight balance of the current system100, enabling simpler adoption even by inexperienced or first-time users. It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination or as suitable in any other described embodiment of the invention. No feature described in the context of an embodiment is to be considered an essential feature of that embodiment, unless explicitly specified as such. Although the invention is described in conjunction with specific embodiments thereof, it is evident that numerous alternatives, modifications and variations that are apparent to those skilled in the art may exist. It is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth herein. Other embodiments may be practiced, and an embodiment may be carried out in various ways. Accordingly, the invention embraces all such alternatives, modifications and variations that fall within the scope of the appended claims.
65,202
11857865
DETAILED DESCRIPTION While this invention is susceptible of embodiment in many different forms, there are shown in the drawings, and will be described herein in detail, specific embodiments thereof with the understanding that the present disclosure is to be considered as an exemplification of the principles of the invention and is not intended to limit the invention to the specific embodiments illustrated. The application incorporates by reference in their entireties U.S. Provisional Application Ser. No. 62/681,267, filed Jun. 6, 2018 and U.S. Ser. No. 16/366,781 filed, Mar. 27, 2019. Although the present specification is advantageously applied to the assembly of a Ga-ga pit, the invention encompasses any other type of game for which a fenced-in (or “walled-in”) playing area is desired. FIG.1shows a fenced-in playing area10that is formed with interlocking first panels14a,14b,14c,14dand interlocking second panels16a,16b,16c. Each panel includes a downward open vertical slot adjacent one end of the panel and an upward open vertical slot adjacent an opposite end of the panel. Each slot has a length of about one half or more of the height of its panel. In order to interlock, the downward open vertical slot of one panel passes through the upward open vertical slot of an adjacent panel and fits over the adjacent panel. The upward open vertical slot simultaneously passes through the downward open vertical slot of the one panel and fits over the one panel. As shown inFIG.1, the foreground panel14aincludes a downward slot20aand the adjacent panel16ahas an upward slot24a. The panel14ahas been fit down onto the panel16a. The downward slot20afits over the panel16aand the upward slot24afits over the panel14a. On an opposite end of the panel14a, an adjacent panel14bhas been fit down onto the panel14a. The adjacent panel14bhas a downward slot20bthat passes through an upward slot24bof the panel14aand fits over the panel14a. Simultaneously, the upward slot24bpasses through the downward slot20band fits over the panel14b. The interlocking of downward slots and upward slot is repeated at each joint between panels. FIG.2illustrates the interlocking first panels14a,14b,14c,14d,14e. The panel14ais illustrated with the understanding that the panels14b,14c,14d,14eare identical. The panel14ais substantially a rectangular plate having a length L of about 96 inches and a height H of about 27 inches. Adjacent one end is the downward open vertical slot20a. Adjacent an opposite end is the upward open vertical slot24b. The slots have a length LL in the height direction of about 14 inches. The slots have a width W of about 0.9 inches. The panel has a thickness of about ½ inch. Six hand holes28are arranged spaced apart, three adjacent to an upper edge30of the panel and three adjacent to a lower edge32of the panel. The hand holes are about 4 inches long and wide enough for the insertion of human fingers to lift and handle the panel. The panel can be rotated 180 degrees to where the upward open vertical slot24bbecomes a downward open vertical slot and the downward open vertical slot20abecomes an upward open vertical slot. The hand holes28being along both the top and bottom edges facilitate lifting the panel no matter the orientation of the panel. FIG.3illustrates the interlocking second panels16a,16b,16c. The panel16ais illustrated with the understanding that the panels16b,16care identical. The panel16ais substantially a rectangular plate having a length L of about 96 inches and a height H of about 27 inches. 
Adjacent one end is a downward open vertical slot20c. Adjacent an opposite end is the upward open vertical slot24a. The slots have a length LL in the height direction of about 14 inches. The slots have a width W of about 0.9 inches. The panel16ahas a thickness of about ½ inch. The panel has an upper edge40and a lower edge42. A tapered recess36is indented from the lower edge42. The recess36has a depth D of about 11 inches. It has a width U at the lower edge42of about 35 inches and a width X at a top of the recess of about 24 inches. Six hand holes28are arranged spaced apart, two adjacent to the lower edge42of the panel16a, adjacent opposite ends of the panel, and one just above the recess36, and three adjacent to an upper edge40of the panel. The hand holes are about 4 inches long and wide enough for the insertion of human fingers to lift and handle the panel. The panel16acan be rotated 180 degrees to where the upward open vertical slot24abecomes a downward open vertical slot and the downward slot20cbecomes an upward open vertical slot. The hand holes28being along both the top and bottom edges facilitate lifting the panel no matter the orientation of the panel. As shown inFIG.3, the recess36is facing downward. In this orientation, the panel16aprovides a goal opening50(FIG.1) for a game within the fenced-in area where an object of the game is to pass a ball or puck or the like through the goal opening50, similar to hockey or soccer. When the panel16ais rotated 180 degrees about the horizontal axis, it takes on the orientation of panel16c(FIG.1) where the recess36functions as a lowered entry into the fenced-in area, especially for smaller children who would not be able to step over the full height of the panel. The panels14a,14b,14c,14d,14e,16a,16b,16care advantageously composed of high density polyethylene for durability and a light weight. As can be understood, the interlocking first panels and the interlocking second panels can be selected to form a pre-selected fenced-in area. By using all interlocking first panels14a,14b, etc., and one interlocking second panel16cin the orientation of panel16c, when the panels are interlocked using the downward and upward slots, a substantially solid fenced-in area with a lowered entryway can be provided. The number of panels can be selected to form a square, a triangle, a hexagon, an octagon or other polygon shapes. By using interlocking first panels14a,14b, etc., and one or more interlocking second panels16ain the orientation of panel16ainFIG.1, when the panels are interlocked using the downward and upward open vertical slots, a substantially solid fenced-in area with one or more goal openings50can be provided. An additional panel16c, in the orientation of panel16cinFIG.1, can also be provided for a lowered entryway. The number of panels can be selected to form a square, a triangle, a hexagon, an octagon or other polygon shapes.
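The stated dimensions can be checked with simple arithmetic: because each slot is about half the panel height or more, the downward and upward slots of two adjoining panels together span the full panel height, and the slot width exceeds the panel thickness, so mating panels can seat fully and sit flush. A minimal Python sketch using the approximate ("about") figures given above, for illustration only:

# Approximate dimensions from the description (nominal "about" values, not tolerances).
PANEL_HEIGHT_IN = 27.0
SLOT_LENGTH_IN = 14.0
SLOT_WIDTH_IN = 0.9
PANEL_THICKNESS_IN = 0.5

# Two mating slots must together cover the panel height for a flush cross-lap joint.
assert 2 * SLOT_LENGTH_IN >= PANEL_HEIGHT_IN      # 28 in >= 27 in

# Each slot must be wide enough to receive the adjacent panel's thickness.
assert SLOT_WIDTH_IN > PANEL_THICKNESS_IN         # 0.9 in > 0.5 in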
The addition of these slots adds more flexibility to the interlocking of the panels by allowing for a shorter panel (horizontally) by using the inside slots20aa,24bband also allows for the use of the stand70as shown inFIG.5. Additionally, the panels16a,16b,16cand like panels can also have the two additional slots, one upward open vertical slot and one downward open vertical slot, between the slots24a,20c. FIG.5illustrates a stand70. The stand70includes three spaced apart hand holes28, and an upward facing vertical slot72of about ½ the height of the stand. The stand has a triangular shape with a narrow top edge76and a wider bottom edge78. Other shapes for the stand are encompassed by the invention. The stand70is shown dashed inFIG.4. As shown inFIG.4, the upward facing vertical slot72of the stand and the downward facing vertical slot20aaof the panel mutually interlock. The stand bottom edge rests on the ground and supports the panel14aa. The stand could just as well be mutually interlocked with the slot20aof the panel14aaor any other panel shown inFIG.1. The stands allow for one or both ends of a panel, even if that end is not interlocked with an adjacent panel, to be nonetheless supported in a vertical orientation. A stand can be of a lesser height than the panel supported by the stand is also encompassed by the invention. The stands70provide opportunity to convert the traditional octagonal pit design into individual free standing entities. The individual panels can then be utilized for other games and sports. The stands allow easy transformation into a variety of shapes other than for Ga-ga Ball. The stands allow panels to be used as independent units or connected in a linear design. A long barrier can be created or can be used to form 90° angles. From the foregoing, it will be observed that numerous variations and modifications may be effected without departing from the spirit and scope of the invention. It is to be understood that no limitation with respect to the specific apparatus illustrated herein is intended or should be inferred.
9,025
11857866
DETAILED DESCRIPTION OF THE DISCLOSURE Some embodiments of the present disclosure are directed to virtual vehicle operation in a virtual environment. More particularly, certain embodiments of the present disclosure provide systems and methods for training and applying virtual occurrences with modifiable outcomes to a virtual character using telematics data of one or more real trips. Merely by way of example, the present disclosure has been applied to vehicle operation in a vehicle environment, but it would be recognized that the present disclosure has much broader range of applicability. One or More Systems for Updating a Character Profile of a Virtual Character According to Various Embodiments FIG.1is a simplified diagram showing a system100for updating a character profile of a virtual character of a telematics-based game, according to various embodiments of the present disclosure. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. In some examples, the system100includes a virtual occurrence generating module102, an outcome determining module104, a virtual trip generating module106, a trip success prediction module108, a vehicle condition module110, a presenting module112, and a character profile updating module114. In certain examples, the system100is configured to implement method200ofFIG.2. Although the above has been shown using a selected group of components, there can be many alternatives, modifications, and variations. In some examples, some of the components may be expanded and/or combined. Some components may be removed. Other components may be inserted to those noted above. Depending upon the embodiment, the arrangement of components may be interchanged with others replaced. In various embodiments, the virtual occurrence generating module102is configured to generate, such as based at least in part upon a character profile of a virtual character, one or more virtual occurrences to be encountered by the virtual character. In some examples, the virtual occurrence generating module102is configured to generate the one or more virtual occurrences based further in part upon one or more unlocked regions of a virtual map of the telematics-based game. In some examples, each virtual occurrence of the one or more virtual occurrences includes one or more virtual obstacles to be encountered by the virtual character. In some examples, the virtual character includes a plurality of virtual skills includes a virtual steering skill, a virtual braking skill, a virtual speeding skill, and/or a virtual focus skill. In some examples, each virtual occurrence of the one or more virtual occurrences includes a steering difficulty corresponding to one or more virtual steering obstacles, a braking difficulty corresponding to one or more virtual braking obstacles, a speeding difficulty corresponding to one or more virtual speeding obstacles, and/or a focus difficulty corresponding to one or more virtual focus obstacles. In various embodiments, the outcome determining module104is configured to determine, such as based at least in part upon a plurality of virtual ratings of the virtual character, one or more outcomes associated with the one or more virtual occurrences. 
In various examples, the outcome determining module104is further configured to update, upon receiving the user's selection of the second user-selectable command, the one or more outcomes according to a predetermined adjustment. In some examples, the outcome determining module104is configured to determine the one or more outcomes based at least in part upon the steering difficulty, the braking difficulty, the speeding difficulty, the focus difficulty, a virtual steering rating of the virtual steering skill, a virtual braking rating of the virtual braking skill, a virtual speeding rating of the virtual speeding skill, and/or a virtual focus rating of the virtual focus skill. In some examples, each outcome of the one or more outcomes correspond to a likelihood of success of the virtual character overcoming the one or more virtual obstacles in each virtual occurrence of the one or more virtual occurrences. In various embodiments, the virtual trip generating module106is configured to generate a virtual trip including the one or more virtual occurrences with the associated one or more outcomes. In various embodiments, the trip success prediction module108is configured to determine, such as based at least in part upon the one or more outcomes, a trip success prediction of the virtual character completing the virtual trip. In various embodiments, the vehicle condition module110is configured to determine, such as based at least in part upon the one or more outcomes, a predicted change in vehicle condition, the predicted change in vehicle condition being indicative of a degree of damage to be sustained by the virtual vehicle during the virtual trip. In various embodiments, the presenting module112is configured to present the trip success prediction, the predicted change in vehicle condition, a first user-selectable command, and a second user-selectable command to the user. In some examples, the presenting module112is further configured to present the updated character profile to the user. In some examples, the presenting module112is further configured to present the updated vehicle condition of the virtual vehicle. In various embodiments, the character profile updating module114is configured to update, upon receiving the user's selection of the first user-selectable command, the character profile by at least initiating the virtual trip with the virtual character. In various examples, the character profile updating module114is further configured to update, upon receiving the updated one or more outcomes, the character profile by at least initiating the virtual trip with the virtual character based on the updated one or more outcomes. In some examples, the character profile updating module114is configured to update a vehicle condition of the virtual vehicle based on the predicted change in vehicle condition. One or More Methods for Updating a Character Profile of a Virtual Character According to Various Embodiments FIG.2is a simplified method200for updating a character profile of a virtual character of a telematics-based game, according to various embodiments of the present disclosure. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. 
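By way of illustration, the relationships among obstacle difficulties, virtual ratings, per-occurrence outcomes, the trip success prediction, and the predicted change in vehicle condition can be sketched as follows. The per-skill likelihood formula, the aggregation by multiplication, and the damage weighting are assumptions introduced for clarity only; the disclosure does not prescribe particular equations.

# Illustrative sketch only; the formulas below are assumptions, not the claimed method.
SKILLS = ("steering", "braking", "speeding", "focus")

def outcome_likelihood(ratings, difficulties):
    """Likelihood of the virtual character overcoming the obstacles of one virtual occurrence."""
    likelihood = 1.0
    for skill in SKILLS:
        difficulty = difficulties.get(skill, 0.0)
        if difficulty > 0.0:
            # Higher virtual rating relative to the obstacle difficulty -> higher chance of success.
            likelihood *= ratings[skill] / (ratings[skill] + difficulty)
    return likelihood

def trip_success_prediction(outcomes):
    """Chance of completing the virtual trip, given the per-occurrence outcome likelihoods."""
    prediction = 1.0
    for outcome in outcomes:
        prediction *= outcome
    return prediction

def predicted_condition_change(outcomes, damage_per_failure=5.0):
    """Expected damage to the virtual vehicle, accumulated over the likely failures."""
    return sum((1.0 - outcome) * damage_per_failure for outcome in outcomes)

# Example: two virtual occurrences generated for a character with mid-level ratings.
ratings = {"steering": 6.0, "braking": 4.0, "speeding": 5.0, "focus": 3.0}
outcomes = [
    outcome_likelihood(ratings, {"steering": 2.0, "braking": 4.0}),
    outcome_likelihood(ratings, {"speeding": 5.0}),
]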
The method200includes a process202of generating one or more virtual occurrences, a process204of determining one or more outcomes, a process206of generating a virtual trip, a process208of determining a trip success prediction, a process210of determining a predicted change in vehicle condition, a process212of presenting the trip success prediction, a process214of updating the character profile, a process216of updating the one or more outcomes, a process218of updating the character profile, and a process220of presenting the updated character profile. In certain examples, the method200is configured to be implemented by system100ofFIG.1. Although the above has been shown using a selected group of processes for the method, there can be many alternatives, modifications, and variations. In some examples, some of the processes may be expanded and/or combined. Other processes may be inserted to those noted above. Depending upon the embodiment, the sequence of processes may be interchanged with others replaced. In some examples, some or all processes of the method are performed by a computing device or a processor directed by instructions stored in memory. As an example, some or all processes of the method are performed according to instructions stored in a non-transitory computer-readable medium. In various embodiments, the process202of generating one or more virtual occurrences includes generating, such as based at least in part upon a character profile of a virtual character, one or more virtual occurrences to be encountered by the virtual character. In some embodiments, the process202of generating the one or more virtual occurrences includes generating the one or more virtual occurrences based further in part upon one or more unlocked regions of a virtual map of the telematics-based game. In some examples, each virtual occurrence of the one or more virtual occurrences includes a steering difficulty corresponding to one or more virtual steering obstacles, a braking difficulty corresponding to one or more virtual braking obstacles, a speeding difficulty corresponding to one or more virtual speeding obstacles, and/or a focus difficulty corresponding to one or more virtual focus obstacles. In some examples, each virtual occurrence of the one or more virtual occurrences includes one or more virtual obstacles to be encountered by the virtual character. In various embodiments, the process204of determining one or more outcomes includes determining, such as based at least in part upon a plurality of virtual ratings of the virtual character, one or more outcomes associated with the one or more virtual occurrences. In some examples, the process204of determining the one or more outcomes includes determining the one or more outcomes based at least in part upon the steering difficulty, the braking difficulty, the speeding difficulty, the focus difficulty, a virtual steering rating of the virtual steering skill, a virtual braking rating of the virtual braking skill, a virtual speeding rating of the virtual speeding skill, and/or a virtual focus rating of the virtual focus skill. In various embodiments, the process206of generating a virtual trip includes generating a virtual trip including the one or more virtual occurrences with the associated one or more outcomes. In various embodiments, the process208of determining a trip success prediction includes determining, such as based at least in part upon the one or more outcomes, a trip success prediction of the virtual character completing the virtual trip. 
In various embodiments, the process210of determining a predicted change in vehicle condition includes determining, such as based at least in part upon the one or more outcomes, a predicted change in vehicle condition of a virtual vehicle. In various examples, the predicted change in vehicle condition being indicative of a degree of damage to be sustained by the virtual vehicle during the virtual trip. In various embodiments, the process212of presenting the trip success prediction includes presenting the trip success prediction, the predicted change in vehicle condition, a first user-selectable command, and a second user-selectable command to the user. In various embodiments, the process214of updating the character profile includes updating, upon receiving the user's selection of the first user-selectable command, the character profile by at least initiating the virtual trip with the virtual character. In some embodiments, the process214of updating the character profile includes updating a vehicle condition of the virtual vehicle based on the predicted change in vehicle condition. In various embodiments, the process216of updating the one or more outcomes includes updating, upon receiving the user's selection of the second user-selectable command, the one or more outcomes according to a predetermined adjustment. The predetermined adjustment may be pre-determined for a particular in-game item associated with the second user-selectable command, such as a boost item. In various embodiments, the process218of updating the character profile includes updating, upon receiving the user's selection of the second user-selectable command the character profile by at least initiating the virtual trip with the virtual character based on the updated one or more outcomes. In some embodiments, the process218of updating the character profile includes updating a vehicle condition of the virtual vehicle based on the predicted change in vehicle condition. In various embodiments, the process220of presenting the updated character profile includes presenting the updated character profile to the user. In some embodiments, the process220of presenting the updated character profile includes presenting the updated vehicle condition of the virtual vehicle. One or More Systems for Training a Virtual Driver According to Various Embodiments FIG.3is a simplified diagram showing a system for training a virtual driver, according to some embodiments. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. In some examples, the system300includes a data receiving module302, a score determining module304, an experience determining module306, and an experience applying module308. In certain examples, the system300is configured to implement method400ofFIG.4. Although the above has been shown using a selected group of components, there can be many alternatives, modifications, and variations. For example, some of the components may be expanded and/or combined. Some components may be removed. Other components may be inserted to those noted above. Depending upon the embodiment, the arrangement of components may be interchanged with others replaced. In some embodiments, the data receiving module302is configured to receive telematics data associated with a real-world driver. 
In some examples, the data receiving module302is configured to receive telematics data associated with one or more real trips during which the real-world driver (e.g., a user or player) operated a real vehicle. In certain examples, the telematics data are collected via one or more sensors associated with the real vehicle and/or with a mobile device associated with the user. In various examples, the telematics data are received in real-time, or in near real-time, with the collection thereof, such as during the commencement of the one or more real trips. In some embodiments, the score determining module304is configured to determine one or more driving scores corresponding to one or more real-world driving characteristics based at least in part upon the telematics data. A characteristic may also be referred to as a trait or a skill. In various examples, the one or more real-world driving characteristics include a braking characteristic, a steering characteristic, a speeding characteristic, and/or a focus characteristic. In some examples, the braking characteristic corresponds to the real-world driver's ability to decelerate the real vehicle upon encountering braking obstacles, such as T-junctions or pedestrian crossings. In some examples, the steering characteristic corresponds to the real-world driver's ability to steer the real vehicle upon encountering steering obstacles, such as on-road objects (e.g., potholes, roadkill) or sharp turns. In some examples, the speeding characteristic corresponds to the real-world driver's ability to decelerate the real vehicle upon encountering speeding obstacles, such as instances in which the real vehicle operated by the user is traveling faster than a speed limit. In some examples, the focus characteristic corresponds to the real-world driver's ability to maintain or regain focus while operating the real vehicle upon encountering focus obstacles, such as when the user is about to use their phone. In some embodiments, the experience determining module306is configured to determine one or more virtual experiences for a telematics-based game. A virtual experience may be referred to as a virtual occurrence or virtual event. In some examples, the experience determining module306is configured to determine the one or more virtual experiences based in part upon a character profile of a virtual character. For example, the experience determining module306is configured to determine the one or more virtual experiences based in part upon one or more skill ratings (or levels) of a plurality of virtual skills (e.g., steering, braking, speeding, focus), and/or one or more unlocked regions of a virtual game map. In some embodiments, the experience applying module308is configured to apply the one or more virtual experiences to a pre-selected virtual driver to train the virtual driver. In some examples, the experience applying module308is configured to initiate the one or more virtual experiences for a virtual character, such as one selected by a user. In various examples, a virtual experience includes a virtual trip, a virtual scene, a virtual occurrence, a virtual event, a virtual incident, a virtual mini-game, and/or a virtual interaction. For example, a virtual trip includes one or more virtual obstacles configured to be encountered by the virtual character, which the virtual character may succeed in overcoming based on a plurality of ratings of a plurality of virtual characteristics associated with the virtual character.
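A short sketch of how per-characteristic driving scores might be derived from flagged telematics events is given below. The event representation, the per-mile normalization, the penalty weights, and the 0-100 scale are illustrative assumptions, not the claimed scoring method.

# Illustrative sketch: count flagged events per characteristic and turn the rates into 0-100 scores.
PENALTY_PER_EVENT_PER_MILE = {"braking": 1.0, "steering": 1.5, "speeding": 0.8, "focus": 2.0}  # assumed

def driving_scores(events, miles_driven):
    """events: iterable of dicts such as {"type": "braking"} flagged from the telematics data."""
    counts = {name: 0 for name in PENALTY_PER_EVENT_PER_MILE}
    for event in events:
        event_type = event.get("type")
        if event_type in counts:
            counts[event_type] += 1
    scores = {}
    for name, count in counts.items():
        rate = count / max(miles_driven, 1.0)                       # flagged events per mile
        penalty = PENALTY_PER_EVENT_PER_MILE[name] * rate * 100.0   # scaled penalty
        scores[name] = max(0.0, 100.0 - penalty)
    return scores

# Example: two hard-braking events and one speeding event over a 10-mile real trip.
example_scores = driving_scores(
    [{"type": "braking"}, {"type": "braking"}, {"type": "speeding"}],
    miles_driven=10.0,
)  # braking: 80.0, speeding: 92.0, steering: 100.0, focus: 100.0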
One or More Methods for Training a Virtual Driver According to Various Embodiments FIG.4is a simplified diagram showing a method for training a virtual driver, according to some embodiments. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. In certain examples, the method400is implemented by the system300ofFIG.3. In some examples, the method400includes a process402of receiving telematics data associated with a real-world driver, a process404of determining one or more driving scores corresponding to one or more real-world driving characteristics based at least in part upon the telematics data, a process406of determining one or more virtual experiences corresponding to one or more virtual driving characteristics based at least in part upon the one or more driving scores, and a process408of applying the one or more virtual experiences to a pre-selected virtual driver to train the virtual driver. Although the above has been shown using a selected group of processes for the method, there can be many alternatives, modifications, and variations. For example, some of the processes may be expanded and/or combined. Other processes may be inserted to those noted above. Some processes may be removed. Depending upon the embodiment, the sequence of processes may be interchanged with others replaced. In some embodiments, the process402of receiving telematics data associated with a real-world driver includes receiving telematics data associated with one or more real trips during which the real-world driver (e.g., a user or player) operated a real vehicle. In certain examples, the telematics data are collected via one or more sensors associated with the real vehicle and/or with a mobile device associated with the user. In various examples, the telematics data are received in real-time, or in near real-time, with the collection thereof, such as during the commencement of the one or more real trips. In some embodiments, the process404of determining one or more driving scores includes determining driving scores for a braking characteristic, a steering characteristic, a speeding characteristic, and/or a focus characteristic. In some examples, the braking characteristic corresponds to the real-world driver's ability to decelerate the real vehicle upon encountering braking obstacles, such as T-junctions or pedestrian crossings. In some examples, the steering characteristic corresponds to the real-world driver's ability to steer the real vehicle upon encountering steering obstacles, such as on-road objects (e.g., potholes, roadkill) or sharp turns. In some examples, the speeding characteristic corresponds to the real-world driver's ability to decelerate the real vehicle upon encountering speeding obstacles, such as instances in which the real vehicle operated by the user is traveling faster than a speed limit. In some examples, the focus characteristic corresponds to the real-world driver's ability to maintain or regain focus while operating the real vehicle upon encountering focus obstacles, such as when the user is about to use their phone. In some embodiments, the process406of determining one or more virtual experiences includes determining the one or more virtual experiences based in part upon a character profile of a virtual character.
For example, determining the one or more virtual experiences includes determining the one or more virtual experiences based in part upon a one or more skill ratings (or levels) of a plurality of virtual skills (e.g., steering, braking, speeding, focus), and/or one or more unlocked regions of a virtual game map. In some embodiments, the process408of applying the one or more virtual experiences includes applying the one or more virtual experiences to a pre-selected virtual driver to train the virtual driver. In some examples, the process408of applying the one or more virtual experiences includes initiating the one or more virtual experiences for a virtual character, such as one selected by a user. In various examples, a virtual experience includes a virtual trip, a virtual scene, a virtual occurrence, a virtual event, a virtual incident, a virtual mini-game, and/or a virtual interaction. For example, a virtual trip includes one or more virtual obstacles configured to be encountered by the virtual character, where the virtual character may succeed in overcoming based on a plurality of ratings of a plurality of virtual characteristics associated with the virtual character. One or More Computer Devices According to Various Embodiments FIG.5is a simplified diagram showing a computer device5000, according to various embodiments of the present disclosure. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. In some examples, the computer device5000includes a processing unit5002, a memory unit5004, an input unit5006, an output unit5008, and a communication unit5010. In various examples, the computer device5000is configured to be in communication with a user5100and/or a storage device5200. In certain examples, the system computer device5000is configured according to system100ofFIG.1, system300ofFIG.3, to implement method200ofFIG.2, and/or to implement method400ofFIG.4. Although the above has been shown using a selected group of components, there can be many alternatives, modifications, and variations. In some examples, some of the components may be expanded and/or combined. Some components may be removed. Other components may be inserted to those noted above. Depending upon the embodiment, the arrangement of components may be interchanged with others replaced. In various embodiments, the processing unit5002is configured for executing instructions, such as instructions to implement method200ofFIG.2and/or method400ofFIG.4. In some embodiments, executable instructions may be stored in the memory unit5004. In some examples, the processing unit5002includes one or more processing units (e.g., in a multi-core configuration). In certain examples, the processing unit5002includes and/or is communicatively coupled to one or more modules for implementing the systems and methods described in the present disclosure. In some examples, the processing unit5002is configured to execute instructions within one or more operating systems, such as UNIX, LINUX, Microsoft Windows®, etc. In certain examples, upon initiation of a computer-implemented method, one or more instructions is executed during initialization. In some examples, one or more operations is executed to perform one or more processes described herein. In certain examples, an operation may be general or specific to a particular programming language (e.g., C, C#, C++, Java, or other suitable programming languages, etc.). 
In various examples, the processing unit5002is configured to be operatively coupled to the storage device5200, such as via an on-board storage unit5012. In various embodiments, the memory unit5004includes a device allowing information, such as executable instructions and/or other data, to be stored and retrieved. In some examples, memory unit5004includes one or more computer readable media. In some embodiments, the instructions stored in the memory unit5004include computer readable instructions for providing a user interface, such as to the user5100, via the output unit5008. In some examples, a user interface includes a web browser and/or a client application. In various examples, a web browser enables one or more users, such as the user5100, to display and/or interact with media and/or other information embedded on a web page and/or a website. In certain examples, the memory unit5004includes computer readable instructions for receiving and processing an input, such as from the user5100, via the input unit5006. In certain examples, the memory unit5004includes random access memory (RAM) such as dynamic RAM (DRAM) or static RAM (SRAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and/or non-volatile RAM (NVRAM). In various embodiments, the input unit5006is configured to receive input, such as from the user5100. In some examples, the input unit5006includes a keyboard, a pointing device, a mouse, a stylus, a touch sensitive panel (e.g., a touch pad or a touch screen), a gyroscope, an accelerometer, a position detector (e.g., a Global Positioning System), and/or an audio input device. In certain examples, the input unit5006, such as a touch screen of the input unit, is configured to function as both the input unit and the output unit. In various embodiments, the output unit5008includes a media output unit configured to present information to the user5100. In some embodiments, the output unit5008includes any component capable of conveying information to the user5100. In certain embodiments, the output unit5008includes an output adapter, such as a video adapter and/or an audio adapter. In various examples, the output unit5008, such as an output adapter of the output unit, is operatively coupled to the processing unit5002and/or operatively coupled to a presenting device configured to present the information to the user, such as via a visual display device (e.g., a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a cathode ray tube (CRT) display, an "electronic ink" display, a projected display, etc.) or an audio display device (e.g., a speaker arrangement or headphones). In various embodiments, the communication unit5010is configured to be communicatively coupled to a remote device. In some examples, the communication unit5010includes a wired network adapter, a wireless network adapter, a wireless data transceiver for use with a mobile phone network (e.g., Global System for Mobile communications (GSM), 3G, 4G, or Bluetooth), and/or other mobile data networks (e.g., Worldwide Interoperability for Microwave Access (WIMAX)). In certain examples, other types of short-range or long-range networks may be used. In some examples, the communication unit5010is configured to provide email integration for communicating data between a server and one or more clients.
In various embodiments, the storage unit5012is configured to enable communication between the computer device5000, such as via the processing unit5002, and an external storage device5200. In some examples, the storage unit5012is a storage interface. In certain examples, the storage interface is any component capable of providing the processing unit5002with access to the storage device5200. In various examples, the storage unit5012includes an Advanced Technology Attachment (ATA) adapter, a Serial ATA (SATA) adapter, a Small Computer System Interface (SCSI) adapter, a RAID controller, a SAN adapter, a network adapter, and/or any other component capable of providing the processing unit5002with access to the storage device5200. In some examples, the storage device5200includes any computer-operated hardware suitable for storing and/or retrieving data. In certain examples, the storage device5200is integrated in the computer device5000. In some examples, the storage device5200includes a database, such as a local database or a cloud database. In certain examples, the storage device5200includes one or more hard disk drives. In various examples, the storage device is external and is configured to be accessed by a plurality of server systems. In certain examples, the storage device includes multiple storage units such as hard disks or solid state disks in a redundant array of inexpensive disks (RAID) configuration. In some examples, the storage device5200includes a storage area network (SAN) and/or a network attached storage (NAS) system. One or More Computer Systems According to Various Embodiments FIG.6is a simplified diagram showing a computer system7000according to various embodiments of the present disclosure. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. In some examples, the system7000includes a vehicle system7002, a network7004, and a server7006. In certain examples, the system7000, the vehicle system7002, and/or the server7006is configured according to system100ofFIG.1, system300ofFIG.3, to implement method200ofFIG.2, and/or to implement method400ofFIG.4. Although the above has been shown using a selected group of components, there can be many alternatives, modifications, and variations. In some examples, some of the components may be expanded and/or combined. Some components may be removed. Other components may be added to those noted above. Depending upon the embodiment, the arrangement of components may be changed, with some components interchanged or replaced. In various embodiments, the vehicle system7002includes a vehicle7010and a client device7012associated with the vehicle7010. In various examples, the client device7012is an on-board computer embedded or located in the vehicle7010. As an example, the client device7012is a mobile device (e.g., a smartphone) that is connected (e.g., via a wired connection or a wireless connection) to the vehicle7010. In some examples, the client device7012includes a processor7016(e.g., a central processing unit (CPU) and/or a graphics processing unit (GPU)), a memory7018(e.g., a storage unit, random-access memory (RAM), read-only memory (ROM), and/or flash memory), a communications unit7020(e.g., a network transceiver), a display unit7022(e.g., a touchscreen), and one or more sensors7024(e.g., an accelerometer, a gyroscope, a magnetometer, and/or a GPS sensor). In various embodiments, the vehicle7010is operated by a user.
In certain embodiments, the system7000includes multiple vehicles7010, each vehicle of the multiple vehicles operated by a respective user of multiple users. In various examples, the one or more sensors7024monitor, during one or more vehicle trips, the vehicle7010by at least collecting data associated with one or more operating parameters of the vehicle, such as speed, speeding, braking, location, engine status, and/or other suitable parameters. In certain examples, the collected data include vehicle telematics data. According to some embodiments, the data are collected continuously, at predetermined time intervals, and/or based on one or more triggering events (e.g., when a sensor has acquired more than a threshold amount of sensor measurements). In various examples, the data collected by the one or more sensors7024correspond to user driving data, which may correspond to a driver's driving behaviors, in the methods and/or systems of the present disclosure. According to various embodiments, the collected data are stored in the memory7018before being transmitted to the server7006using the communications unit7020via the network7004(e.g., via a local area network (LAN), a wide area network (WAN), or the Internet). In some examples, the collected data are transmitted directly to the server7006via the network7004. In certain examples, the collected data are transmitted to the server7006via a third party. In some examples, a data monitoring system, managed or operated by a third party, is configured to store data collected by the one or more sensors7024and to transmit such data to the server7006via the network7004or a different network. According to various embodiments, the server7006includes a processor7030(e.g., a microprocessor, a microcontroller), a memory7032(e.g., a storage unit), a communications unit7034(e.g., a network transceiver), and a data storage7036(e.g., one or more databases). In some examples, the server7006is a single server, while in certain embodiments, the server7006includes a plurality of servers with distributed processing and/or storage. In certain examples, the data storage7036is part of the server7006or is coupled to the server7006, such as via a network (e.g., the network7004). In some examples, data, such as processed data and/or results, may be transmitted from the data storage, such as via the communications unit7034, the network7004, and/or the communications unit7020, to the client device7012, such as for display by the display unit7022. In some examples, the server7006includes various software applications stored in the memory7032and executable by the processor7030. In some examples, these software applications include specific programs, routines, and/or scripts for performing functions associated with the methods of the present disclosure. In certain examples, the software applications include general-purpose software applications for data processing, network communication, database management, web server operation, and/or other functions typically performed by a server. In various examples, the server7006is configured to receive, such as via the network7004and via the communications unit7034, the data collected by the one or more sensors7024from the client device7012, and to store the data in the data storage7036. In some examples, the server7006is further configured to process, via the processor7030, the data to perform one or more processes of the methods of the present disclosure.
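As a non-limiting illustration of the data collection and transmission described above, the following sketch buffers sensor samples on the client device and hands them off for upload once a threshold count is reached, loosely corresponding to a triggering event; the class name, field names, and threshold value are assumptions made for illustration only and are not part of the disclosed embodiments.

import json
from typing import List

class TelematicsBuffer:
    """Accumulates sensor samples and flushes them once a threshold is reached."""
    def __init__(self, threshold: int = 100):
        self.threshold = threshold
        self.samples: List[dict] = []

    def record(self, speed: float, braking: float, lat: float, lon: float) -> None:
        # Each sample is one reading of a few operating parameters.
        self.samples.append({"speed": speed, "braking": braking, "lat": lat, "lon": lon})
        if len(self.samples) >= self.threshold:
            self.flush()

    def flush(self) -> None:
        payload = json.dumps(self.samples)
        # In a real client this payload would be transmitted to the server over
        # the network; here the hand-off is only simulated.
        print(f"uploading {len(self.samples)} samples ({len(payload)} bytes)")
        self.samples.clear()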
Examples of Computer Program Product According to Some Embodiments of the Present Disclosure FIGS.7A-34depict respective example interfaces associated with various functionalities described herein. In particular, the example interfaces relate to operation of a virtual vehicle within a virtual environment. In embodiments, a computing device (e.g., the computer device5000or computer system7000) may be configured to display the interfaces, where the computing device may be located within a vehicle and an operator of the vehicle may review the interfaces. It should be appreciated that the interfaces are merely exemplary, and that additional and alternative content is envisioned. In various examples, the example interface presents an in-game wallet corresponding to the user and/or a particular virtual character. FIG.7Adepicts an example interface associated with a virtual environment, the example interface including an autodrive mode selection overlay configured for a user (e.g., player) to select whether to send a virtual character onto one or more automatic virtual drives. In certain examples, during a virtual drive, the virtual character completes a virtual trip based on a plurality of scores of a plurality of virtual skills (e.g., braking, steering, speeding, focus). In some examples, when the user selects to disable the autodrive mode, the system sends the virtual character onto one or more manual virtual drives. In certain examples, during a manual drive, the virtual character completes a virtual trip based on user input during the virtual trip, such as via one or more interactive commands on an interactive interface (e.g., of a mobile device). FIG.7Bdepicts an example interface associated with a virtual environment, the example interface including a presentation of a trip success prediction or an obstacle avoidance success prediction. In some examples, during an autodrive, a virtual character's likelihood of success in overcoming an obstacle is shown, for example, as a percentage, with the likelihood of success determined by a corresponding virtual skill. For example, the depicted obstacle is a steering obstacle, such as a pothole, and the virtual character's likelihood of success in avoiding the steering obstacle is determined, based at least in part upon the virtual character's virtual steering skill (e.g., a rating of 2), to be 60%. FIG.7Cdepicts an example interface associated with a virtual environment, the example interface including a virtual map consisting of various roadways, buildings, homes, landscape elements, and/or the like. On the virtual map, the example interface further includes a virtual route corresponding to a virtual trip, the virtual route including one or more virtual obstacles to be encountered by a virtual character should a user send the virtual character onto the virtual trip. In the depicted example, the one or more virtual obstacles include two steering obstacles. In various examples, the example interface further includes a trip success prediction for a virtual character, such as one selected by a user. The trip success prediction is determined based on a plurality of virtual skills and/or characteristics.
In the depicted example, the trip success prediction is high, as indicated by the displayed text of “Your driver has no problem with Hazards here.” In the depicted example, the virtual character has a virtual steering skill rating of 5, a virtual braking skill rating of 5, a virtual speeding skill rating of 5, and a virtual focus skill rating of 5. In the depicted example, the interface shows that the virtual character would travel the virtual trip with the autodrive mode activated. In various examples, the example interface presents a boost command configured to be selected by a user, which, upon the user's selection, modifies, such as increases, the likelihood of success of the virtual character completing the virtual trip. In certain examples, the example interface presents a drive command configured to be selected by a user, which, upon the user's selection, sends the virtual character onto the virtual trip. In various examples, the example interface presents a plurality of virtual characters, each selectable by a user, such as to be trained, to be sent onto a virtual trip, and to be played in the telematics-based game. FIG.8Adepicts an example interface associated with a virtual environment, the example interface including a presentation of a gift received by a user or by a virtual character. For example, a gift is a boost drink, which may be referred to as a “driver-ade drink,” configured to be used to increase a virtual character's one or more virtual skills, such as during one or more virtual trips, such as to improve a virtual character's likelihood of success in avoiding one or more virtual obstacles during the one or more virtual trips. In certain examples, a gift may be sent and received between friends, such as in-game friends of the telematics-based game. FIG.8Bdepicts an example interface associated with a virtual environment, the example interface including a presentation of an ongoing virtual trip, such as in a first virtual map, which may be of a bigger size and presented with a three-dimensional perspective, and/or in a second virtual map, which may be of a smaller size and presented with a linear road. In certain examples, the first virtual map of the example interface shows where the virtual character is in a game world, which may include multiple unlockable regions or zones. In certain examples, the second virtual map of the example interface shows a virtual vehicle associated with a virtual character and one or more virtual obstacles to be encountered by the virtual vehicle on a virtual trip. In the depicted example, the example interface shows that the autodrive is activated, indicating that the virtual vehicle will automatically maneuver itself upon encountering the one or more virtual obstacles. In some examples, the example interface shows a number of in-game items (e.g., donuts), such as ones to be sold by the virtual character in the telematics-based game, such as to earn in-game currency. In various examples, the example interface allows a user to enter or exit autodrive mode, such as during a virtual trip. FIG.9Adepicts an example interface associated with a virtual environment, the example interface including a presentation to instruct a user to record a real drive to unlock a virtual character and an associated virtual vehicle. In some examples, recording a real drive includes activating one or more sensors on a real vehicle operated by the user, such as to generate telematics data indicative of the user's performance during one or more real trips.
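The obstacle-avoidance prediction of FIG.7B, described above, may be illustrated with a simple lookup from a virtual skill rating to a displayed percentage. The following is a minimal, non-limiting sketch: only the rating-of-2 to 60% pairing is taken from the depicted example, while the remaining table entries and the function name are illustrative assumptions.

# Hypothetical mapping from a virtual skill rating (1-5) to the displayed
# likelihood of overcoming a matching obstacle, in percent.
SUCCESS_BY_RATING = {1: 40, 2: 60, 3: 75, 4: 85, 5: 95}

def obstacle_success_chance(skill_ratings: dict, obstacle_type: str) -> int:
    # Look up the character's rating for the skill matching the obstacle type.
    rating = skill_ratings.get(obstacle_type, 1)
    return SUCCESS_BY_RATING.get(rating, 40)

print(obstacle_success_chance({"steering": 2}, "steering"))  # 60, as in FIG.7B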
FIG.9Bdepicts an example interface associated with a virtual environment, the example interface including a trip scoring presentation to teach a user how a rating of a real trip, such as one driven by the user, influences a daily score assignable and/or a reward grantable to the user or to a virtual character. For example, a rating may be bumpy driving, okay driving, smooth driving, great driving, or excellent driving. For example, a reward may be a rankpoint, such as one that may be accumulated by a user or a virtual character, such as for a daily ranking, weekly ranking, and/or monthly ranking. FIG.9Cdepicts an example interface associated with a virtual environment, the example interface including a presentation to instruct a user to allow location access, such as to always allow location access, for the system to save the user's real drives, such as even when the application is closed in a mobile device. FIG.10Adepicts an example interface associated with a virtual environment, the example interface including a presentation to notify that a user's device is not equipped with one or more sensors for one or more functionalities of the telematics-based game, yet the user may still play the game. FIG.10Bdepicts an example interface associated with a virtual environment, the example interface including a presentation to remind a user to record one or more real trips to earn in-game currency. FIG.10Cdepicts an example interface associated with a virtual environment, the example interface including a presentation to instruct a user to select a virtual character to be trained, such as to earn in-game experience. In some examples, the in-game experience is determined based on the user's performance in one or more real trips. FIG.11Adepicts an example interface associated with a virtual environment, the example interface including a trip scoring presentation to teach a user how a rating of a real trip, such as one driven by the user, influences a daily score assignable and/or a reward grantable to the user or to a virtual character. For example, a rating may be bumpy driving, okay driving, smooth driving, great driving, or excellent driving. For example, a reward may be a first in-game currency, which may be referred to as roadpoints, such as one that may be accumulated and/or used to purchase one or more in-game items of the telematics-based game. In some examples, the first in-game currency may only be earned via the user's real driving during one or more real trips. In some examples, said in-game items may only be purchased using the first in-game currency. FIG.11Bdepicts an example interface associated with a virtual environment, the example interface including a weekly rank presentation to show a user his/her current rank and/or rankpoint accumulation for the week. FIG.11Cdepicts an example interface associated with a virtual environment, the example interface including a presentation to notify that the user has access to daily scores and weekly rank associated with the telematics-based game. FIG.12Adepicts an example interface associated with a virtual environment, the example interface including a presentation showing a trip summary, such as a real trip summary, such as an unclaimed trip summary yet to be applied to a virtual character. In some examples, the trip summary includes experience gained for one or more skills, such as in-game experience (or skill points) earned for one or more virtual skills.
In the depicted example, during a real trip and based at least in part upon a user's real driving during the real trip, 4900 skill points (e.g., skillpoints) or experience were gained for the virtual steering skill, and 7200 skill points (e.g., skillpoints) or experience were gained for the virtual braking skill. In the example interface, a reward associated with the real trip is further presented, which may include a level-up for a virtual character and/or roadpoints. In the example interface, one or more historic real trips and their associated rewards may be presented to the user. In the example interface, a user may select one or more completed real trips to apply the rewards to the user's game profile and/or to a character profile of a virtual character selected by the user. FIG.12Bdepicts an example interface associated with a virtual environment, the example interface including a presentation showing a skill level page of a virtual driver. In the depicted example, the driver level as well as the driving skills have all been maxed out at level 5, with the corresponding skill points (e.g., skillpoints) maxed out at 5750 for the virtual character's virtual steering skill, braking skill, speeding skill, and focus skill. FIG.12Cdepicts an example interface associated with a virtual environment, the example interface including a presentation showing a weekly summary for a user, the weekly summary indicating the quantity of real trips driven by the user in a given week, a daily score, roadpoints and/or rankpoints earned during each day, and a weekly rank. FIG.13Adepicts an example interface associated with a virtual environment, the example interface including a presentation of an ongoing virtual trip, such as in a first virtual map and in a second virtual map. In the depicted example, the example interface shows that manual drive is activated or that the autodrive is deactivated, indicating that the virtual vehicle will be controlled by a user's interaction with one or more selectable commands. In the depicted example, the virtual vehicle is approaching a crosswalk, which may be a braking obstacle, which upon encountering by the virtual vehicle, may be avoided upon a manual selection, by the user, of the braking command. FIG.13Bdepicts an example interface associated with a virtual environment, the example interface including a presentation of an ongoing virtual trip, such as in a first virtual map and in a second virtual map. In the depicted example, the example interface shows that manual drive is activated or that the autodrive is deactivated, indicating that the virtual vehicle will be controlled by a user's interaction with one or more selectable commands. In the depicted example, the virtual vehicle is approaching a red light, which may be a braking obstacle, which upon encountering by the virtual vehicle, may be avoided upon a manual selection, by the user, of the braking command. FIG.13Cdepicts an example interface associated with a virtual environment, the example interface including a presentation of an ongoing virtual trip, such as in a first virtual map and in a second virtual map. In the depicted example, the example interface shows that manual drive is activated or that the autodrive is deactivated, indicating that the virtual vehicle will be controlled by a user's interaction with one or more selectable commands.
In the depicted example, the virtual vehicle is approaching a green light, which is not an obstacle, and thus a manual input here to brake or steer would result in a deduction in trip performance. FIG.14Adepicts an example interface associated with a virtual environment, the example interface including a presentation of an ongoing virtual trip, such as in a first virtual map and in a second virtual map. In the depicted example, the example interface shows that manual drive is activated or that the autodrive is deactivated, indicating that the virtual vehicle will be controlled by a user's interaction with one or more selectable commands. In the depicted example, the virtual vehicle is approaching a puddle, which may be a steering obstacle, which upon encountering by the virtual vehicle, may be avoided upon a manual selection, by the user, of the steering command. FIG.14Bdepicts an example interface associated with a virtual environment, the example interface including a presentation of an ongoing virtual trip, such as in a first virtual map and in a second virtual map. In the depicted example, the example interface shows that manual drive is activated or that the autodrive is deactivated, indicating that the virtual vehicle will be controlled by a user's interaction with one or more selectable commands. In the depicted example, the virtual vehicle is approaching a pothole, which may be a steering obstacle, which upon encountering by the virtual vehicle, may be avoided upon a manual selection, by the user, of the steering command. FIG.14Cdepicts an example interface associated with a virtual environment, the example interface including a presentation of an ongoing virtual trip, such as in a first virtual map and in a second virtual map. In the depicted example, the example interface shows that manual drive is activated or that the autodrive is deactivated, indicating that the virtual vehicle will be controlled by a user's interaction with one or more selectable commands. In the depicted example, the virtual vehicle is approaching a car accident, which may be a steering obstacle, which upon encountering by the virtual vehicle, may be avoided upon a manual selection, by the user, of the steering command. FIGS.15A,15B, and15Cdepict example interfaces associated with a virtual environment, the example interfaces including presentations of in-game items purchasable by a user in the telematics-based game. In the depicted examples, the in-game items are configured to facilitate in-game activities, such as to improve in-game currency earning rate. FIG.16Adepicts an example interface associated with a virtual environment, the example interface including a presentation of a vehicle condition of a virtual vehicle associated with a virtual character in the telematics-based game. In some examples, virtual vehicles may be damaged upon encountering one or more virtual obstacles. In certain examples, the degree of damage sustained by a virtual vehicle is at least dependent on an associated virtual character's one or more virtual skills (e.g., steering, braking, speeding, and/or focus). In various examples, the degree of damage sustained by a virtual vehicle is at least dependent on the difficulty of one or more virtual obstacles encountered by the virtual vehicle during one or more virtual trips. In some examples, the interface presents a time remaining for a damaged virtual vehicle to be fully repaired, after which it may again be sent on virtual trips.
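The repair mechanic of FIG.16A, together with the explanation that follows in FIG.16B, may be sketched as a repair timer that grows with the damage sustained, which in turn grows with the number of obstacles failed during the virtual drive. This is a non-limiting sketch; the base time, per-failure increment, cap, and function name are illustrative assumptions.

# Non-limiting sketch: repair time grows with the number of failed obstacles.
def repair_minutes(failed_obstacles: int, base: int = 10,
                   per_failure: int = 15, cap: int = 240) -> int:
    # More failed obstacles imply more damage and therefore a longer repair,
    # up to a maximum repair duration.
    return min(cap, base + per_failure * failed_obstacles)

print(repair_minutes(0))   # 10  -> minimal repair after a clean drive
print(repair_minutes(4))   # 70  -> more failed obstacles, longer repair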
FIG.16Bdepicts an example interface associated with a virtual environment, the example interface including an explanatory presentation notifying a user that the more obstacles one fails during a virtual drive, the more time it would take for a virtual vehicle to be fully repaired due to the increased damage sustained. FIG.16Cdepicts an example interface associated with a virtual environment, the example interface including an explanatory presentation notifying a user that he/she may initiate virtual trips at various zones or regions of the game world to gain a variety of rewards. FIG.17Adepicts an example interface associated with a virtual environment, the example interface including an explanatory presentation notifying the user that the telematics-based game includes general events that are beneficial to any virtual vehicles of the game, as well as food events beneficial to only specific virtual vehicles of the game. FIG.17Bdepicts an example interface associated with a virtual environment, the example interface including an explanatory presentation notifying the user that during manual mode of the driving game, the user is to tap on a corresponding icon at the right time to succeed in controlling a virtual vehicle to overcome a corresponding obstacle. FIG.17Cdepicts an example interface associated with a virtual environment, the example interface including an explanatory presentation notifying the user that, upon encountering a steering obstacle, the user should tap on the steering icon to avoid the steering obstacle. FIG.18Adepicts an example interface associated with a virtual environment, the example interface including an explanatory presentation notifying the user that, upon encountering a braking obstacle, the user should tap on the braking icon to avoid the braking obstacle. FIG.18Bdepicts an example interface associated with a virtual environment, the example interface including an explanatory presentation notifying the user that tapping the correct skill-associated icon early will gain a better score than tapping later. FIG.18Cdepicts an example interface associated with a virtual environment, the example interface including an explanatory presentation notifying the user that, upon encountering a focus obstacle, the user should tap on the focus icon to avoid the focus obstacle. FIG.19Adepicts an example interface associated with a virtual environment, the example interface including an explanatory presentation notifying the user that, upon encountering a speeding obstacle, the user should tap on the speeding icon to avoid the speeding obstacle. FIG.19Bdepicts an example interface associated with a virtual environment, the example interface including an explanatory presentation notifying the user that the user may upgrade appearance and/or bonus-gaining items or features. FIG.19Cdepicts an example interface associated with a virtual environment, the example interface including an explanatory presentation notifying the user that the virtual vehicle still needs to be parked. FIG.20Adepicts an example interface associated with a virtual environment, the example interface including an explanatory presentation notifying the user that one or more quests have been completed and new quests will be generated at the start of the day.
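The manual-mode tapping guidance of FIGS.17B through19A, described above, may be sketched as a timing-based score: a correct tap made earlier scores higher, and a tap made when no obstacle is present results in a deduction in trip performance, consistent with the green-light example of FIG.13C. The scoring constants and the function name below are illustrative assumptions rather than the disclosed computation.

from typing import Optional

def score_tap(tapped_skill: str, obstacle_type: Optional[str],
              distance_to_obstacle: float) -> int:
    # A tap with no obstacle present, or of the wrong icon, is a deduction.
    if obstacle_type is None or tapped_skill != obstacle_type:
        return -10
    # Earlier taps (greater remaining distance, capped at 100 units) score higher.
    return int(50 + min(distance_to_obstacle, 100.0) / 2)

print(score_tap("braking", "braking", 80.0))   # early, correct tap -> 90
print(score_tap("steering", None, 0.0))        # tap at a green light -> -10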
FIG.20Bdepicts an example interface associated with a virtual environment, the example interface including an explanatory presentation notifying the user that a virtual character has leveled up a virtual skill (e.g., virtual steering skill, virtual braking skill, virtual focus skill, or virtual speeding skill), indicating that the virtual character has become more capable in avoiding an associated virtual obstacle. FIG.20Cdepicts an example interface associated with a virtual environment, the example interface including an explanatory presentation notifying the user that in a tapping game, the user may tap faster to gain points faster. FIG.21Adepicts an example interface associated with a virtual environment, the example interface including an explanatory presentation notifying the user that in a tapping game, the user may tap at a specific region to gain points. FIG.21Bdepicts an example interface associated with a virtual environment, the example interface including an explanatory presentation notifying the user to record real life driving to gain roadpoints based on trip scores. FIG.22depicts an example interface associated with a virtual environment, the example interface including a real driving leaderboard, such as a weekly real driving leaderboard, such as one displaying the user's current rank against one or more other players of the telematics-based game. As depicted, the ranking is based on daily score earned. FIG.23Adepicts an example interface associated with a virtual environment, the example interface including a virtual map consisting of various roadways, buildings, homes, landscape elements, and/or the like. On the virtual map, the example interface further includes a virtual route corresponding to a virtual trip, the virtual route including one or more virtual obstacles to be encountered by a virtual character should a user send the virtual character onto the virtual trip. In the depicted example, the one or more virtual obstacles include a focus obstacle, a speeding obstacle, a braking obstacle, and a steering obstacle. In various examples, the example interface further includes a trip difficulty level of the virtual trip, which may be determined based on the one or more obstacles of the virtual trip and/or the virtual character's one or more virtual skills. In the depicted example, the trip difficulty level is 5, and the virtual character has a virtual steering skill rating of 1, a virtual braking skill rating of 3, a virtual speeding skill rating of 5, and a virtual focus skill rating of 1. In the depicted example, the interface shows that the virtual character would travel the virtual trip with the autodrive mode deactivated. In various examples, the example interface presents a boost command configured to be selected by a user, which, upon the user's selection, modifies, such as increases, the likelihood of success of the virtual character completing the virtual trip. In certain examples, the example interface presents a drive command configured to be selected by a user, which, upon the user's selection, sends the virtual character onto the virtual trip. In various examples, the example interface presents a plurality of virtual characters, each selectable by a user, such as to be trained, to be sent onto a virtual trip, and to be played in the telematics-based game.
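A non-limiting sketch of deriving a trip difficulty level, in the spirit of FIG.23A, from the trip's obstacles and the character's skill ratings follows; the formula and scaling are assumptions made for illustration and not the disclosed computation, although they happen to reproduce the depicted value of 5 for the depicted character.

# Non-limiting sketch: the trip is treated as being as difficult as its hardest
# obstacle for this character, where a low rating in the matching skill makes
# that obstacle harder. Difficulty is clamped to the 1-5 range.
def trip_difficulty(obstacles: list, skill_ratings: dict) -> int:
    hardest = max(6 - skill_ratings.get(ob, 1) for ob in obstacles)
    return max(1, min(5, hardest))

ratings = {"steering": 1, "braking": 3, "speeding": 5, "focus": 1}
print(trip_difficulty(["focus", "speeding", "braking", "steering"], ratings))  # 5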
FIG.23Bdepicts an example interface associated with a virtual environment, the example interface including a quick repair icon configured to be selected by the user to immediately finish a virtual vehicle repair. In some examples, a user may spend a certain amount of roadpoints to immediately repair a virtual vehicle. FIG.24Adepicts an example interface associated with a virtual environment, the example interface including a trip summary of a real trip driven by a user. As depicted, the interface presents a real map indicating the route taken by the user in the real trip. As depicted, the interface presents an overall trip score and a plurality of scores associated with a plurality of real skills (e.g., steering, braking, speeding, focus). As depicted, the interface presents a trip rating, which as indicated in the example, is “smooth driving.” FIG.24Bdepicts an example interface associated with a virtual environment, the example interface including a real driving leaderboard, such as a weekly real driving leaderboard, such as one displaying the user's current rank against one or more other players of the telematics-based game. As depicted, the ranking is based on daily score earned. FIG.24Cdepicts an example interface associated with a virtual environment, the example interface including a real map indicating the route taken by the user in the real trip. As depicted, the real map includes one or more real obstacles encountered by the user during the real trip. In the depicted example, the user encountered three real speeding obstacles and three real focus obstacles on the real trip. FIG.25Adepicts an example interface associated with a virtual environment, the example interface including a rank history showing a user's historic real-world driving performances, such as weekly performances. FIG.25Bdepicts an example interface associated with a virtual environment, the example interface including a presentation to remind a user to record one or more real trips to earn in-game currency. FIG.25Cdepicts an example interface associated with a virtual environment, the example interface including a trip scoring presentation to teach a user how a rating of a real trip, such as one driven by the user, influences a daily score assignable and/or a reward grantable to the user or to a virtual character. For example, a rating may be bumpy driving, okay driving, smooth driving, great driving, or excellent driving. For example, a reward may be a first in-game currency, which may be referred to as roadpoints, such as one that may be accumulated and/or used to purchase one or more in-game items of the telematics-based game. In some examples, the first in-game currency may only be earned via the user's real driving during one or more real trips. In some examples, said in-game items may only be purchased using the first in-game currency. FIGS.26A and26Bdepict example interfaces associated with a virtual environment, the example interfaces displaying a character selection menu configured to present a plurality of selectable virtual characters and their plurality of virtual skill levels. FIG.27Adepicts an example interface associated with a virtual environment, the example interface including a weekly performance of a user or a virtual character. In the depicted example, the weekly performance includes skill ratings of a plurality of real skills.
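As a rough, non-limiting sketch of the trip scoring presentations of FIGS.9B,11A, and25C described above, the mapping below converts a trip rating into a daily score contribution and a rankpoint or roadpoint reward. The rating names follow the interface text; the numeric values, field names, and function name are assumptions made only for illustration.

# Hypothetical reward table keyed by the trip rating shown in the interfaces.
RATING_REWARDS = {
    "bumpy driving":     {"daily_score": 40,  "rankpoints": 1, "roadpoints": 10},
    "okay driving":      {"daily_score": 60,  "rankpoints": 2, "roadpoints": 20},
    "smooth driving":    {"daily_score": 75,  "rankpoints": 3, "roadpoints": 30},
    "great driving":     {"daily_score": 90,  "rankpoints": 4, "roadpoints": 45},
    "excellent driving": {"daily_score": 100, "rankpoints": 5, "roadpoints": 60},
}

def grant_trip_reward(rating: str) -> dict:
    # Unknown ratings fall back to a middle-of-the-road reward.
    return RATING_REWARDS.get(rating, RATING_REWARDS["okay driving"])

print(grant_trip_reward("smooth driving"))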
FIG.27Bdepicts an example interface associated with a virtual environment, the example interface including a trip scoring presentation to teach a user how a rating of a real trip, such as one driven by the user, influences a daily score assignable and/or a reward grantable to the user or to a virtual character. For example, a rating may be bumpy driving, okay driving, smooth driving, great driving, or excellent driving. For example, a reward may be a rankpoint, such as one that may be accumulated by a user or a virtual character, such as for a daily ranking, weekly ranking, and/or monthly ranking. FIG.27Cdepicts an example interface associated with a virtual environment, the example interface including a weekly rank presentation to show a user his/her current rank and/or rankpoint accumulation for the week. FIG.28Adepicts an example interface associated with a virtual environment, the example interface including an explanatory presentation to notify a user that the game may record and rate real driving of the user to help identify whether the user is a defensive driver. FIG.28Bdepicts an example interface associated with a virtual environment, the example interface including an explanatory presentation to notify a user that roadpoints may be used in-game to perform certain tasks. FIG.28Cdepicts an example interface associated with a virtual environment, the example interface including an explanatory presentation to notify a user that the user may select a virtual character from a plurality of virtual characters to train, such as by operating a real vehicle with good habits during a real trip. FIG.29depicts an example interface associated with a virtual environment, the example interface including a trip summary of a real trip driven by a user. As depicted, the interface presents a real map indicating the route taken by the user in the real trip. As depicted, the interface presents an overall trip score and a plurality of scores associated with a plurality of real skills (e.g., steering, braking, speeding, focus). As depicted, the interface presents a trip rating, which as indicated in the example, is “great driving.” As depicted, the interface further presents rewards earned during the real trip, such as a trainee level-up and roadpoints. FIG.30depicts an example interface associated with a virtual environment, the example interface including a presentation showing experience gained for one or more skills, such as in-game experience (or skill points) earned for one or more virtual skills. In the depicted example, 4500 skill points (e.g., skillpoints) were gained for the virtual steering skill, 4500 skill points (e.g., skillpoints) were gained for the virtual braking skill, 4500 skill points (e.g., skillpoints) were gained for the virtual speeding skill, and 4000 skill points (e.g., skillpoints) were gained for the virtual focus skill. FIG.31depicts an example interface associated with a virtual environment, the example interface including a rank history showing a user's historic real-world driving performances, such as weekly performances. FIG.32depicts an example interface associated with a virtual environment, the example interface including a presentation showing a skill level page of a virtual driver. In the depicted example, the virtual driver is at level one, and has a virtual steering skill level of 3, a virtual braking skill level of 3, a virtual speeding skill level of 2, and a virtual focus skill level of 2.
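The skill-point accumulation and skill level pages of FIGS.12A,12B,30, and32, described above, may be sketched as crediting earned skill points to a virtual skill and deriving its level from thresholds. The level-5 cap and the 5750-point maximum follow the depicted example of FIG.12B, while the intermediate thresholds and the function name are illustrative assumptions.

# Non-limiting sketch of skill-point accumulation with a level cap.
LEVEL_THRESHOLDS = [0, 1000, 2250, 3500, 4750, 5750]  # points required per level

def apply_skill_points(current_points: int, earned: int) -> tuple:
    # Credit the earned points, capped at the maximum, then derive the level
    # as the highest threshold reached.
    points = min(current_points + earned, LEVEL_THRESHOLDS[-1])
    level = max(i for i, t in enumerate(LEVEL_THRESHOLDS) if points >= t)
    return points, level

print(apply_skill_points(4900, 7200))   # (5750, 5): capped at the maximum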
FIG.33depicts an example interface associated with a virtual environment, the example interface including a trip summary of a real trip driven by a user. As depicted, the interface presents a real map indicating the route taken by the user in the real trip. As depicted, the interface presents an overall trip score and a plurality of scores associated with a plurality of real skills (e.g., steering, braking, speeding, focus). As depicted, the interface presents a trip rating, which as indicated in the example, is “great driving.” As depicted, the interface further presents rewards earned during the real trip, such as a trainee level-up and roadpoints. FIG.34depicts an example interface associated with a virtual environment, the example interface including a presentation showing a weekly summary for a user, the weekly summary indicating the quantity of real trips driven by the user in a given week, a daily score, roadpoints and/or rankpoints earned during each day, and a weekly rank. Examples of Certain Embodiments of the Present Disclosure Certain embodiments of the present disclosure are directed to telematics data processing. More particularly, some embodiments of the disclosure provide methods and systems for training a virtual operator based at least in part upon a real-world vehicle operator. Merely by way of example, some embodiments of the disclosure include connecting one or more real-world driving behaviors of a real-world vehicle operator to one or more driving behaviors of a virtual operator in a telematics-based game, but it would be recognized that the disclosure has a much broader range of applicability. In certain embodiments, systems and/or methods of the present disclosure provide entertainment to a user, wherein the entertainment is generated based at least in part upon telematics data associated with the user. In some examples, the entertainment is a telematics-based game playable by the user. In certain embodiments, systems and/or methods of the present disclosure provide one or more indicators associated with the driving behavior of a driver based at least in part upon telematics data associated with the driver. In some examples, systems and/or methods of the present disclosure modify an insurance policy of the driver based at least in part upon the driving behavior, such as through recurring automatic policy-updates. In some examples, such automatic policy-updates act as an incentive for the driver to improve their driving behavior. In certain embodiments, systems and/or methods of the present disclosure provide a calibration session for calibrating one or more base scores corresponding to one or more driving characteristics (e.g., speeding, braking, steering, focus). In some examples, systems and methods provide a plurality of driving sessions, where each driving session of the plurality of driving sessions corresponds to a set of driving scores, and each set of driving scores corresponds to a set of driving characteristics (e.g., speeding, braking, steering, focus). In certain embodiments, systems and/or methods of the present disclosure provide a game, which may be called Foodtruck Fury or Food Truck Fury, that allows Google sign-up and/or Facebook sign-up. In some examples, the game provided is a tapper genre game, such as one including buttons for steering, braking, accelerating, and/or focusing. In some examples, the game provided includes a Food Truck Park where one or more food trucks may operate.
In some examples, each food truck is associated with one virtual driver. In some examples, each virtual driver can level up, such as via in-game interactions and/or based on one or more driving behaviors of an associated real-world driver. In certain embodiments, systems and/or methods of the present disclosure provide a game including a game map having a plurality of regions, some of which may be unlockable, such as being inaccessible by a food truck until it is unlocked. In certain embodiments, systems and/or methods of the present disclosure provide a game including a food truck having a plurality of food items, some of which may be unlockable, such as being unavailable for sale until it is unlocked. In certain embodiments, systems and/or methods of the present disclosure provide a game including one or more story-based missions, game-control tutorials, and/or a zombie-based story. In certain embodiments, systems and/or methods of the present disclosure provide a game including roadpoints (or other currencies under a different name) earnable by a user and/or a driver. For example, roadpoints may be earned by a driver via driving in the real world. In some examples, the amount of roadpoints earned by a driver corresponds to the driving behavior of the driver during the real-world drive. For example, the better the driver drives in the real world, the greater the amount of roadpoints the driver is awarded. In certain examples, the roadpoints are a type of hard currency, such as one that can only be earned via real-world driving and cannot be earned by purchasing with real-world currencies (e.g., United States Dollars). In some examples, the roadpoints may be used to upgrade a virtual driver, a food truck, and/or a food park, and/or to purchase items, equipment, gifts, and/or recipes. In certain examples, systems and methods of the present disclosure provide a game including a soft currency (e.g., regular points/dollars), which may be used to upgrade a virtual driver, a food truck, and/or a food park, and/or to purchase items, equipment, gifts, and/or recipes. In some examples, one or more types of purchases or upgrades available via the use of hard currency are not available via the use of soft currency. In certain embodiments, systems and/or methods of the present disclosure provide a game including receiving telematics data associated with a driver/player/user, such as data collected using GPS, accelerometer, and/or gyroscope. In some examples, systems and/or methods of the present disclosure provide a game including granting roadpoints based at least in part upon the received telematics data. In certain embodiments, systems and/or methods of the present disclosure provide a game including a virtual driver trainable (e.g., having levels for leveling up, corresponding to one or more virtual driver's driving characteristics) by a real-world driver, such as based at least in part upon telematics data associated with the real-world driver. In some examples, systems and/or methods of the present disclosure provide a game including a plurality of virtual drivers (e.g., each virtual driver corresponding to a food truck of a plurality of food trucks) trainable by a real-world driver, such as one at a time. For example, a user/driver/player may select one virtual driver from the plurality of virtual drivers as a trainee, such as one who gains experience based at least in part upon one or more driving behaviors of the real-world driver.
In some examples, the same trainee can gain experience and/or level up through one or more driving trips driven by the real-world driver, such as until the user/player/driver selects another virtual driver as a new trainee. In some examples, one or more virtual drivers may have driving scores (e.g., corresponding to driving characteristics) different from that of the real-world driver, such as owing to the one-on-one training mechanism. In various examples, a virtual driver, such as when selected as a trainee, levels up faster if the real-world driver showed better driving behavior (e.g., having higher driving scores corresponding to one or more driving characteristics). In some examples, each driving trip of the real-world driver may be graded, such as Excellent, Great, Fair, or Bumpy, which may influence how the virtual driver levels up. In certain embodiments, systems and/or methods of the present disclosure provide a game including mini games that a player may play to level up the virtual drivers. In certain embodiments, systems and/or methods of the present disclosure provide a game including a manual-drive mode in which, when activated, the in-game food truck driven by the virtual driver is controlled by the player. In some examples, systems and methods of the present disclosure provide a game including an auto-drive mode, which when activated (e.g., by a player), would send an in-game food truck driven by the virtual driver on autopilot. For example, under auto-drive mode, a food truck may automatically drive to a destination without a player's interaction and/or automatically deliver food items at one or more destinations. In some examples, under auto-drive mode, a virtual driver may be sent on one or more tasks/missions/challenges, where the success rate/chance of completing each task of the one or more tasks corresponds to at least the levels of the driver characteristics of the virtual driver. In various examples, a player may control a plurality of food trucks simultaneously (e.g., spinning gameplay), such as by managing the tasks executable by the virtual drivers. In some examples, one or more zones of the map are harder zones having tasks of higher levels, which may correspond to the need for a virtual driver to have higher levels in order to have a high success rate in auto-drive mode. In certain embodiments, systems and/or methods of the present disclosure provide a game for a player to play during a first time period, drive during a second time period (e.g., without playing the game), then claim a reward in a third time period based at least in part upon the driving performed in the second time period. In some examples, the game limits rewardable driving trips to a specific number (e.g., three) per day. In certain embodiments, systems and/or methods of the present disclosure provide a game for incentivizing a driver, as a player of the game, to drive better, such as for incentivizing improving in the driving characteristics of steering, braking, speeding, and/or focus. In certain embodiments, systems and/or methods of the present disclosure provide a game including social network support, such as one allowing incorporation of one or more friend lists. In some examples, gifts may be sent between players, such as to friends, such as by sending a food truck to deliver said gifts. In some examples, the gift may depend on the rating of a virtual driver. In certain examples, each player is ranked weekly.
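The auto-drive mechanic described above, in which the chance of completing a task corresponds to the levels of the virtual driver's characteristics, may be sketched as follows. The base chance, per-level adjustment, and clamping are illustrative assumptions and not the disclosed computation.

import random

def autodrive_success_chance(driver_levels: dict, task_requirements: dict) -> float:
    # Compare the driver's characteristic levels against the task's required
    # levels; a positive margin raises the chance, a negative margin lowers it.
    margins = [driver_levels.get(k, 0) - req for k, req in task_requirements.items()]
    return max(0.05, min(0.95, 0.5 + 0.1 * sum(margins) / len(margins)))

def run_autodrive_task(driver_levels: dict, task_requirements: dict) -> bool:
    # Resolve the task randomly according to the computed chance.
    return random.random() < autodrive_success_chance(driver_levels, task_requirements)

chance = autodrive_success_chance({"steering": 4, "braking": 3},
                                  {"steering": 3, "braking": 3})
print(f"{chance:.0%}")   # 55%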
In certain embodiments, systems and/or methods of the present disclosure provide a game including a game map that is shared by various players, such as at different instances. In some examples, the game map is a ghost map of a real-world map. For example, with a ghost map, the real-world route driven by a driver who is a player of the game is traversed in the virtual map according to the real-world driving. In certain embodiments, systems and/or methods of the present disclosure provide a game including collaborative modes, such as a collaborative assault mode for multiple players to assault a city together. In certain embodiments, systems and/or methods of the present disclosure provide a game with a large player base, such as one imposed with an age limit and/or without a verification of insurance policy. In certain embodiments, systems and/or methods of the present disclosure provide a game with driving score and/or level tracking over time. In certain embodiments, systems and/or methods of the present disclosure provide a game where the actual driving of a real-world driver interacts with the virtual driving of the virtual driver in the game. Examples of Systems According to Some Embodiments of the Present Disclosure The present embodiments may relate to, inter alia, facilitating virtual operation of virtual vehicles within a virtual environment based on real-world vehicle operation data. The present embodiments may further relate to presenting the virtual operation of the virtual vehicles in a user interface for review by real-life operators of real-life vehicles. According to certain aspects, systems and methods may generate a data model representative of real-life operation of a real-life vehicle by a real-life operator, where the data model may include various performance characteristics and metrics. Additionally, the data model may indicate certain real-life routes, roadways, or the like on which the real-life vehicle has operated, along with the frequency of such operation. The systems and methods may access the data model and, based on the data model, may determine operation of a virtual vehicle within a virtual environment, where the operation may include a set of virtual movements or maneuvers for the virtual vehicle to undertake within the virtual environment. Additionally, the systems and methods may display, in a user interface, a visual representation of the virtual operation of the virtual vehicle for review by the real-life operator. The systems and methods may periodically or continuously update the virtual operation based on updated real-life vehicle operation data. In some scenarios, the real-life operator may recognize certain limitations and areas for improvement in the virtual operation of the virtual vehicle. Because the virtual operation of the virtual vehicle is based on the real-life operation of the real-life vehicle, the real-life operator may be motivated to modify or adjust his/her real-life vehicle operation in order to correct or address the limitations and areas for improvement identified in the virtual operation of the virtual vehicle. For example, the real-life operator may ascertain that he/she travels too fast on a work commute, and may make efforts to reduce his/her speed. The systems and methods therefore offer numerous benefits.
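A minimal, non-limiting sketch of the data-model-to-virtual-operation pipeline described above follows: a record of real-life operation (frequently driven routes plus a few performance metrics) is turned into a list of virtual maneuvers that mirror the operator's habits. The field names, thresholds, and maneuver vocabulary are assumptions made only for illustration.

from dataclasses import dataclass, field
from typing import List

@dataclass
class OperationDataModel:
    routes: List[str] = field(default_factory=list)   # frequently driven routes
    avg_speed_over_limit: float = 0.0                  # average speed over the posted limit
    hard_brakes_per_trip: float = 0.0

def derive_virtual_maneuvers(model: OperationDataModel) -> List[str]:
    # Virtual operation replays the recorded routes and mirrors notable habits
    # so the operator can recognize limitations in the virtual vehicle's behavior.
    maneuvers = [f"drive:{r}" for r in model.routes]
    if model.avg_speed_over_limit > 5:
        maneuvers.append("exceed_speed_limit")
    if model.hard_brakes_per_trip > 2:
        maneuvers.append("brake_hard_at_signals")
    return maneuvers

model = OperationDataModel(routes=["work_commute"], avg_speed_over_limit=8.0)
print(derive_virtual_maneuvers(model))   # ['drive:work_commute', 'exceed_speed_limit']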
In particular, by incorporating virtual vehicle operation that corresponds to real-world vehicle operation, the systems and methods may effectively penetrate the psychological barriers that lead vehicle operators to perceive the risks associated with vehicle operation as low. Accordingly, vehicular safety may improve, thereby increasing the safety of vehicle operators and those otherwise affected by vehicle operation. The embodiments as discussed herein describe virtual vehicle operation and real-life vehicle operation. It should be appreciated that the term “virtual” describes simulated features, components, individuals, and the like, that do not physically exist, have not physically occurred, or are not physically occurring in the real-world environment, but are rather made by software and hardware components to appear to physically exist. Further, it should be appreciated that the term “real-life” or “real-world” (or, in some cases, components without mention of the term “virtual”), in contrast, describes actual features, components, individuals, and the like, that do physically exist, have physically occurred, or are physically occurring in the real-world environment. In some embodiments, the virtual vehicle operation may be at least partially embodied in augmented reality, wherein virtual display data may be overlaid on real-world image data. For example, a vehicle may be an automobile, car, truck, tow truck, snowplow, boat, motorcycle, motorbike, scooter, recreational vehicle, or any other type of vehicle capable of roadway or water travel. According to some examples, the vehicle may be capable of operation by a vehicle operator, and may be capable of at least partial (or total) autonomous operation by a computer via the collection and analysis of various sensor data. In various embodiments, a system of the present disclosure may be permanently or removably installed in a vehicle, and may generally be an on-board computing device capable of performing various functionalities relating to analyzing vehicle operation data and facilitating virtual vehicle operation (and, in some cases, at least partial autonomous vehicle operation). Thus, the system may be configured with particular elements to be able to perform functions relating to these functionalities. Further, the computer may be installed by the manufacturer of the vehicle, or as an aftermarket modification or addition to the vehicle. In various embodiments, a system of the present disclosure may include an electronic device that may be associated with a vehicle, where the electronic device may be any type of electronic device such as a mobile device (e.g., a smartphone), notebook computer, tablet, phablet, GPS (Global Positioning System) or GPS-enabled device, smart watch, smart glasses, smart bracelet, wearable electronic, PDA (personal digital assistants), pager, computing device configured for wireless communication, and/or the like. The electronic device may include a location module (e.g., a GPS chip), an image sensor, an accelerometer, a clock, a gyroscope, a compass, a yaw rate sensor, a tilt sensor, and/or other sensors. In some examples, an electronic device may belong to or be otherwise associated with an individual, where the individual may be an operator of the vehicle or otherwise associated with the vehicle. For example, the individual may own the vehicle, may rent the vehicle for a variable or allotted time period, or may operate the vehicle as part of a ride share.
According to embodiments, the individual may carry or otherwise have possession of the electronic device during operation of the vehicle. In various embodiments, a computer may operate in conjunction with an electronic device to perform any or all of the functions described herein as being performed by the vehicle. In other embodiments, the computer may perform all of the functionalities described herein, in which case the electronic device may not be present or may not be connected to the computer. In still other embodiments, the electronic device may perform all of the functionalities described herein. Still further, in some embodiments, the computer and/or the electronic device may perform any or all of the functions described herein in conjunction with one or more of the back-end components. For example, in some embodiments or under certain conditions, the electronic device and/or the computer may function as client devices that outsource some or most of the processing to one or more of the back-end components. In various examples, a computer and/or an electronic device may communicatively interface with one or more on-board sensors that are disposed on or within a vehicle and that may be utilized to monitor the vehicle and the environment in which the vehicle is operating. In particular, the one or more on-board sensors may sense conditions associated with the vehicle and/or associated with the environment in which the vehicle is operating, and may generate sensor data indicative of the sensed conditions. For example, the sensor data may include location data and/or operation data indicative of operation of the vehicle. In some configurations, at least some of the on-board sensors may be fixedly disposed at various locations on the vehicle. Additionally or alternatively, at least some of the on-board sensors may be incorporated within or connected to the computer. Still additionally or alternatively, in some configurations, at least some of the on-board sensors may be included on or within the electronic device. In some examples, the on-board sensors may communicate respective sensor data to the computer and/or to the electronic device, and the sensor data may be processed using the computer and/or the electronic device to determine when the vehicle is in operation as well as determine information regarding operation of the vehicle. In some situations, the on-board sensors may communicate respective sensor data indicative of the environment in which the vehicle is operating. According to embodiments, the sensors may include one or more of a GPS unit, a radar unit, a LIDAR unit, an ultrasonic sensor, an infrared sensor, some other type of electromagnetic energy sensor, a microphone, a radio (e.g., to support wireless emergency alerts or an emergency alert system), an inductance sensor, a camera, an accelerometer, an odometer, a system clock, a gyroscope, a compass, a geo-location or geo-positioning unit, a location tracking sensor, a proximity sensor, a tachometer, a speedometer, and/or the like. Some of the on-board sensors (e.g., GPS, accelerometer, or tachometer units) may provide sensor data indicative of, for example, the vehicle's location, speed, position, speeding, direction, responsiveness to controls, movement, etc.
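As a non-limiting illustration of processing the sensor data described above to determine when the vehicle is in operation, the following sketch treats the vehicle as operating once the reported speed stays above a small threshold for several consecutive samples; the thresholds and the function name are assumptions made for illustration only.

# Non-limiting sketch of trip detection from a stream of speed readings.
def is_in_operation(speed_samples_mph: list, threshold: float = 5.0,
                    min_consecutive: int = 3) -> bool:
    consecutive = 0
    for speed in speed_samples_mph:
        # Count consecutive samples above the threshold; reset otherwise.
        consecutive = consecutive + 1 if speed > threshold else 0
        if consecutive >= min_consecutive:
            return True
    return False

print(is_in_operation([0.0, 2.0, 7.5, 12.0, 15.0]))  # True: vehicle is in operation
print(is_in_operation([0.0, 1.0, 0.0, 3.0]))         # False: vehicle remains parked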
Other sensors may be directed to the interior or passenger compartment of the vehicle, such as cameras, microphones, pressure sensors, weight sensors, thermometers, or similar sensors to monitor any passengers, operations of instruments included in the vehicle, operational behaviors of the vehicle, and/or conditions within the vehicle. For example, on-board sensors directed to the interior of the vehicle may provide sensor data indicative of, for example, in-cabin temperatures, in-cabin noise levels, data from seat sensors (e.g., indicative of whether or not an individual is using a seat, and thus the number of passengers being transported by the vehicle), data from seat belt sensors, data regarding the operations of user controlled devices such as windshield wipers, defrosters, traction control, mirror adjustment, interactions with on-board user interfaces, etc. Additionally, the on-board sensors may further detect and monitor the health of the occupant(s) of the vehicle (e.g., blood pressure, heart rate, blood sugar, temperature, etc.). Moreover, the on-board sensors may additionally detect various criminal acts, including auto thefts, car jackings, and/or the like. In these scenarios, the vehicle may initiate communications to relevant responders (e.g., a police station) of the detected act(s). Some of the sensors disposed at the vehicle (e.g., radar, LIDAR, camera, or other types of units that operate by using electromagnetic energy) may actively or passively scan the environment external to the vehicle for obstacles (e.g., emergency vehicles, other vehicles, buildings, pedestrians, trees, gates, barriers, animals, etc.) and their movement, weather conditions (e.g., precipitation, wind, visibility, or temperature), roadways, road conditions (e.g., lane markings, potholes, road material, traction, or slope), road topography, traffic conditions (e.g., traffic density, traffic congestion, etc.), signs or signals (e.g., traffic signals, speed limits, other jurisdictional signage, construction signs, building signs or numbers, or control gates), and/or other information indicative of the environment of the vehicle. Information or data that is generated or received by the on-board sensors may be communicated to the computer and/or to the electronic device. In some embodiments, systems of the present disclosure may include or be communicatively connected to one or more data storage devices or entities, which may be adapted to store data related to the operation of the vehicle, the environment and context in which the vehicle is operating, and/or other information. For example, the one or more data storage devices may be implemented as a data bank or a cloud data storage system, at least a portion of which may be locally accessed by systems of the present disclosure using a local access mechanism such as a function call or database access mechanism, and/or at least a portion of which may be remotely accessed by the systems of the present disclosure using a remote access mechanism such as a communication protocol. The systems of the present disclosure may access data stored in the one or more data storage devices when executing various functions and tasks associated with the present disclosure. 
In various embodiments, systems of the present disclosure may further include a set of third-party sources, which may be any system, entity, repository, or the like, capable of obtaining and storing data that may be indicative of situations and circumstances associated with vehicle operation, or data associated with the operator of a vehicle. For example, one of the third-party sources may be a social network provider storing a set of contacts or connections associated with the operator of the vehicle. In some examples, the set of third-party sources may be included as part of the one or more data storage devices. In embodiments, the third-party source(s) may store data indicative of vehicle operation regulations. For example, the third-party source may store speed limit information, direction of travel information, lane information, and/or similar information. The third-party source(s) may also maintain or obtain real-time data indicative of traffic signals for roadways (e.g., which traffic signals currently have red lights or green lights). It should be appreciated that the one or more data storage devices or entities may additionally or alternatively store the data indicative of vehicle operation regulations. In some embodiments, systems of the present disclosure include a communication component configured to transmit information to and receive information from other external sources, such as emergency vehicles, other vehicles, and/or infrastructure or environmental components disposed within the environment of the vehicle. The communication component may include one or more wireless transmitters or transceivers operating at any desired or suitable frequency or frequencies. In some embodiments, the systems of the present disclosure may include one or more environmental communication components or devices that may be used for monitoring the status of one or more system components and/or for receiving data generated by other sensors that may be associated with, or may detect or be detected by, the vehicle and disposed at locations that are off-board the vehicle. As generally referred to herein, with respect to a vehicle, "off-board sensors" or "environmental sensors" are sensors that are not transported by the vehicle. The data collected by the off-board sensors is generally referred to herein as "sensor data," "off-board sensor data," or "environmental sensor data" with respect to the vehicle. At least some of the off-board sensors may be disposed on or at the one or more infrastructure components or other types of components that are fixedly disposed within the environment in which a vehicle is traveling. In some examples, infrastructure components may include roadways, bridges, traffic signals, gates, switches, crossings, parking lots or garages, toll booths, docks, hangars, or other similar physical portions of a transportation system's infrastructure, for example. Other types of infrastructure components at which off-board sensors may be disposed may include a traffic light, a street sign, a railroad crossing signal, a construction notification sign, a roadside display configured to display messages, a billboard display, a parking garage monitoring device, etc.
Off-board sensors that are disposed on or near infrastructure components may generate data relating to the presence and location of obstacles or of the infrastructure component itself, weather conditions, traffic conditions, operating status of the infrastructure component, and/or behaviors of various vehicles, pedestrians, and/or other moving objects within the vicinity of the infrastructure component, for example. In some embodiments, one or more environmental communication devices may be communicatively connected (either directly or indirectly) to one or more off-board sensors, and thereby may receive information relating to the condition and/or location of the infrastructure components, of the environment surrounding the infrastructure components, and/or of the other vehicle(s) or objects within the environment of the vehicle. In some examples, the one or more environmental communication devices may receive information from and/or transmit information to the vehicle. According to some embodiments, a computer and/or an electronic device may retrieve or otherwise access data from any combination of the sensors where the data is generated during real-world operation of the vehicle by the operator. The computer and/or the electronic device may generate a data model that is representative of the real-world operation of the vehicle by the operator, where the data model may include data related to performance characteristics associated with the real-world operation. Additionally, the computer and/or the electronic device may facilitate virtual operation of a virtual vehicle by a virtual operator within a virtual environment. In particular, the virtual operation may be based on the data model representative of the real-world operation of the vehicle. According to embodiments, either or both of the computer and the electronic device may be configured with a user interface to present or display content. The computer and/or the electronic device may cause the user interface(s) to display or present the virtual environment, and depict the virtual operation of the virtual vehicle by the virtual operator within the virtual environment. Additionally, the user interface(s) may present statistics, data, and other information associated with the virtual operation of the virtual vehicle for review by the operator of the vehicle. In various embodiments, a system of the present disclosure includes a memory, a set of sensors, a processor, a user interface, and a server (such as a server associated with a remote computing system). According to some examples, the processor and the user interface may be embodied within an electronic device associated with a vehicle, and the set of sensors may be disposed on, throughout, or within various portions of the vehicle. In various embodiments, a system of the present disclosure is configured to generate, via the set of sensors, a set of vehicle operation data that reflects operation of the vehicle by the operator. In some examples, the set of sensors may generate the set of vehicle operation data continuously or over the course of one or more time periods. In some examples, the set of sensors may provide the set of vehicle operation data to the processor, such as in real-time or near-real-time as the set of sensors generates the set of vehicle operation data.
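Purely for illustration, the following sketch traces that high-level data flow with stubbed-in functions: the set of sensors feeds vehicle operation data to the processor, the processor builds a data model and stores it in memory, and a virtual trip driven by that model is rendered on the user interface. The function names and the shape of the data model are assumptions, not limitations.

```python
# A high-level, non-limiting sketch of the sensor-to-virtual-trip pipeline.
from typing import Callable, Dict, List

SensorSample = Dict[str, float]


def run_trip_pipeline(read_samples: Callable[[], List[SensorSample]],
                      build_data_model: Callable[[List[SensorSample]], Dict[str, float]],
                      memory: Dict[str, Dict[str, float]],
                      render_virtual_trip: Callable[[Dict[str, float]], None]) -> None:
    samples = read_samples()                 # set of vehicle operation data from the sensors
    data_model = build_data_model(samples)   # performance characteristics of the operator
    memory["data_model"] = data_model        # persist the model for later virtual trips
    render_virtual_trip(data_model)          # depict virtual operation on the user interface


if __name__ == "__main__":
    run_trip_pipeline(
        read_samples=lambda: [{"speed_mph": 62.0, "accel_mps2": -1.2}],
        build_data_model=lambda s: {"speeding": 8.0, "braking": 6.5, "steering": 7.0},
        memory={},
        render_virtual_trip=lambda m: print("virtual trip driven by model:", m),
    )
```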
In various embodiments, the processor may generate a data model based at least in part upon a portion of the set of vehicle operation data, where the data model may generally represent operation of the vehicle by the operator. In an embodiment, the data model may reflect the following vehicle operation or performance characteristics: speeding, braking, and/or steering, where each characteristic may have a relative performance associated therewith. For example, each vehicle operation characteristic in the data model may have a number rating on a scale from one (1) to ten (10). In various embodiments, the processor may generate the data model according to various data analysis techniques, calculations, algorithms, and/or the like. Generally, the data analysis techniques process and analyze the raw sensor data and generate a set of information (e.g., structured information) from which vehicle operation metrics may be identified or determined. For example, the processor may process raw angular and linear speeding data, and may generate, for the data model, metrics corresponding to the speeding, braking, and steering performance of the vehicle operator. After generating the data model, the processor may provide the data model to the memory. Subsequently, the memory may store the data model. In some embodiments, the processor may initiate a virtual trip, such as in response to a selection by a user (e.g., the operator of the vehicle), in response to an occurrence of a condition, or automatically at a certain time. According to embodiments, the virtual trip may have an associated virtual vehicle that is operated by a virtual operator within a virtual environment. In association with initiating the virtual trip, the processor may cause the user interface to display certain visual content associated with the virtual trip. For example, the user interface may display an indication of the virtual vehicle on a virtual map, and/or other content. Certain aspects of the virtual trip may be selectable or configurable by the operator, such as via the user interface, as further discussed herein. For example, the operator may select different virtual operators to “train” or accumulate statistics using the data model. In some embodiments, prior to, after, or concurrently with initiating the virtual trip, the processor may retrieve the data model from the memory. In an optional implementation, the processor may additionally retrieve additional data (e.g., social networking data) from the server. According to embodiments, the social network data may be based on one or more contacts of the operator, where the one or more contacts may have one or more associated additional virtual vehicles with one or more additional virtual vehicle operators. Virtual operation of the one or more additional virtual vehicles may be based on one or more additional data models associated with real-life vehicle operation by the one or more contacts of the operator. According to embodiments, the virtual trip associated with the virtual operator may reflect at least some of the virtual operation of the one or more additional virtual vehicles, as further discussed herein. In various embodiments, after retrieving the data model, the processor may determine, based on at least part of the data model, a set of virtual vehicle movements for the virtual vehicle. 
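As one hedged example of such an analysis, the sketch below derives 1-to-10 ratings for the speeding, braking, and steering characteristics from raw samples. The event thresholds and the events-per-mile-to-rating mapping are simplified assumptions chosen only to make the example concrete; they are not required by the embodiments described above.

```python
# Illustrative reduction of raw telematics samples to a data model with
# 1..10 ratings for speeding, braking, and steering characteristics.
from dataclasses import dataclass
from typing import Sequence


@dataclass
class DataModel:
    speeding_rating: float   # 1.0 (poor) .. 10.0 (excellent)
    braking_rating: float
    steering_rating: float


def _to_rating(event_rate: float, worst_rate: float) -> float:
    """Map an events-per-mile rate onto a 1..10 scale (fewer events = higher rating)."""
    rate = min(event_rate, worst_rate)
    return round(10.0 - 9.0 * (rate / worst_rate), 1)


def build_data_model(speeds_mph: Sequence[float],
                     accels_mps2: Sequence[float],
                     steering_rates_dps: Sequence[float],
                     miles_driven: float,
                     speed_limit_mph: float = 65.0) -> DataModel:
    # Count "hard" events; the thresholds below are assumptions, not requirements.
    speeding_events = sum(1 for s in speeds_mph if s > speed_limit_mph + 5)
    hard_brakes = sum(1 for a in accels_mps2 if a < -3.0)
    sharp_turns = sum(1 for r in steering_rates_dps if abs(r) > 45.0)
    return DataModel(
        speeding_rating=_to_rating(speeding_events / miles_driven, 2.0),
        braking_rating=_to_rating(hard_brakes / miles_driven, 1.0),
        steering_rating=_to_rating(sharp_turns / miles_driven, 1.0),
    )


if __name__ == "__main__":
    model = build_data_model([62, 64, 73, 61], [-1.0, -3.5, -0.5, -2.0],
                             [10, 50, 12, 8], miles_driven=5.0)
    print(model)
```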
Generally, the set of virtual vehicle movements may reflect the vehicle operation characteristics included in the data model, where the relative performance level(s) of the set of virtual vehicle movements may correspond to the relative performance level(s) of the vehicle operation characteristics. For example, if the data model reflects that the operator has a score of 8.5 out of 10.0 in the speeding characteristic in real-life vehicle operation, the corresponding virtual vehicle operator may also have a score of 8.5 out of 10.0 in a virtual speeding characteristic, for which the set of virtual vehicle movements may account (i.e., the speeding of the virtual vehicle is very good). In an additional example, if the data model reflects that the operator has a score of 3.0 out of 10.0 in the steering characteristic in real-life vehicle operation, the corresponding virtual vehicle operator may also have a score of 3.0 out of 10.0 in a virtual steering characteristic, for which the set of virtual vehicle movements may account (i.e., the steering of the virtual vehicle is not good). According to some embodiments, the set of virtual vehicle movements may be associated with one or more vignettes or scenes that may be incorporated into or associated with the virtual environment. Generally, a vignette may be a virtual recreation of an encounter or driving event that may occur in real life. For example, a vignette may be a virtual vehicle's interaction with a pedestrian walkway (i.e., the approach to, stopping at, and speeding from the pedestrian walkway); another vignette may be a virtual vehicle's approach to and right-hand turn through a red light; and another vignette may be a virtual vehicle's switching lanes in traffic. It should be appreciated that additional vignettes are envisioned. In some embodiments, the processor may determine a set of virtual vehicle movements in association with a given vignette based on a relevant portion of the data model. For example, for a pedestrian crosswalk vignette, if the data model indicates that the operator is prone to sudden stopping, a virtual vehicle movement may be a sudden stop by the virtual vehicle upon approach to the pedestrian crosswalk. As another example, for a right-hand turn through a red light vignette, if the data model indicates that the operator comes to a full stop at red lights prior to a right-hand turn, a virtual vehicle movement may similarly be a full stop by the virtual vehicle upon approach to the red light prior to turning right. According to alternative or additional embodiments, the set of virtual vehicle movements may be associated with a game or challenge that may be incorporated into or associated with the virtual environment. Generally, a game may have a set of goals or challenges to be carried out by a virtual vehicle within the virtual environment. For example, a game may be a simulated delivery of one or more products or goods from a first virtual location to a second virtual location; and another game may be a ride share simulation facilitated by the virtual vehicle from a first virtual location to a second virtual location. It should be appreciated that additional games are envisioned. In some embodiments, the processor may determine a set of virtual vehicle movements associated with a given game based on a relevant portion of the data model.
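The sketch below illustrates, under the same assumed 1-to-10 scale, how a pedestrian-crosswalk vignette's virtual vehicle movements might be selected from the data model. The thresholds and movement labels are hypothetical, and the DataModel type is redefined here only so the example stands alone.

```python
# Illustrative selection of virtual vehicle movements for a crosswalk vignette
# from the operator's data model ratings.
from dataclasses import dataclass
from typing import List


@dataclass
class DataModel:
    speeding_rating: float
    braking_rating: float
    steering_rating: float


def crosswalk_vignette_movements(model: DataModel) -> List[str]:
    movements = ["approach_crosswalk"]
    if model.braking_rating >= 7.0:
        movements.append("smooth_gradual_stop")        # mirrors good real-life braking
    elif model.braking_rating >= 4.0:
        movements.append("late_but_controlled_stop")
    else:
        movements.append("sudden_stop")                 # mirrors a driver prone to hard stops
    movements.append("yield_to_pedestrian")
    if model.speeding_rating >= 7.0:
        movements.append("gentle_pull_away")
    else:
        movements.append("rapid_pull_away")
    return movements


if __name__ == "__main__":
    print(crosswalk_vignette_movements(DataModel(3.0, 8.5, 6.0)))
```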
For example, for a delivery game, if the data model indicates that the operator is prone to sudden speeding, a virtual vehicle movement may be a sudden speeding by the virtual vehicle upon initiating a delivery from a first location. As another example, for a ride sharing simulation with the virtual vehicle transporting a virtual passenger, if the data model indicates that the operator is prone to sudden stops, a virtual vehicle movement may be a sudden stop by the virtual vehicle approaching a stop sign. In certain embodiments, after determining the set of virtual vehicle movements, the processor may provide data indicative of the set of virtual vehicle movements to the user interface. In turn, the user interface may display the set of virtual vehicle movements in association with the virtual trip. In some examples, the user interface may periodically or continuously display and update the virtual trip according to the determined set of virtual vehicle movements. In embodiments, the operator of the vehicle may view the virtual trip displayed by the user interface, as well as any vignettes or games included therein. By viewing the virtual trip, the operator may be inclined or motivated to adjust real-world vehicle operating behavior, especially to improve aspects or areas that may need improvement. For example, if the operator notices that the virtual operator is prone to sudden or hectic lane changes, the operator may make an effort to execute smoother real-life lane changes. As an additional example, if the operator notices that the virtual operator speeds through virtual school zones, the operator may make an effort to slow down through real-life school zones. In some embodiments, if the virtual trip is associated with a game or challenge, the processor may determine a virtual reward based on the virtual operation of the virtual vehicle in association with the virtual trip, such as by determining that the virtual vehicle has achieved a virtual goal within the virtual environment (where the virtual reward may correspond to the virtual goal). Additionally, the processor may apply the virtual reward to an account of the operator. In various embodiments, the user interface may display, such as upon completion of a real-world trip, a virtual trip summary of an associated virtual trip, where the virtual trip summary may contain scores, points, achievements, or the like, which may be associated with any vignettes or games included in the virtual trip. Additionally or alternatively, the virtual trip summary may contain ratings for certain virtual vehicle operation characteristics for the corresponding virtual driver, which may correspond to the vehicle operation characteristics for the operator included in the data model. Accordingly, the operator may review the virtual trip summary and be motivated to modify or improve any real-world driving behaviors in response to reviewing the virtual trip summary.
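As a non-limiting sketch of the reward and summary flow, the example below credits reward points to an operator account when virtual delivery goals are achieved and assembles a virtual trip summary for display on the user interface. The point values and field names are assumptions made only for the example.

```python
# Illustrative virtual reward determination and trip summary assembly.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class OperatorAccount:
    reward_points: int = 0


@dataclass
class VirtualTripSummary:
    goals_achieved: int
    reward_points_earned: int
    virtual_ratings: Dict[str, float] = field(default_factory=dict)


def complete_delivery_game(account: OperatorAccount,
                           deliveries_completed: int,
                           virtual_ratings: Dict[str, float],
                           points_per_delivery: int = 50) -> VirtualTripSummary:
    earned = deliveries_completed * points_per_delivery
    account.reward_points += earned          # apply the virtual reward to the operator account
    return VirtualTripSummary(deliveries_completed, earned, dict(virtual_ratings))


if __name__ == "__main__":
    acct = OperatorAccount()
    summary = complete_delivery_game(acct, 3, {"speeding": 8.5, "braking": 6.0, "steering": 3.0})
    print(summary, acct)
```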
Examples of Various Embodiments of the Present Disclosure According to various embodiments, a computer-implemented method for updating a character profile of a virtual character of a telematics-based game, the method comprising: generating, based at least in part upon a character profile of a virtual character, one or more virtual occurrences to be encountered by the virtual character; determining, based at least in part upon a plurality of virtual ratings of the virtual character, one or more outcomes associated with the one or more virtual occurrences; generating a virtual trip including the one or more virtual occurrences with the associated one or more outcomes; determining, based at least in part upon the one or more outcomes, a trip success prediction of the virtual character completing the virtual trip; determining, based at least in part upon the one or more outcomes, a predicted change in vehicle condition of a virtual vehicle, the predicted change in vehicle condition being indicative of a degree of damage to be sustained by the virtual vehicle during the virtual trip; presenting the trip success prediction, the predicted change in vehicle condition, a first user-selectable command, and a second user-selectable command to the user; upon receiving the user's selection of the first user-selectable command, updating the character profile by at least initiating the virtual trip with the virtual character; upon receiving the user's selection of the second user-selectable command: updating the one or more outcomes according to a predetermined adjustment; and updating the character profile by at least initiating the virtual trip with the virtual character based on the updated one or more outcomes; and presenting the updated character profile to the user. In some examples, the method is implemented according to method 200 of FIG. 2, and/or method 400 of FIG. 4, and/or configured to be implemented by system 100 of FIG. 1, system 300 of FIG. 3, device 5000 of FIG. 5, and/or system 7000 of FIG. 6. In some embodiments, each virtual occurrence of the one or more virtual occurrences includes one or more virtual obstacles to be encountered by the virtual character. In some embodiments, each outcome of the one or more outcomes corresponds to a likelihood of success of the virtual character overcoming the one or more virtual obstacles in each virtual occurrence of the one or more virtual occurrences. In some embodiments, the virtual character has a plurality of virtual skills including a virtual steering skill, a virtual braking skill, a virtual speeding skill, and a virtual focus skill. In some embodiments, each virtual occurrence of the one or more virtual occurrences includes a steering difficulty corresponding to one or more virtual steering obstacles, a braking difficulty corresponding to one or more virtual braking obstacles, a speeding difficulty corresponding to one or more virtual speeding obstacles, and/or a focus difficulty corresponding to one or more virtual focus obstacles. In some embodiments, determining the one or more outcomes includes determining the one or more outcomes based at least in part upon the steering difficulty, the braking difficulty, the speeding difficulty, the focus difficulty, a virtual steering rating of the virtual steering skill, a virtual braking rating of the virtual braking skill, a virtual speeding rating of the virtual speeding skill, and/or a virtual focus rating of the virtual focus skill.
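One simplified way such outcomes, a trip success prediction, and a predicted change in vehicle condition could be computed is sketched below. The 1-to-10 scales, the margin-based likelihood formula, and the damage constant are assumptions made for illustration only and do not limit the method described above.

```python
# Illustrative computation of per-occurrence outcomes, a trip success
# prediction, and a predicted change in virtual vehicle condition.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class VirtualOccurrence:
    difficulties: Dict[str, float]   # e.g. {"steering": 7.0, "braking": 4.0}


def occurrence_outcome(ratings: Dict[str, float], occ: VirtualOccurrence) -> float:
    """Likelihood (0..1) of the virtual character overcoming the occurrence's obstacles."""
    margins = [ratings.get(skill, 5.0) - diff for skill, diff in occ.difficulties.items()]
    avg_margin = sum(margins) / len(margins)                 # roughly -9 .. +9
    return max(0.05, min(0.95, 0.5 + avg_margin / 18.0))


def trip_success_prediction(outcomes: List[float]) -> float:
    prob = 1.0
    for o in outcomes:
        prob *= o                    # trip succeeds only if every occurrence is overcome
    return prob


def predicted_condition_change(outcomes: List[float], damage_per_failure: float = 12.5) -> float:
    """Expected damage (in condition points) accumulated over the virtual trip."""
    return sum((1.0 - o) * damage_per_failure for o in outcomes)


if __name__ == "__main__":
    ratings = {"steering": 6.5, "braking": 8.0, "speeding": 5.0, "focus": 7.0}
    occurrences = [VirtualOccurrence({"steering": 7.0, "braking": 4.0}),
                   VirtualOccurrence({"speeding": 6.0, "focus": 5.0})]
    outcomes = [occurrence_outcome(ratings, o) for o in occurrences]
    print(outcomes, trip_success_prediction(outcomes), predicted_condition_change(outcomes))
```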
In some embodiments, generating the one or more virtual occurrences includes generating the one or more virtual occurrences based further in part upon one or more unlocked regions of a virtual map of the telematics-based game. In some embodiments, updating the character profile includes updating a vehicle condition of the virtual vehicle based on the predicted change in vehicle condition. In some embodiments, presenting the updated character profile includes presenting the updated vehicle condition of the virtual vehicle. According to various embodiments, a system for updating a character profile of a virtual character of a telematics-based game, the system comprising: a virtual occurrence generating module configured to generate, based at least in part upon a character profile of a virtual character, one or more virtual occurrences to be encountered by the virtual character; an outcome determining module configured to determine, based at least in part upon a plurality of virtual ratings of the virtual character, one or more outcomes associated with the one or more virtual occurrences; a virtual trip generating module configured to generate a virtual trip including the one or more virtual occurrences with the associated one or more outcomes; a trip success prediction module configured to determine, based at least in part upon the one or more outcomes, a trip success prediction of the virtual character completing the virtual trip; a vehicle condition module configured to determine, based at least in part upon the one or more outcomes, a predicted change in vehicle condition, the predicted change in vehicle condition being indicative of a degree of damage to be sustained by the virtual vehicle during the virtual trip; a presenting module configured to present the trip success prediction, the predicted change in vehicle condition, a first user-selectable command, and a second user-selectable command to the user; and a character profile updating module configured to update, upon receiving the user's selection of the first user-selectable command, the character profile by at least initiating the virtual trip with the virtual character; wherein the outcome determining module is further configured to update, upon receiving the user's selection of the second user-selectable command, the one or more outcomes according to a predetermined adjustment; wherein the character profile updating module is further configured to update, upon receiving the updated one or more outcomes, the character profile by at least initiating the virtual trip with the virtual character based on the updated one or more outcomes; and wherein the presenting module is further configured to present the updated character profile to the user. In some examples, the system is configured according to system 100 of FIG. 1, system 300 of FIG. 3, device 5000 of FIG. 5, and/or system 7000 of FIG. 6, and/or configured to perform method 200 of FIG. 2, and/or method 400 of FIG. 4. In some embodiments, the outcome determining module is configured to determine the one or more outcomes based at least in part upon the steering difficulty, the braking difficulty, the speeding difficulty, the focus difficulty, a virtual steering rating of the virtual steering skill, a virtual braking rating of the virtual braking skill, a virtual speeding rating of the virtual speeding skill, and/or a virtual focus rating of the virtual focus skill.
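Purely as an illustration of how such modules might be composed in software, the sketch below wires minimal placeholder modules together. The class names mirror the description above, while their internals are hypothetical stubs rather than any required implementation.

```python
# Illustrative module wiring: placeholder occurrence generation, outcome
# determination with a predetermined adjustment path, and profile updating.
class VirtualOccurrenceGeneratingModule:
    def generate(self, character_profile):
        return [{"steering": 6.0}, {"braking": 5.0}]    # placeholder occurrences


class OutcomeDeterminingModule:
    def determine(self, virtual_ratings, occurrences):
        return [0.7 for _ in occurrences]                # placeholder outcomes

    def adjust(self, outcomes, adjustment=0.1):
        # second user-selectable command path: apply the predetermined adjustment
        return [min(0.95, o + adjustment) for o in outcomes]


class CharacterProfileUpdatingModule:
    def update(self, profile, outcomes):
        profile["trips_taken"] = profile.get("trips_taken", 0) + 1
        profile["last_outcomes"] = outcomes
        return profile


if __name__ == "__main__":
    profile = {"name": "virtual_driver"}
    occs = VirtualOccurrenceGeneratingModule().generate(profile)
    outcome_mod = OutcomeDeterminingModule()
    outcomes = outcome_mod.determine({"steering": 6.5}, occs)
    print(CharacterProfileUpdatingModule().update(profile, outcome_mod.adjust(outcomes)))
```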
In some embodiments, the virtual occurrence generating module is configured to generate the one or more virtual occurrences based further in part upon one or more unlocked regions of a virtual map of the telematics-based game. In some embodiments, the character profile updating module is configured to update a vehicle condition of the virtual vehicle based on the predicted change in vehicle condition. In some embodiments, the presenting module is further configured to present the updated vehicle condition of the virtual vehicle. In various embodiments, a non-transitory computer-readable medium with instructions stored thereon, that upon execution by a processor, causes the processor to perform: generating, based at least in part upon a character profile of a virtual character, one or more virtual occurrences to be encountered by the virtual character; determining, based at least in part upon a plurality of virtual ratings of the virtual character, one or more outcomes associated with the one or more virtual occurrences; generating a virtual trip including the one or more virtual occurrences with the associated one or more outcomes; determining, based at least in part upon the one or more outcomes, a trip success prediction of the virtual character completing the virtual trip; determining, based at least in part upon the one or more outcomes, a predicted change in vehicle condition of a virtual vehicle, the predicted change in vehicle condition being indicative of a degree of damage to be sustained by the virtual vehicle during the virtual trip; presenting the trip success prediction, the predicted change in vehicle condition, a first user-selectable command, and a second user-selectable command to the user; upon receiving the user's selection of the first user-selectable command, updating the character profile by at least initiating the virtual trip with the virtual character; upon receiving the user's selection of the second user-selectable command: updating the one or more outcomes according to a predetermined adjustment; and updating the character profile by at least initiating the virtual trip with the virtual character based on the updated one or more outcomes; and presenting the updated character profile to the user. In some examples, the non-transitory computer-readable medium, upon execution by a processor associated with system 100 of FIG. 1, system 300 of FIG. 3, device 5000 of FIG. 5, and/or system 7000 of FIG. 6, causes the corresponding system to perform method 200 of FIG. 2, and/or method 400 of FIG. 4.
Examples of Some Embodiments of the Present Disclosure According to various embodiments, a computer-implemented method for training a virtual character of a telematics-based game includes: receiving telematics data associated with one or more real trips during which a user operated a real vehicle; determining, based at least in part upon the telematics data, a plurality of skill points associated with a plurality of real skills exhibited by the user during the one or more real trips; receiving, from the user, a selection of a virtual character, the virtual character having a character profile and a plurality of virtual ratings associated with a plurality of virtual skills; training the virtual character by at least updating, based at least in part upon the plurality of skill points, the plurality of virtual ratings; generating, based at least in part upon the character profile, one or more virtual occurrences to be encountered by the virtual character; determining, based at least in part upon the updated plurality of virtual ratings, one or more outcomes associated with the one or more virtual occurrences; updating the character profile by at least applying the one or more virtual occurrences based on the associated one or more outcomes to the virtual character; and presenting the updated character profile to the user. In some embodiments, each virtual occurrence of the one or more virtual occurrences includes one or more virtual obstacles to be encountered by the virtual character. In some embodiments, each outcome of the one or more outcomes correspond to a likelihood of success of the virtual character overcoming the one or more virtual obstacles in each virtual occurrence of the one or more virtual occurrences. In some embodiments, the plurality of real skills includes a real steering skill, a real braking skill, a real speeding skill, and/or a real focus skill. In some examples, the plurality of virtual skills includes a virtual steering skill, a virtual braking skill, a virtual speeding skill, and/or a virtual focus skill. In some embodiments, each virtual occurrence of the one or more virtual occurrences includes a steering difficulty corresponding to one or more virtual steering obstacles, a braking difficulty corresponding to one or more virtual braking obstacles, a speeding difficulty corresponding to one or more virtual speeding obstacles, and/or a focus difficulty corresponding to one or more virtual focus obstacles. In some embodiments, determining the one or more outcomes includes determining the one or more outcomes based at least in part upon the steering difficulty, the braking difficulty, the speeding difficulty, the focus difficulty, a virtual steering rating of the virtual steering skill, a virtual braking rating of the virtual braking skill, a virtual speeding rating of the virtual speeding skill, and/or a virtual focus rating of the virtual focus skill. In some embodiments, generating the one or more virtual occurrences includes generating the one or more virtual occurrences based further in part upon one or more unlocked regions of a virtual map of the telematics-based game. In some embodiments, updating the character profile includes updating a vehicle condition of a virtual vehicle associated with the virtual character. In some examples, the vehicle condition is indicative of a degree of damage sustained by the virtual vehicle during the one or more virtual occurrences based on the associated one or more outcomes. 
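A minimal sketch of that training flow, assuming simple per-trip heuristics for awarding skill points and a fixed points-to-rating conversion, is shown below. None of the thresholds or names are required by the embodiments; they are assumptions for the example.

```python
# Illustrative derivation of skill points from a completed real trip and
# use of those points to train a virtual character's ratings.
from typing import Dict, List


def skill_points_from_trip(speeds_mph: List[float],
                           accels_mps2: List[float],
                           speed_limit_mph: float = 65.0) -> Dict[str, int]:
    points = {"speeding": 0, "braking": 0}
    if all(s <= speed_limit_mph + 5 for s in speeds_mph):
        points["speeding"] += 10          # stayed near the limit for the whole trip
    if all(a >= -3.0 for a in accels_mps2):
        points["braking"] += 10           # no hard braking events
    return points


def train_virtual_ratings(virtual_ratings: Dict[str, float],
                          skill_points: Dict[str, int],
                          points_per_rating_step: int = 100) -> Dict[str, float]:
    updated = dict(virtual_ratings)
    for skill, pts in skill_points.items():
        updated[skill] = min(10.0, updated.get(skill, 1.0) + pts / points_per_rating_step)
    return updated


if __name__ == "__main__":
    pts = skill_points_from_trip([60, 63, 64], [-1.2, -2.0, -0.4])
    print(pts, train_virtual_ratings({"speeding": 5.0, "braking": 5.0}, pts))
```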
In some embodiments, presenting the updated character profile includes presenting the updated vehicle condition of the virtual vehicle to the user. In some embodiments, training the virtual character includes: updating, based at least in part upon the plurality of skill points, a plurality of fill-levels corresponding to the plurality of virtual skills; and increasing one or more virtual ratings of the plurality of virtual ratings upon any of the fill-levels of the plurality of fill-levels exceeding one or more predetermined fill targets. In various embodiments, a system for training a virtual character of a telematics-based game includes: a data receiving module configured to receive telematics data associated with one or more real trips during which a user operated a real vehicle; a skill point determining module configured to determine, based at least in part upon the telematics data, a plurality of skill points associated with a plurality of real skills exhibited by the user during the one or more real trips; a user input module configured to receive, from the user, a selection of a virtual character, the virtual character having a character profile and a plurality of virtual ratings associated with a plurality of virtual skills; a character training module configured to train the virtual character by at least updating, based at least in part upon the plurality of skill points, the plurality of virtual ratings; a virtual occurrence generating module configured to generate, based at least in part upon the character profile, one or more virtual occurrences to be encountered by the virtual character; an outcome determining module configured to determine, based at least in part upon the updated plurality of virtual ratings, one or more outcomes associated with the one or more virtual occurrences; a character profile updating module configured to update the character profile by at least applying the one or more virtual occurrences based on the associated one or more outcomes to the virtual character; and a presenting module configured to present the updated character profile to the user. In some embodiments, the outcome determining module is configured to determine the one or more outcomes based at least in part upon: a plurality of occurrence difficulties including a steering difficulty associated with one or more virtual steering obstacles, a braking difficulty associated with one or more virtual braking obstacles, a speeding difficulty associated with one or more virtual speeding obstacles, and/or a focus difficulty associated with one or more virtual focus obstacles; and the plurality of virtual ratings corresponding to the plurality of virtual skills, the plurality of virtual ratings including a virtual steering rating of a virtual steering skill, a virtual braking rating of a virtual braking skill, a virtual speeding rating of a virtual speeding skill, and/or a virtual focus rating of a virtual focus skill. In some embodiments, the virtual occurrence generating module is configured to generate the one or more virtual occurrences based further in part upon one or more unlocked regions of a virtual map of the telematics-based game. In some embodiments, the character profile updating module is configured to update a vehicle condition of a virtual vehicle associated with the virtual character, the vehicle condition indicative of a degree of damage sustained by the virtual vehicle during the one or more virtual occurrences based on the associated one or more outcomes.
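The fill-level mechanism can be sketched as follows, assuming a single predetermined fill target per skill and a one-step rating increase each time a fill-level reaches that target; both assumptions are illustrative rather than required.

```python
# Illustrative fill-level training: skill points fill per-skill meters, and a
# virtual rating increases once a fill-level reaches the predetermined target.
from typing import Dict


def apply_skill_points(fill_levels: Dict[str, float],
                       virtual_ratings: Dict[str, float],
                       skill_points: Dict[str, int],
                       fill_target: float = 100.0) -> None:
    for skill, pts in skill_points.items():
        fill_levels[skill] = fill_levels.get(skill, 0.0) + pts
        while fill_levels[skill] >= fill_target:          # fill target reached
            fill_levels[skill] -= fill_target
            virtual_ratings[skill] = min(10.0, virtual_ratings.get(skill, 1.0) + 1.0)


if __name__ == "__main__":
    fills = {"steering": 80.0}
    ratings = {"steering": 4.0}
    apply_skill_points(fills, ratings, {"steering": 45})
    print(fills, ratings)   # steering fill rolls over; rating increases to 5.0
```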
In some embodiments, the presenting module is configured to present the updated vehicle condition of the virtual vehicle to the user. In some embodiments, the character training module is configured to: update, based at least in part upon the plurality of skill points, a plurality of fill-levels corresponding to the plurality of virtual skills; and increase one or more virtual ratings of the plurality of virtual ratings upon any of the fill-levels of the plurality of fill-levels exceeding one or more predetermined fill targets. In various embodiments, a non-transitory computer-readable medium with instructions stored thereon, that upon execution by a processor, causes the processor to perform: receiving telematics data associated with one or more real trips during which a user operated a real vehicle; determining, based at least in part upon the telematics data, a plurality of skill points associated with a plurality of real skills exhibited by the user during the one or more real trips; receiving, from the user, a selection of a virtual character, the virtual character having a character profile and a plurality of virtual ratings associated with a plurality of virtual skills; training the virtual character by at least updating, based at least in part upon the plurality of skill points, the plurality of virtual ratings; generating, based at least in part upon the character profile, one or more virtual occurrences to be encountered by the virtual character; determining, based at least in part upon the updated plurality of virtual ratings, one or more outcomes associated with the one or more virtual occurrences; updating the character profile by at least applying the one or more virtual occurrences based on the associated one or more outcomes to the virtual character; and presenting the updated character profile to the user. 
According to various embodiments, a computer-implemented method for training a virtual character of a telematics-based game includes: receiving, from the user, a selection of a virtual character, the virtual character having a character profile and a plurality of virtual ratings associated with a plurality of virtual skills; generating, based at least in part upon the character profile, one or more virtual occurrences; determining, based at least in part upon the plurality of virtual ratings, one or more outcomes associated with the one or more virtual occurrences; initiating a virtual trip, the virtual trip including the one or more virtual occurrences to be encountered by the virtual character; receiving, in real-time or near real-time with a real trip, telematics data associated with the real trip, the real trip being in process and traveled by a real vehicle operated by the user; determining, based at least in part upon the telematics data, one or more real obstacles encountered by the user during the real trip; determining, based at least in part upon the telematics data, one or more performances indicative of how proficiently the user operated the real vehicle upon encountering the one or more real obstacles; determining, based at least in part upon the one or more performances, one or more skill points associated with a plurality of real skills; training the virtual character by at least updating, based at least in part upon the plurality of skill points, the plurality of virtual ratings; updating, based at least in part upon the updated plurality of virtual ratings, the one or more outcomes; and upon completion of the real trip: updating the character profile based at least in part upon the one or more virtual occurrences and the associated updated one or more outcomes; and presenting the updated character profile to the user. In some embodiments, determining one or more real obstacles includes: determining one or more real steering obstacles; determining one or more real braking obstacles; determining one or more real speeding obstacles; and/or determining one or more real focus obstacles. In some embodiments, determining one or more performances includes: determining one or more steering performances indicative of how proficient the user was at steering the real vehicle upon encountering the one or more real steering obstacles; determining one or more braking performances indicative of how proficient the user was at decelerating the real vehicle upon encountering the one or more real braking obstacles; determining one or more speeding performances indicative of how proficient the user was at accelerating the real vehicle upon encountering the one or more real speeding obstacles; and/or determining one or more focus performances indicative of how proficient the user was at staying focused on operating the real vehicle upon encountering the one or more real focus obstacles. In some embodiments, determining one or more skill points includes: determining one or more steering skill points based at least in part upon the one or more steering performances; determining one or more braking skill points based at least in part upon the one or more braking performances; determining one or more speeding skill points based at least in part upon the one or more speeding performances; and/or determining one or more focus skill points based at least in part upon the one or more focus performances.
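The sketch below illustrates one possible shape of that in-trip loop, assuming the telematics stream has already been reduced to (obstacle, performance) events; the event format, point conversion, and outcome formula are assumptions introduced only for the example.

```python
# Illustrative real-time training loop: score performances per real obstacle,
# convert them to skill points, update virtual ratings, then refresh outcomes.
from typing import Dict, Iterable, List, Tuple

# each event: (obstacle_type, performance score 0.0..1.0), e.g. ("braking", 0.9)
TelematicsEvent = Tuple[str, float]


def points_for_performance(performance: float) -> int:
    return int(round(performance * 10))          # 0..10 skill points per obstacle


def update_ratings_in_trip(ratings: Dict[str, float],
                           events: Iterable[TelematicsEvent],
                           points_per_step: int = 50) -> Dict[str, float]:
    for obstacle, performance in events:         # near real-time event stream
        pts = points_for_performance(performance)
        ratings[obstacle] = min(10.0, ratings.get(obstacle, 1.0) + pts / points_per_step)
    return ratings


def update_outcomes(ratings: Dict[str, float], difficulties: List[Dict[str, float]]) -> List[float]:
    outcomes = []
    for diff in difficulties:
        margin = sum(ratings.get(k, 5.0) - v for k, v in diff.items()) / len(diff)
        outcomes.append(max(0.05, min(0.95, 0.5 + margin / 18.0)))
    return outcomes


if __name__ == "__main__":
    ratings = {"steering": 5.0, "braking": 5.0}
    events = [("steering", 0.8), ("braking", 0.95), ("braking", 0.4)]
    ratings = update_ratings_in_trip(ratings, events)
    print(ratings, update_outcomes(ratings, [{"steering": 6.0}, {"braking": 4.0}]))
```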
In some embodiments, each virtual occurrence of the one or more virtual occurrences includes one or more virtual obstacles to be encountered by the virtual character during the virtual trip. In some embodiments, each outcome of the one or more outcomes correspond to a likelihood of success of the virtual character overcoming the one or more virtual obstacles in each virtual occurrence of the one or more virtual occurrences. In some embodiments, the plurality of real skills includes a real steering skill, a real braking skill, a real speeding skill, and/or a real focus skill; and the plurality of virtual skills includes a virtual steering skill, a virtual braking skill, a virtual speeding skill, and/or a virtual focus skill. In some embodiments, each virtual occurrence of the one or more virtual occurrences includes a steering difficulty corresponding to one or more virtual steering obstacles, a braking difficulty corresponding to one or more virtual braking obstacles, a speeding difficulty corresponding to one or more virtual speeding obstacles, and/or a focus difficulty corresponding to one or more virtual focus obstacles. In some embodiments, determining the one or more outcomes includes determining the one or more outcomes based at least in part upon the steering difficulty, the braking difficulty, the speeding difficulty, the focus difficulty, a virtual steering rating of the virtual steering skill, a virtual braking rating of the virtual braking skill, a virtual speeding rating of the virtual speeding skill, and/or a virtual focus rating of the virtual focus skill. In some embodiments, generating the one or more virtual occurrences includes generating the one or more virtual occurrences based further in part upon one or more unlocked regions of a virtual map of the telematics-based game. In some embodiments, updating the character profile includes updating a vehicle condition of a virtual vehicle associated with the virtual character, the vehicle condition being indicative of a degree of damage sustained by the virtual vehicle during the one or more virtual occurrences based on the associated one or more outcomes. In some embodiments, presenting the updated character profile includes presenting the updated vehicle condition of the virtual vehicle to the user. In some embodiments, training the virtual character includes: updating, based at least in part upon the plurality of skill points, a plurality of fill-levels corresponding to the plurality of virtual skills; and increasing one or more virtual ratings of the plurality of virtual ratings upon any of the fill-levels of the plurality of fill-levels exceeding one or more predetermined fill targets. 
According to various embodiments, a system for training a virtual character of a telematics-based game, the system comprising: a user input module configured to receive, from the user, a selection of a virtual character, the virtual character having a character profile and a plurality of virtual ratings associated with a plurality of virtual skills; a virtual occurrence generating module configured to generate, based at least in part upon the character profile, one or more virtual occurrences; an outcome determining module configured to determine, based at least in part upon the plurality of virtual ratings, one or more outcomes associated with the one or more virtual occurrences; a virtual trip initiating module configured to initiate a virtual trip, the virtual trip including the one or more virtual occurrences to be encountered by the virtual character; a data receiving module configured to receive, in real-time or near real-time with a real trip, telematics data associated with the real trip, the real trip being in process and traveled by a real vehicle operated by the user; a real obstacle determining module configured to determine, based at least in part upon the telematics data, one or more real obstacles encountered by the user during the real trip; a performance determining module configured to determine, based at least in part upon the telematics data, one or more performances indicative of how proficiently the user operated the real vehicle upon encountering the one or more real obstacles; a skill point determining module configured to determine, based at least in part upon the one or more performances, one or more skill points associated with a plurality of real skills; a character training module configured to train the virtual character by at least updating, based at least in part upon the plurality of skill points, the plurality of virtual ratings; an outcome updating module configured to update, based at least in part upon the updated plurality of virtual ratings, the one or more outcomes; a character profile updating module configured to, upon completion of the real trip, update the character profile based at least in part upon the one or more virtual occurrences and the associated updated one or more outcomes; and a presenting module configured to present the updated character profile to the user. In some embodiments, the real obstacle determining module is configured to: determine one or more real steering obstacles; determine one or more real braking obstacles; determine one or more real speeding obstacles; and/or determine one or more real focus obstacles. In some embodiments, the performance determining module is configured to: determine one or more steering performances indicative of how proficient the user was at steering the real vehicle upon encountering the one or more real steering obstacles; determine one or more braking performances indicative of how proficient the user was at decelerating the real vehicle upon encountering the one or more real braking obstacles; determine one or more speeding performances indicative of how proficient the user was at accelerating the real vehicle upon encountering the one or more real speeding obstacles; and/or determine one or more focus performances indicative of how proficient the user was at staying focused on operating the real vehicle upon encountering the one or more real focus obstacles.
In some embodiments, the skill point determining module is configured to: determine one or more steering skill points based at least in part upon the one or more steering performances; determine one or more braking skill points based at least in part upon the one or more braking performances; determine one or more speeding skill points based at least in part upon the one or more speeding performances; and/or determine one or more focus skill points based at least in part upon the one or more focus performances. In some embodiments, the outcome determining module is configured to determine the one or more outcomes based at least in part upon the steering difficulty, the braking difficulty, the speeding difficulty, the focus difficulty, a virtual steering rating of the virtual steering skill, a virtual braking rating of the virtual braking skill, a virtual speeding rating of the virtual speeding skill, and/or a virtual focus rating of the virtual focus skill. In some embodiments, the virtual occurrence generating module is configured to generate the one or more virtual occurrences based further in part upon one or more unlocked regions of a virtual map of the telematics-based game. In some embodiments, the character profile updating module is configured to update a vehicle condition of a virtual vehicle associated with the virtual character. In some examples, the vehicle condition is indicative of a degree of damage sustained by the virtual vehicle during the one or more virtual occurrences based on the associated one or more outcomes. In some embodiments, the presenting module is configured to present the updated vehicle condition of the virtual vehicle to the user. In some embodiments, the character training module is configured to: update, based at least in part upon the plurality of skill points, a plurality of fill-levels corresponding to the plurality of virtual skills; and increase one or more virtual ratings of the plurality of virtual ratings upon any of the fill-levels of the plurality of fill-levels exceeding one or more predetermined fill targets. 
According to various embodiments, a non-transitory computer-readable medium with instructions stored thereon, that upon execution by a processor, causes the processor to perform: receiving, from the user, a selection of a virtual character, the virtual character having a character profile and a plurality of virtual ratings associated with a plurality of virtual skills; generating, based at least in part upon the character profile, one or more virtual occurrences; determining, based at least in part upon the plurality of virtual ratings, one or more outcomes associated with the one or more virtual occurrences; initiating a virtual trip, the virtual trip including the one or more virtual occurrences to be encountered by the virtual character; receiving, in real-time or near real-time with a real trip, telematics data associated with the real trip, the real trip being in process and traveled by a real vehicle operated by the user; determining, based at least in part upon the telematics data, one or more real obstacles encountered by the user during the real trip; determining, based at least in part upon the telematics data, one or more performances indicative of how proficient the user operated the real vehicle upon encountering the one or more real obstacles; determining, based at least in part upon the one or more performances, one or more skill points associated with a plurality of real skills; training the virtual character by at least updating, based at least in part upon the plurality of skill points, the plurality of virtual ratings; updating, based at least in part upon the updated plurality of virtual ratings, the one or more outcomes; and upon completion of the real trip: updating the character profile to reflect the one or more virtual occurrences and the associated updated one or more outcomes; and presenting the updated character profile to the user. One or More Examples of Machine Learning According to Various Embodiments According to some embodiments, a processor or a processing element may be trained using supervised machine learning and/or unsupervised machine learning, and the machine learning may employ an artificial neural network, which, for example, may be a convolutional neural network, a recurrent neural network, a deep learning neural network, a reinforcement learning module or program, or a combined learning module or program that learns in two or more fields or areas of interest. Machine learning may involve identifying and recognizing patterns in existing data in order to facilitate making predictions for subsequent data. Models may be created based upon example inputs in order to make valid and reliable predictions for novel inputs. According to certain embodiments, machine learning programs may be trained by inputting sample data sets or certain data into the programs, such as images, object statistics and information, historical estimates, and/or actual repair costs. The machine learning programs may utilize deep learning algorithms that may be primarily focused on pattern recognition and may be trained after processing multiple examples. The machine learning programs may include Bayesian Program Learning (BPL), voice recognition and synthesis, image or object recognition, optical character recognition, and/or natural language processing. The machine learning programs may also include natural language processing, semantic analysis, automatic reasoning, and/or other types of machine learning. 
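As a small, concrete instance of supervised machine learning in this context, the sketch below fits an ordinary least-squares linear model (via numpy) that maps per-trip telematics features to known driving scores, then predicts a score for a new trip. The feature names, labels, and data values are invented for illustration and do not represent any particular training set or model of the embodiments.

```python
# Illustrative supervised learning: fit a linear model mapping telematics
# features to observed driving scores, then predict a score for a new trip.
import numpy as np

# rows: [hard_brakes_per_mile, pct_time_over_limit, sharp_turns_per_mile]
X = np.array([
    [0.1, 0.02, 0.2],
    [0.8, 0.15, 0.9],
    [0.3, 0.05, 0.4],
    [1.2, 0.30, 1.5],
])
y = np.array([9.0, 4.5, 7.5, 2.0])     # example labels: known driving scores (1..10)

# add a bias column and solve the least-squares problem
X_design = np.hstack([X, np.ones((X.shape[0], 1))])
weights, *_ = np.linalg.lstsq(X_design, y, rcond=None)

# predict a score for a new, unseen trip (bias term appended)
new_trip = np.array([0.5, 0.10, 0.6, 1.0])
print("predicted score:", float(new_trip @ weights))
```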
According to some embodiments, supervised machine learning techniques and/or unsupervised machine learning techniques may be used. In supervised machine learning, a processing element may be provided with example inputs and their associated outputs and may seek to discover a general rule that maps inputs to outputs, so that when subsequent novel inputs are provided the processing element may, based upon the discovered rule, accurately predict the correct output. In unsupervised machine learning, the processing element may need to find its own structure in unlabeled example inputs. One or More Examples of Modules According to Various Embodiments Additionally, certain embodiments are described herein as including logic or a number of routines, subroutines, applications, or instructions. These may constitute either software (e.g., code embodied on a non-transitory, machine-readable medium) or hardware. In hardware, the routines, etc., are tangible units capable of performing certain operations and may be configured or arranged in a certain manner. In example embodiments, one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations as described herein. In various embodiments, a hardware module may be implemented mechanically or electronically. For example, a hardware module may comprise dedicated circuitry or logic that may be permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations. A hardware module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that may be temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations. Accordingly, the term “hardware module” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where the hardware modules comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different hardware modules at different times. Software may accordingly configure a processor, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time. Hardware modules may provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. 
Where multiple of such hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) that connect the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory device to which it may be communicatively coupled. A further hardware module may then, at a later time, access the memory device to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices, and may operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions. The modules referred to herein may, in some example embodiments, comprise processor-implemented modules. Similarly, the methods or routines described herein may be at least partially processor-implemented. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented hardware modules. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a home environment, an office environment, or as a server farm), while in other embodiments the processors may be distributed across a number of locations. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the one or more processors or processor-implemented modules may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the one or more processors or processor-implemented modules may be distributed across a number of geographic locations. Unless specifically stated otherwise, discussions herein using words such as “processing,” “computing,” “calculating,” “determining,” “presenting,” “displaying,” or the like may refer to actions or processes of a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or a combination thereof), registers, or other machine components that receive, store, transmit, or display information. 
Additional Considerations According to Various Embodiments For example, some or all components of various embodiments of the present disclosure each are, individually and/or in combination with at least another component, implemented using one or more software components, one or more hardware components, and/or one or more combinations of software and hardware components. As an example, some or all components of various embodiments of the present disclosure each are, individually and/or in combination with at least another component, implemented in one or more circuits, such as one or more analog circuits and/or one or more digital circuits. For example, while the embodiments described above refer to particular features, the scope of the present disclosure also includes embodiments having different combinations of features and embodiments that do not include all of the described features. As an example, various embodiments and/or examples of the present disclosure can be combined. Additionally, the methods and systems described herein may be implemented on many different types of processing devices by program code comprising program instructions that are executable by the device processing subsystem. The software program instructions may include source code, object code, machine code, or any other stored data that is operable to cause a processing system to perform the methods and operations described herein. Certain implementations may also be used, however, such as firmware or even appropriately designed hardware configured to perform the methods and systems described herein. The systems' and methods' data (e.g., associations, mappings, data input, data output, intermediate data results, final data results) may be stored and implemented in one or more different types of computer-implemented data stores, such as different types of storage devices and programming constructs (e.g., RAM, ROM, EEPROM, Flash memory, flat files, databases, programming data structures, programming variables, IF-THEN (or similar type) statement constructs, application programming interface). It is noted that data structures describe formats for use in organizing and storing data in databases, programs, memory, or other computer-readable media for use by a computer program. The systems and methods may be provided on many different types of computer-readable media including computer storage mechanisms (e.g., CD-ROM, diskette, RAM, flash memory, computer's hard drive, DVD) that contain instructions (e.g., software) for use in execution by a processor to perform the methods' operations and implement the systems described herein. The computer components, software modules, functions, data stores and data structures described herein may be connected directly or indirectly to each other in order to allow the flow of data needed for their operations. It is also noted that a module or processor includes a unit of code that performs a software operation, and can be implemented for example as a subroutine unit of code, or as a software function unit of code, or as an object (as in an object-oriented paradigm), or as an applet, or in a computer script language, or as another type of computer code. The software components and/or functionality may be located on a single computer or distributed across multiple computers depending upon the situation at hand. The computing system can include client devices and servers. 
A client device and server are generally remote from each other and typically interact through a communication network. The relationship of client device and server arises by virtue of computer programs running on the respective computers and having a client device-server relationship to each other. This specification contains many specifics for particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations, one or more features from a combination can in some cases be removed from the combination, and a combination may, for example, be directed to a subcombination or variation of a subcombination. Similarly, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Although specific embodiments of the present disclosure have been described, it will be understood by those of skill in the art that there are other embodiments that are equivalent to the described embodiments. Accordingly, it is to be understood that the present disclosure is not to be limited by the specific illustrated embodiments.
157,046
11857867
DETAILED DESCRIPTION OF NON-LIMITING EXAMPLE EMBODIMENTS A first embodiment provides a system that monitors gaming chips of a casino in real time and the system can handle an error when the error occurs by managing all gaming chips in the casino. The first embodiment further provides a system of continuous monitoring that, in addition to management of gaming chips, prevents losses because if a shuffled playing card package is once lost in a casino, someone may know the alignment thereof and the package may not be used in a game. A system that manages gaming chips according to the first embodiment of the present invention will be described below.FIG.1is an explanatory diagram providing an overview of a table game of casino according to an embodiment of the present invention. In the present embodiment, a game table21includes a betting area24where a betting person2places a gaming chip3as a bet and a chip tray17capable of housing a plurality of gaming chips to collect a lost chip3L and redeem a won chip3W after each game ends. Also, a card shooter apparatus25placed on the game table21, having a card reader that reads the number (rank) of a mark of the card6, and having a controller27that determines the winner according to rules of a table game based on information of the number (rank) of the card6successively read by the card shooter apparatus25is installed. An increase/decrease amount of the gaming chips3in the chip tray17before/after collection of the lost chip3L and redemption of the won chip3W can be calculated by comparing the total of the gaming chips3in the chip tray17before collection of the lost chip3L and redemption of the won chip3W and the total of the gaming chips3in the chip tray17after collection of the lost chip3L and redemption of the won chip3W. The total of the gaming chips3in the chip tray17before collection of the lost chip3L and redemption of the won chip3W and the total of the gaming chips3in the chip tray17after collection of the lost chip3L and redemption of the won chip3W can be detected by embedding RFID indicating its quantity in the gaming chip3and providing an RFID reader18in the chip tray17. FIG.2Ais a perspective view of a dedicated chip case100to house a plurality of gaming chips3used in casinos. The chip case100includes an upper portion101and a lower portion102. In the present embodiment, the upper portion101and the lower portion102are made of transparent resin. A light transmission portion that allows light to transmit may also be provided so as to be able to image housed gaming chips using a camera. Also in the present embodiment, the case100has a sealing structure made of the upper portion101and the lower portion102, but the light transmission portion may be in a perforated state. In the present embodiment, the case100has a shape in which five columns, in each of which 20 pieces of the gaming chip3are overlaid and housed, are formed in parallel and in the example ofFIG.2A, the cross section of each column is polygonal (octagonal) so that the upper portion101and the lower portion102roughly match the shape of a gaming chip C. A unique chip case ID code103is attached to the chip case100. The chip case ID code103is related to a chip ID code4of the gaming chip3housed in the chip case. FIG.2Bis a perspective view of a case100′ according to a modification. The case100′ is also made of an upper portion101′ and the lower portion102constructed of transparent resin. In the present modification, the top surface is formed from a flat surface. 
By changing to the flat surface, a linear shadow due to edges of a polygonal cross section of the case100′ does not appear in the camera so that information of the side face of the gaming chip3can correctly be identified in image analysis of a shot image of the camera. FIG.3is an explanatory diagram providing an overview of a management system of gaming chips installed beside the game table21and using a storage box200storing a plurality of the chip cases100in which the gaming chips3are housed. The gaming chips3for replenishment are housed in the storage box200while being put into the chip case100and when the gaming chips3in the chip tray17run short, a dealer11takes out the chip case100together with the gaming chips3for replenishment from the storage box200to set the chip case100to the chip tray17. When the gaming chips3in the chip tray17become excessive, the excessive gaming chips3can be put into the chip case100and housed in the storage box200. Thus, the storage box200is placed by the dealer11beside the game table21. When the chip cases100stored in the storage box200and housing the gaming chips3run short, as many the chip cases100as necessary can be moved from a cage (cashier)201in a casino for replenishment. When the chip cases100stored in the storage box200become excessive, the chip cases100that are excessive are moved to the cage201. The chip case ID code103is attached to the chip case100and the chip case ID code103attached to the chip case100is continuously read by one or a plurality of readers for reading chip case ID202installed inside the storage box200. The storage box200also includes one or a plurality of chip readers that reads the chip ID code4of all the gaming chips3stored. A control apparatus204has a function to output the total number of the gaming chips3stored in the storage box and all the chip ID codes4stored in the storage box by monitoring the chip ID code4read by a chip reader203. The control apparatus204has a function to output whether an increase/decrease amount or increase/decrease value of the gaming chips3placed in the chip tray17of the game table21or an increase/decrease amount or increase/decrease value of the gaming chips3placed in the cage201that manages the gaming chips3of a casino and an increase/decrease amount or increase/decrease value of the gaming chips3housed in the storage box200match. Whether the chip case100placed in the storage box200is inside the storage box200may be monitored by the control apparatus204at fixed intervals (for example, every one minute, every five minutes, every one hour or more). The storage box200may have the reader for reading chip case ID202to read the chip case ID code103of the chip case100arranged in a drawer205of the storage box200. The reader for reading chip case ID202and the chip reader203of the storage box200may be a bar code reader R (alternatively, an RFID tag reader or QR code (registered trademark) reader (not shown) may be used instead of the bar code reader R). The reader for reading chip case ID202and the chip reader203may be installed so as to be able to scan in the X direction and Y direction to read all ID codes of the chip cases100by a scan unit53installed in the drawer205. Also, a transmission unit206to transmit information obtained by the reader for reading chip case ID202and the chip reader203to the outside of the storage box200is provided. The storage box200has a lock unit207to prevent the chip case100from being taken out from the storage box200by opening the drawer205. 
The lock unit207is unlocked only while an authorized person of a casino puts in or takes out the chip case100from the storage box200. Only an authorized person of a casino can operate the lock unit207. The storage box200includes the lock unit207to prevent the drawer205from opening and the lock unit207may include a warning unit (may be wireless) to notify that the drawer205has opened. When a notification that the drawer205has opened is received (or when appropriate), the storage box200may be imaged by a nearest monitoring camera29to record taking in or out of the chip case100from the storage box200by an authorized person or others. By recording such behavior (images in which the storage box200is opened), the fact that the chip case100can be taken in or out from the storage box200only while the lock unit207is unlocked by an authorized person (while the drawer205is opened) can be confirmed. By monitoring such images, the presence of all the chip cases100inside the storage box200can be confirmed. The storage box200may have the plurality of readers202to read the chip case ID code103in an upper portion inside the storage box200. The storage box200has the drawer205and the chip case100can be taken in or out by opening or closing the drawer205. The storage box200includes the lock unit207to prevent illegal pilfering of the chip case100. Further, as another idea, the storage box200may include as many the readers202as the maximum number of the chip cases100that can be stored in an upper portion inside the storage box200. Next, a gaming chip according to an embodiment of the present invention will be described.FIG.4is a perspective view of a state in which the gaming chips3are stacked andFIG.5is a sectional side elevation of the gaming chip3. As shown inFIG.4, the gaming chip3has at least a 5-layer structure in which a common color layer122has printing123(such as 100 points) indicating the kind (value) of the gaming chip3done on the surface (the upper surface and the undersurface), a transparent layer120is provided on the outermost layer, and layers are thermocompression-bonded. These gaming chips3are formed by using a plastic material in a long and narrow shape, forming a closely adhering condition (such as a 5-layer structure) in which each layer (a designated color layer121, the common color layer122, and the transparent layer120) is thermocompression-bonded in a long state, and then stamping into a circular shape or rectangular shape by press or the like. An R finish (round angle) is given to edges of the transparent layer120in the outermost layer by designing dimensions of the die and punch of the mold to stamp when stamped by press. Further, the gaming chip3is provided with a mark M in UV ink or carbon black ink on the surface of the common color layer122. The mark M indicates genuineness of the gaming chip3and becomes visible when an ultraviolet ray (or an infrared ray) is applied thereto to indicate authenticity by its shape or a combination of numbers. The transparent layer120is thermocompression-bonded or coated as the outermost layer like covering the printing123and the mark M and the transparent layer120are embossed to prevent the gaming chips3from coming into close contact with each other. An R finish (R) is given to edges of the transparent layer120in the outermost layer where the printing123(such as 100 points) is done to prevent the surface of the common color layer122from appearing on the side face after being discolored during the stamping process of gaming chip3. 
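As a purely illustrative aside, the increase/decrease calculation for the chip tray 17 and the matching check attributed to the control apparatus 204 in the preceding paragraphs could be sketched as follows; the data shapes, helper names, and example values are assumptions for illustration and are not part of the disclosure.

```python
# Hypothetical sketch (not part of the disclosure) of two calculations described
# above: (1) the increase/decrease of the gaming chips 3 in the chip tray 17,
# obtained by comparing RFID reads taken before and after collection of lost
# chips and redemption of won chips, and (2) the control apparatus 204 check
# that a change at the storage box 200 matches the change at the chip tray or
# cage. Data shapes and names are illustrative assumptions.

def tray_delta(reads_before, reads_after):
    """Each argument maps a chip ID (read from its RFID tag) to that chip's value."""
    collected = set(reads_after) - set(reads_before)   # lost chips collected into the tray
    redeemed = set(reads_before) - set(reads_after)    # won chips redeemed from the tray
    value_change = sum(reads_after.values()) - sum(reads_before.values())
    return value_change, sorted(collected), sorted(redeemed)

def movements_match(storage_box_change, tray_or_cage_change):
    """Chips leaving the storage box should appear at the tray/cage, and vice versa."""
    return storage_box_change + tray_or_cage_change == 0

before = {"CHIP-001": 100, "CHIP-002": 25}
after = {"CHIP-001": 100, "CHIP-002": 25, "CHIP-007": 500}   # a lost chip was collected
print(tray_delta(before, after))                             # (+500, ['CHIP-007'], [])
print(movements_match(-10_000, +10_000))                     # True: case moved from box to tray
print(movements_match(-10_000, +7_500))                      # False: discrepancy to report
```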
Also, the R finish prevents hands and other gaming chips3from being damaged by otherwise remaining sharp edges of the gaming chip3. The designated color layer121may be formed from, as shown inFIG.5, a plurality (three layers inFIG.5) of layers colored in the designated color. The plurality (three layers inFIG.5) of layers colored in the designated color is thermocompression-bonded to each other and thus, the 3-layer structure is not visible as shown inFIG.5andFIG.5shows the three layers of the designated color layer121from a description viewpoint. Further, a hollow B is partially provided in the center layer of the three layers of the designated color layer121and an RFID tag125is contained therein. As shown inFIGS.4and5, the gaming chip3has a stacked multilayer structure and a striped pattern in a lamination direction is clearly formed on the side face so that, when compared with a conventional gaming chip, the color (the kind of gaming chip) of the designated color layer121and the number can be measured easily and correctly by the image analysis. The side face of the gaming chip3can be photographed by a camera so that the designated color layer121can clearly be identified. Further, if, in addition to the image analysis, an AI-utilizing computer or control system and deep learning (structure) technology are used, the analysis and determination of images can be made more correct. The AI-utilizing computer or control system and deep learning (structure) technology are already known and available to persons skilled in the art and so a detailed description thereof is omitted. Next, an overview of a management system of gaming chips and shuffled playing cards using the storage box according to a second embodiment of the present invention will be described. In the second embodiment of the present invention, both of gaming chips and packages of shuffled playing cards are managed. FIG.6is an explanatory diagram providing an overview of a package of shuffled playing cards used in a table game of a casino and aligned randomly being used in the casino according to an embodiment of the present invention. In the present embodiment, shuffled playing cards301S are packed as a package302and the package302is unpacked so that the shuffled playing cards301S can be used for games on the table and set to the card shooter apparatus25. During the game, the dealer11draws out cards301from the card shooter apparatus25and distributes the cards301to the game table21. Cards of each shuffled playing card301S with a predetermined number of decks (usually 6, 8, 9 or 10 decks) are produced so as to be randomly shuffled and uniquely randomly aligned individually and packed together with a package ID code304attached to the package302as a bar code303individually identifiable by the bar code reader R. RFID (or an RF tag) may be attached to the ID code304instead of or together with the bar code303. FIGS.7and8are explanatory diagrams providing an overview of a management system of gaming chips and packages of shuffled playing cards using the storage box according to the second embodiment of the present invention. The embodiment of the present invention provides a management system of the gaming chips3and the packages302of the shuffled playing cards301S to play a game (baccarat). 
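Returning briefly to the image analysis of the striped side face of the gaming chip 3 described above, the following hypothetical sketch shows one simple way a sampled stripe color could be mapped to a chip kind by comparing it against reference colors; the reference table and nearest-color rule are illustrative assumptions only and do not represent the AI or deep learning approach mentioned in the text.

```python
# Hypothetical sketch of mapping the designated color layer 121 visible on a
# chip's side face to a chip value: classify the sampled stripe color against
# reference colors. The reference table and nearest-color rule are illustrative
# assumptions, not the patented method.
REFERENCE_COLORS = {          # (R, G, B) -> chip value, assumed for illustration
    (200, 30, 30): 5,         # red
    (30, 120, 40): 25,        # green
    (20, 20, 20): 100,        # black
    (120, 60, 160): 500,      # purple
}

def classify_stripe(rgb):
    """Return the chip value whose reference color is nearest to the sampled color."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(REFERENCE_COLORS, key=lambda ref: dist2(ref, rgb))
    return REFERENCE_COLORS[nearest]

print(classify_stripe((25, 115, 50)))   # -> 25 (closest to the green reference)
```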
The gaming chips3for replenishment are housed in the storage box200while being put into the chip case100and when the gaming chips3in the chip tray17run short, the dealer11takes out the chip case100together with the gaming chips3for replenishment from the storage box200to set the chip case100to the chip tray17. When the gaming chips3in the chip tray17become excessive, the excessive gaming chips3can be put into the chip case100and housed in the storage box200. The storage box200also houses the package302to be used for the next game and the dealer11takes out the package302to be used for the next game from the storage box200and sets the shuffled playing cards301S to the card shooter apparatus25. The chip case ID code103is attached to the chip case100and the chip case ID code103attached to the chip case100is continuously read by the reader for reading chip case ID202installed inside the storage box200. Also, the package ID code304is attached to the package302and the package ID code304attached to the package302is continuously read by a reader for reading package ID305installed inside the storage box200. The storage box200includes one or a plurality of the readers for reading package ID305that reads the playing card ID code of all stored shuffled playing cards and also one or a plurality of the chip readers202,203that reads the case ID code of all the stored chip cases. The control apparatus204has a function to output the total numbers of the shuffled playing cards301S and the chip cases100stored in the storage box200and all the package ID codes304and all the chip case ID codes103stored in the storage box200by monitoring the playing card ID code read by the reader for reading package ID305and the chip case ID code103read by the chip reader203. The control apparatus204has a function to grasp an increase/decrease of the chip cases100housed in the storage box200by periodically monitoring the case ID code103of the chip case100and, when the increase/decrease is grasped, to output the total amount after the increase/decrease of value of all the gaming chips3housed in the storage box200. Whether the chip case100and the package302placed in the storage box200are inside the storage box200may be monitored by the control apparatus204at fixed intervals (for example, every one minute, every five minutes, every one hour or more). The storage box200may have the reader for reading package ID305to read the ID code304of the package302of shuffled playing cards and the reader for reading chip case ID202to read the chip case ID code103of the chip case100arranged in the drawer205of the storage box200. The reader for reading package ID305and the reader for reading chip case ID202of the storage box200may be a bar code reader R or the monitoring camera29(alternatively, an RFID tag reader or QR code (registered trademark) reader (not shown) may be used instead of the bar code reader R). The reader for reading package ID305and the reader for reading chip case ID202may be installed so as to be able to scan in the X direction and Y direction to read all the ID codes304of the packages302by the scan unit53installed in the drawer205. Also, a transmission unit206to transmit information obtained by the reader for reading package ID305and the reader for reading chip case ID202to the outside of the storage box200is provided. The storage box200has the lock unit207to prevent the package302from being taken out from the storage box200by opening the drawer205. 
The lock unit207is unlocked only while an authorized person of a casino takes in or out the package302from the storage box200(the drawer205is opened). Only an authorized person of a casino can operate the lock unit207. The storage box200includes the lock unit207to prevent the drawer205from opening and the lock unit207may include a warning unit (may be wireless) to notify that the drawer205has opened. When a notification that the drawer205has opened is received (or when appropriate), the storage box may be imaged by the nearest monitoring camera29to record taking in or out of the package302of the chip case from the storage box200by an authorized person or others. By recording such behavior (images in which the storage box200is opened), the fact that the package302can be taken in or out from the storage box200only while the lock unit207is unlocked by an authorized person (while the drawer205is opened) can be confirmed. By monitoring such images, the presence of all the packages302inside the storage box200can be confirmed. FIG.9is a diagram providing an overview of a system that manages packages of shuffled playing cards and chip cases housing gaming chips according to the second embodiment of the present invention. As another embodiment of the storage box200, the storage box200may have a plurality of the readers for reading package ID305to read the package ID code304and a plurality of the readers for reading chip case ID202to read the chip case ID code103in an upper portion inside the storage box200. By moving each of the scan unit53arranged in an upper portion of the storage box200in the Y direction, each ID code reader moves in the Y direction to read all of the package ID code304of the package302in each column below each ID code reader and the chip case ID code103of the chip case100. The storage box200has a drawer and the package302and the chip case100can be taken in or out by opening or closing the drawer. The storage box200includes a lock unit56to prevent illegal pilfering of the package302and the chip case100. Further, as another idea, the storage box200may include as many ID code readers as the maximum numbers of the packages302and the chip cases100that can be stored in an upper portion inside the storage box200. The present embodiment relates to, like the second embodiment, improvements of technology to read ID of the gaming chip3and the playing card6in the storage box200. Incidentally, matters described in the first or second embodiment can also be applied to the third embodiment. When RFID tags are attached to different items such as gaming chips and playing cards for management, RFID tags of the same frequency are normally used and content of each item is written into the RFID tag. Accordingly, content of the item can be recognized by reading the RFID tag. In a clothing store, for example, RF tags using radio waves of the same frequency are attached to socks and shirts and information about whether an item is a shirt or socks (information indicating the type of an item) is written into each RF tag. Then, by reading the RF tag, whether the item is socks or a shirt can be recognized. For security items like gaming chips and playing cards used in casinos, however, the security level demanded may be different from item to item. Also, circumstances in which RFID tags of all kinds of security items can be read by the same reading device (RFID reader) is dangerous and it is desirable to use a separate reading device for each security item. 
The present embodiment is developed in view of the above circumstances, and an object thereof is to improve safety when a plurality of types of security items is managed by RFID. Hereinafter, a system according to the present embodiment will be described specifically with reference to the drawings. In the description that follows, the description is omitted when appropriate by attaching the same reference signs to the same elements as those in the above embodiments.FIG.10is a diagram showing the configuration of a system according to the present embodiment. A system500manages the package302of the shuffled playing cards301S and the gaming chips3. The system500includes the game table21to play card games and the storage box200to store the playing cards6and the gaming chips3used in card games. The game table21is formed linearly on a side corresponding to the dealer position where the dealer is positioned and formed like an elliptic curve on a side corresponding to the player position where players are positioned. The chip tray17to house the gaming chips3of the dealer is provided in front of the dealer position of the game table21. Games are played on the game table21using the playing cards6and the gaming chips3. The chip tray17is embedded in the game table21by a removable method. The dealer collects the gaming chips3bet by the losing player from the game table21and houses the gaming chips3in the chip tray17and then pays out the gaming chips3to the winning player from the chip tray17. The gaming chip3to be used is the same as that described in the first embodiment and contains the RFID tag125as a wireless tag and also has a striped pattern on the side face. In the RFID tag125, chip ID that uniquely identifies the gaming chip3and information indicating value of the gaming chip3are stored. Also, the color of the striped pattern on the side face indicates value of the gaming chip3. The card shooter apparatus25is installed on the game table21. The card shooter apparatus25is configured in the same manner as in the first embodiment and playing cards of the predetermined number of decks pulled out from the package302are housed in the card shooter apparatus25and taken out one by one from an outlet by the dealer to be submitted to a card game. Playing cards housed in the card shooter apparatus25are provided as the package or container (hereinafter, simply called “package”)302. Playing cards constituting the predetermined number of decks are shuffled randomly and individually constituted as the package302. The package302is configured in the same manner as in the first embodiment. In the present embodiment, an RFID tag306is attached to the package302as a wireless tag. Package ID that uniquely identifies each of the packages302is stored in the RFID tag306. Incidentally, the RFID tag306may be embedded or included in the package302. The storage box200has a cabinet form and has a plurality of drawers. In the present embodiment, an upper drawer is a chip drawer210as a chip storage box or chip storage means that houses the gaming chips3and a lower drawer is a card drawer220as a card storage box or card storage means that houses the packages302of shuffled playing cards. That is, the gaming chips3and the packages302are housed in different drawers of the storage box200. Also, the storage box200is integrally configured by including the chip drawer210as a chip storage box and the card drawer220as a card storage box. The chip drawer210stores a plurality of the gaming chips3used on the game table21. 
The card drawer220stores a plurality of the packages302of shuffled playing cards carried from a card room and inserted into the card shooter apparatus25on the game table21. The chip drawer210and the card drawer220include the lock unit207as an opening/closing lock apparatus. The storage box200is arranged in the dealer position under the game table21as a position easy to access from the dealer and difficult to access from players. Also, the storage box200is provided with the control unit204and the transmission unit206similar to those in the first embodiment. The control unit204is configured by a management program in the present embodiment being executed by a computer including a storage apparatus. The transmission unit206communicates with other devices by wire or by wireless. FIG.11is a perspective view of an example of the card drawer220. The card drawer220of the example inFIG.11has a size capable of housing three units of the package302in the width direction and three units in the depth direction, and maximally nine units of the package302. A card antenna602to read the RFID tag306attached to the package302is contained in the sidewall on the left and right of the card drawer220. The card antenna602may also be affixed to the inner surface of the sidewall. The storage box200further includes a reader for reading package ID601connected to the card antenna602and the reader for reading package ID601is connected to the control unit204. The reader for reading package ID601is an RFID reader and reads information stored in the RFID tag306attached to the package302via the card antenna602. The card antenna602extends in the depth direction of the sidewall and can read the RFID tag306of all the packages302housed in the card drawer220. The card RFID reader601, the card antenna602, and the RFID tag306attached to the package302constitute a card RFID system600. FIG.12is a plan view of another example of the card RFID system600. The card drawer220of the card RFID system600in this example has a size capable of housing three units of the package302in the width direction, six units in the depth direction, and maximally 18 units of the package302. A total of six units of the card antenna602, two units in the width direction and three units in the depth direction are provided on the undersurface of a member covering the card drawer220on the storage box200from above (a member partitioning the chip drawer210and the card drawer220). Using the six units of the card antenna602, the RFID tags306attached to the 18 units of the package302that can be housed can be read. The six units of the card antenna602are connected to the card RFID reader601as a reader for reading package ID. In this case, a plurality of the card RFID readers601is provided and one of the card RFID readers601may be connected to one of the card antennas602or one of the card RFID readers601may be connected to a plurality of the card antennas602. Thus, the RFID tags306of all the packages302housed in the card drawer220can be read by one or the plurality of card RFID readers601. In the example ofFIG.12, the card antenna602is provided in a plate member partitioning the chip drawer210and the card drawer220in the storage box200, but instead, the card antenna602may be provided at the bottom of the card drawer220in the same arrangement as inFIG.12. FIG.13is a plan view of an example of a chip RFID system700. 
In the example ofFIG.13, the chip RFID system700has a size capable of housing three units of the chip case100in the width direction and two units in the depth direction, and maximally six units of the chip case100. Incidentally, the chip case100is configured in the same manner as the chip case100in the first embodiment. That is, the chip case100has a plurality of columns housing the gaming chips3by being stacked in the thickness direction and the chip case100can house 100 pieces of the gaming chips3. A pair of left and right chip antennas702sandwiching the chip case100from both sides are provided in each housing position of the six units of the chip case100. The chip antenna702is provided on the undersurface of a member covering the chip drawer210(a top plate of the storage box200) in the storage box200and extends downward. The chip antenna702is connected to a chip RFID reader701as a reader for reading chip ID. In this case, a plurality of the chip RFID readers701is provided and one of the chip RFID readers701may be connected to one of the chip antennas702or one of the chip RFID readers701may be connected to a plurality of the chip antennas702. Thus, information stored in the RFID tags125of all the gaming chips3housed in the chip drawer210can be read by one or the plurality of chip RFID readers701. The chip RFID reader701, the chip antenna702, and the RFID tag125contained in the gaming chip3constitute a chip RFID system700. In the example ofFIG.13, the chip antenna702is provided in the top plate of the storage box200, but instead, the chip antenna702may also be provided on the upper surface of the bottom of the chip drawer210in the same arrangement as inFIG.13. Hereinafter, control of the control unit204will be described. First, the control unit204has a function similar to that in the first and second embodiments. The control unit204grasps the number of the packages302housed in the card drawer220of the storage box200and their package IDs based on read results of the RFID tag306attached to the package302by the card RFID system600. Also, the control unit204grasps the number of the gaming chips3housed in the chip drawer210of the storage box200, their chip IDs, and value thereof based on read results of the RFID tag125contained in the gaming chip3by the chip RFID system700. The control unit204further determines the total amount of value of the gaming chips3housed in the chip drawer210based on the number of each value of the gaming chips3housed in the chip drawer210. The card RFID system600and the chip RFID system700periodically read the RFID tag125and the RFID tag306at predetermined intervals and output read results to the control unit204together with table ID that identifies the game table21. The control unit204monitors read results of the card RFID system600and the chip RFID system700and, when read results vary, detects the variations and records the table ID and read results in the storage apparatus together with the relevant date and time. Instead, the control unit204may record all read results of the card RFID system600and the chip RFID system700in the storage apparatus together with the relevant date and time and table ID. Alternatively, the control unit204and the lock unit207may be linked so that the control unit204records read results when the lock unit207is unlocked. A warning unit (for example, a warning lamp or an alarm output speaker) may be connected to the control unit204. 
In such a case, all package IDs and chip IDs that can be detected are stored in the storage apparatus of the control unit204and the control unit204determines whether the read package ID or chip ID matches one of package IDs and chip IDs stored in the storage apparatus. If the read package ID or chip ID matches none of package IDs and chip IDs stored in the storage apparatus, the control unit204may control the warning unit to output a warning (for example, a warning lamp is turned on or an alarm is output from an alarm output speaker). If two units of the package302or more decrease at a time (the package302is normally fetched one unit at a time) or the number of the gaming chips3is not a multiple of 100 (100 pieces of the gaming chips3, which is the maximum number that can be housed, are normally housed in the chip case100before being housed in the chip drawer210), the control unit204detects such movement as illegal movement of the package302or the gaming chip3and may record the movement or output an alarm. In the storage box200, as described above, the housing location of the gaming chips3and that of the packages302are physically separated, but are only separated inside the storage box200and are not apart on the order of meters. Thus, it is necessary to avoid interference between the card RFID system600and the chip RFID system700. In the present embodiment, therefore, different frequencies are adopted for the card RFID system600and the chip RFID system700. A specific example is as follows: In the present embodiment, the electromagnetic induction type is adopted for the chip RFID system700and its frequency band used is the HF band (MODE3). The HF band (MODE3) is a short-wave band of 13.56 MHz. The chip antenna702is formed in a coil shape. Also, the RFID tag125contained in the gaming chip3is provided with a coil-shaped antenna. The antenna of the gaming chip3transmits/receives radio waves of the HF band to/from the chip antenna702and also obtains operating power of the RFID tag125by receiving radio waves of the HF band from the chip antenna702. The HF band has a short communication range and directivity and thus, the area to be read can be limited to a predetermined range and reading of the gaming chip3in a position that should not be read can intentionally be prevented. When the RFID tag125of the gaming chip3housed in the chip tray17is read, the RFID tag125of the gaming chip3on the game table21can be prevented from being read. Further, when the gaming chip3housed in the chip tray17is read, each column can be read while avoiding interference between columns by dividing the antenna for each column. The gaming chips3are used and managed by being stacked and by using the HF band, reading can be done even if the gaming chips3are stacked and a plurality of the RFID tags125are congested. On the other hand, the radio wave type is adopted for the card RFID system600. Its frequency band is the UHF band and ultra-high frequencies in the 900 MHz band are used. The card antenna602radiates radio waves to space. The RFID tag306is also provided with an antenna and radio waves radiated to space are received by this antenna. The UHF band (ultra-high frequency) has higher frequencies than the HF band (short-wave band) and thus, the wavelength becomes shorter, which is advantageous for miniaturization of the antenna. In addition, the UHF band generally has a longer communication range than the HF band. 
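The registration check and the movement plausibility rules described in the preceding paragraph (a warning when a read ID matches none of the stored package IDs or chip IDs, and detection of illegal movement when two or more packages 302 decrease at a time or the number of gaming chips 3 is not a multiple of 100) may be summarized in the following hypothetical sketch; the function names and return values are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical sketch of two plausibility checks described above: a warning
# when a read ID is not registered, and detection of suspicious movement when
# two or more packages disappear at once or the chip count is not a multiple
# of 100 (a full chip case holds 100 chips). Thresholds follow the text;
# everything else is an illustrative assumption.
def check_read_id(read_id, registered_ids):
    if read_id not in registered_ids:
        return f"WARNING: unknown ID {read_id!r} read"  # e.g. turn on a warning lamp
    return None

def check_movement(packages_removed_this_interval, chips_in_drawer):
    alerts = []
    if packages_removed_this_interval >= 2:   # packages are normally fetched one at a time
        alerts.append("ALERT: two or more packages removed at once")
    if chips_in_drawer % 100 != 0:            # chips are stored in full cases of 100
        alerts.append("ALERT: chip count is not a multiple of 100")
    return alerts

print(check_read_id("PKG-999", {"PKG-001", "PKG-002"}))
print(check_movement(packages_removed_this_interval=2, chips_in_drawer=950))
```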
In the present embodiment, as shown inFIG.12, a patch antenna is used as the card antenna602of the card RFID system600using the UHF band and a dipole antenna is used for the RFID tag306of the package302. The RFID tag used in the UHF band is generally small and its memory capacity is small and so can be manufactured at low cost. The package302is disposed of after playing cards are used together with playing cards and thus, being at low cost is advantageous. Because the communication range of the UHF band is long, even if the packages302are put in a carton or further stacked on a pallet, the packages302contained in such a carton or pallet can be read together. FIG.14is an explanatory diagram showing movement of the cards6in a management system using the storage box according to the third embodiment of the present invention. The playing cards6are housed and stored in the package302in a card room. The package302includes the playing cards6(shuffled playing cards301S) of eight decks aligned in random order by being shuffled. The storage box200is provided for each of the game tables21. When the stock of the package302in the storage box200gets short, the storage box200is replenished with the package302from the card room. In such a case, the storage box200may be replenished with a pallet of a plurality of the packages302(for example, nine packages) from the card room. The card shooter apparatus25is installed on the game table21. Also, the game table21is provided with a disposal port28into which the playing card6to be disposed of is inserted. In a card game (baccarat in the present embodiment), the playing cards6are pulled out one by one from the card shooter apparatus25by the dealer and placed on the game table21. When one game ends, playing cards6aon the game table21used for the game are disposed of through the disposal port28. When a cut card is drawn from the card shooter apparatus25, playing cards6bremaining in the card shooter apparatus25are disposed of through the disposal port28. The playing cards6disposed of through the disposal port28are transported to the disposal location. According to the present embodiment, as described above, the card RFID system600and the chip RFID system700use mutually different frequencies to prevent interference of radio waves with each other and read errors due to interference of package ID attached to the package302and chip ID attached to the gaming chip3are minimized. To reliably prevent interference, a shielding means (for example, a shielding plate) that blocks radio waves may be provided between chip drawer210and the card drawer220. In the above embodiments, card ID that uniquely identifies the package302of shuffled playing cards is attached to the package302, but as a modification, in place thereof or in addition thereto, the card RFID system600may be constructed by causing each of the playing cards6to contain the RFID tag. In such a case, card ID that uniquely identifies each of the playing cards6is stored in the RFID tag contained in each of the playing cards6. In the above embodiments, chip ID that uniquely identifies the gaming chip3is attached to each of the gaming chips3, but as a modification, in place thereof or in addition thereto, the chip RFID system700may be constructed by attaching the RFID tag to the chip case100. In such a case, chip case ID that uniquely identifies the chip case is stored in the RFID tag attached to each of the chip cases100. 
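Because the chip RFID system 700 and the card RFID system 600 are described above as operating with different coupling types and in different frequency bands (electromagnetic induction in the HF band at 13.56 MHz for chips, and the radio wave type in the UHF band around 900 MHz for packages), the separation can be made explicit in a small configuration sketch; the class and the interference guard below are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical configuration sketch of the frequency separation described
# above. The dataclass and the guard are illustrative only; the numeric values
# follow the text (13.56 MHz HF for chips, ~900 MHz UHF for card packages).
from dataclasses import dataclass

@dataclass(frozen=True)
class RfidSubsystem:
    name: str
    coupling: str          # "electromagnetic induction" or "radio wave"
    frequency_mhz: float

chip_system = RfidSubsystem("chip RFID system 700", "electromagnetic induction", 13.56)
card_system = RfidSubsystem("card RFID system 600", "radio wave", 900.0)

# Interference guard: the two subsystems must operate in different bands.
assert chip_system.frequency_mhz != card_system.frequency_mhz, \
    "chip and card RFID systems must use different frequencies"
print(chip_system, card_system, sep="\n")
```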
Also in these modifications, mutual interference can be prevented by adopting different frequencies of radio waves used by the card RFID system600and the chip RFID system700. Further, in the above embodiments and their modifications, the RFID tag is caused to store code information that uniquely identifies an item (the package302, the gaming chip3and the like) to which the RFID tag is attached, but information stored in the RFID tag may be other information. For example, the RFID tag may be caused to store information indicating the type of an item to which the RFID tag is attached (for example, information indicating a package to the RFID tag306attached to the package302and information indicating a gaming chip to the RFID tag125contained in the gaming chip3). Also in this case, the numbers of the packages302and the playing cards6can be grasped by the card RFID system600and the numbers of the gaming chips3and the chip cases100can be grasped by the chip RFID system700. Also in the above embodiments, frequencies of the UHF band are used for the card RFID system600and frequencies of the HF band are used for the chip RFID system700, but as long as frequency bands used for the card RFID system600and the chip RFID system700are different, frequencies of radio waves used for the card RFID system600and the chip RFID system700are not limited to the above example. Also in the above embodiments, the card RFID system600adopts the radio wave type and the chip RFID system700adopts the electromagnetic induction type, but the types of the card RFID system600and the chip RFID system700are not limited to the above types and appropriate types may be adopted in accordance with the frequency band of radio waves to be used and other factors. To solve the above conventional problems, a system that manages packages of shuffled playing cards and gaming chips according to the present invention includes shuffled playing cards having playing cards constituting a predetermined number of decks shuffled in random order and integrally constituted individually as one container or package with a unique playing card ID code attached to the container or package, a chip case housing gaming chips having a chip ID code and to which a case ID code is attached, a game table on which a game is played using the shuffled playing cards and the gaming chips, a storage box installed beside the game table to store a plurality of the shuffled playing cards carried from a card room and inserted into a card shooter apparatus on the game table and also to store a plurality of the chip cases housing the gaming chips used on the game table and including an opening/closing mechanism enabling taking out of the shuffled playing cards and the chip tray, and a control apparatus to manage the shuffled playing cards and the gaming chips, the storage box includes one or a plurality of card readers that reads playing card ID codes of all stored shuffled playing cards and also one or a plurality of chip readers that reads case ID codes of all stored chip cases, and the control apparatus has a function to output total numbers of the shuffled playing cards and the chip cases and also all the playing card ID codes and the case ID codes stored in the storage box by monitoring the playing card ID codes read by the card reader and the case ID codes read by the chip reader. 
Further, the storage box includes a lock unit configured to prevent taking out of the shuffled playing cards and the chip cases of gaming chips from the storage box. Further, the storage box may have a shuffled playing card storage box that stores the shuffled playing cards and a chip storage box that houses the gaming chips by allowing the gaming chips to be taken in or out independently. Further, the case ID code of the chip case is associated with the chip ID code of the gaming chip in the case and the control apparatus has a function to output a total amount of value of all the gaming chips housed in the storage box by acquiring all the case ID codes housed in the storage box. Further, the control apparatus has a function to grasp an increase/decrease of the chip cases housed in the storage box by periodically monitoring the case ID code of the chip case and, when the increase/decrease is grasped, to output the total amount after the increase/decrease of the value of all the gaming chips housed in the storage box. To solve the above conventional problems, a system that manages packages of shuffled playing cards and gaming chips according to the present invention may be configured as described below: A system including shuffled playing cards in which playing cards constituting a predetermined number of decks are shuffled in random order and which are integrally constituted individually as one container or package with a unique playing card ID code attached to the container or package, a chip case housing gaming chips having a chip ID code, a game table on which a game is played using the shuffled playing cards and the gaming chips, a storage box installed beside the game table to store a plurality of the shuffled playing cards carried from a card room and inserted into a card shooter apparatus on the game table and also to store a plurality of the chip cases housing the gaming chips used on the game table and including an opening/closing mechanism enabling taking out of the shuffled playing cards and the chip tray, and a control apparatus to manage the shuffled playing cards and the gaming chips, the storage box includes one or a plurality of card readers that reads playing card ID codes of all stored shuffled playing cards and also one or a plurality of chip readers that reads chip ID codes of all stored gaming chips, and the control apparatus has a function to output total numbers of the shuffled playing cards and the gaming chips and also all the playing card ID codes and the chip ID codes stored in the storage box by monitoring the playing card ID codes read by the card reader and the chip ID codes read by the chip reader. Further, the control apparatus has a function to output a total amount of value of all the gaming chips housed in the storage box by reading all the chip ID codes present in the storage box. Further, the storage box includes a lock unit configured to prevent taking out of the shuffled playing cards and the chip cases of gaming chips from the storage box. Further, the storage box has a shuffled playing card storage box that stores the shuffled playing cards and a chip storage box that houses the gaming chips by allowing the gaming chips to be taken in or out independently. 
Further, the control apparatus may have a function to grasp an increase/decrease of the gaming chips housed in the storage box by periodically monitoring the chip ID code of the gaming chip stored in the storage box and, when the increase/decrease is grasped, to output the total amount after the increase/decrease of the value of all the gaming chips housed in the storage box. To solve the above conventional problems, a storage box according to the present invention is a storage box that manages shuffled playing cards and gaming chips, wherein the storage box is carried from a card room to store a plurality of shuffled playing cards and also makes available the shuffled playing cards by individually taking out and inserting the shuffled playing cards into a card shooter apparatus on a game table and further stores a plurality of chip cases housing gaming chips used on the game table to adjust a quantity of the gaming chips on the game table using the chip cases when the gaming chips on the game table are excessive or lacking in accordance with development of a game on the game table and also to be able to store the gaming chips that are excessive before being transferred to a cage that manages the gaming chips of a casino and includes an opening/closing mechanism arranged near the game table to enable taking out of the shuffled playing cards and gaming chips when necessary, the shuffled playing cards have playing cards constituting a predetermined number of decks shuffled in random order and are integrally constituted individually as one container or package with a unique playing card ID code attached to the container or package, the gaming chip has a chip ID code and is housed in the chip case to which a case ID code is attached, the storage box includes one or a plurality of card readers that reads the playing card ID code of all the shuffled playing cards stored and also one or a plurality of chip readers that reads the case ID code of all the chip cases stored or the chip ID code of the gaming chips in the chip cases, and a control apparatus has a function to output total numbers of the shuffled playing cards and the chip cases stored in the storage box and also all the playing card ID codes and the case ID codes or chip ID codes stored in the storage box by monitoring the playing card ID codes read by the card reader and the case ID codes or chip ID codes read by the chip reader. Further, the storage box may include a lock unit configured to prevent taking out of the shuffled playing cards or the gaming chips from the storage box. Further, the storage box may include a shuffled playing card storage box that stores the shuffled playing cards and a chip storage box that houses the gaming chips by allowing the gaming chips to be taken in or out independently. Further, the control apparatus may have a function to output a total amount of value of all the gaming chips housed in the storage box by reading all the chip ID codes present in the storage box. Further, the case ID code of the chip case may be associated with the chip ID code of the gaming chip in the case and the control apparatus may have a function to output a total amount of value of all the gaming chips housed in the storage box based on the case ID code by acquiring all the case ID codes housed in the storage box. 
To solve the above conventional problems, a system that manages gaming chips according to the present invention includes a chip case that houses gaming chips having a chip ID code, a storage box that stores a plurality of the chip cases housing the gaming chips used on a game table, adjusts a quantity of the gaming chips on a chip float of the game table using the chip cases when the gaming chips placed on the chip float of the game table are excessive or lacking in accordance with development of a game on the game table and also is able to store the gaming chips that are excessive before being transferred to a cage that manages the gaming chips of a casino and includes an opening/closing mechanism arranged beside the game table to enable taking out of shuffled playing cards and the chip tray when necessary, the game table on which a game is played using the gaming chips, and a control apparatus to manage the gaming chips, wherein the storage box includes one or a plurality of chip readers that reads chip ID codes of all stored gaming chips, and the control apparatus has a function to output a total number of the gaming chips stored in the storage box and all the chip ID codes stored in the storage box by monitoring the chip ID code read by the chip reader. Further, the control apparatus may have a function to output a total amount of value of all the gaming chips housed in the storage box by reading all the chip ID codes present in the storage box. Further, the storage box may include a lock unit configured to prevent taking out of the chip cases of gaming chips from the storage box. Further, the control apparatus may have a function to grasp an increase/decrease of the gaming chips housed in the storage box by periodically monitoring the chip ID code of the gaming chip stored in the storage box and, when the increase/decrease is grasped, to output the total amount after the increase/decrease of the value of all the gaming chips housed in the storage box. Further, the control apparatus may have a function to output whether an increase/decrease amount or increase/decrease value of the gaming chips placed on the chip float of the game table or an increase/decrease amount or increase/decrease value of the gaming chips placed in the cage that manages the gaming chips of a casino and an increase/decrease amount or increase/decrease value of the gaming chips housed in the storage box match. Further, a case ID code may be attached to the chip case and associated with the chip ID code of the gaming chip in the case.
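The periodic monitoring and increase/decrease reporting described above can be illustrated with a small sketch. This is not part of the disclosed system; the chip values and function names are hypothetical, and the reconciliation check simply assumes that chips leaving the storage box should be matched by an opposite change on the chip float or in the cage.

# Illustrative sketch only: periodic monitoring of chip ID codes in the
# storage box and reconciliation against the chip float and cage.
# Names and values are hypothetical.

CHIP_VALUES = {"CHIP-1": 100, "CHIP-2": 100, "CHIP-3": 500, "CHIP-4": 25}

def value_of(chip_ids):
    return sum(CHIP_VALUES.get(c, 0) for c in chip_ids)

def storage_box_delta(previous_read, current_read):
    """Compare two successive reads of the chip IDs stored in the box and
    return the increase/decrease in chip count and in total value."""
    prev, curr = set(previous_read), set(current_read)
    added, removed = curr - prev, prev - curr
    return {
        "count_change": len(curr) - len(prev),
        "value_change": value_of(added) - value_of(removed),
        "new_total_value": value_of(curr),
    }

def deltas_match(box_value_change, float_value_change, cage_value_change=0):
    """Check that chips leaving or entering the storage box are accounted for
    by an opposite change on the chip float or in the cage."""
    return box_value_change + float_value_change + cage_value_change == 0

if __name__ == "__main__":
    before = ["CHIP-1", "CHIP-2", "CHIP-3"]
    after = ["CHIP-1", "CHIP-3"]                  # CHIP-2 moved to the chip float
    d = storage_box_delta(before, after)
    print(d)                                      # value_change == -100
    print(deltas_match(d["value_change"], +100))  # True: float gained 100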
50,786
11857868
In one or more implementations, not all of the depicted components in each figure may be required, and one or more implementations may include additional components not shown in a figure. Variations in the arrangement and type of the components may be made without departing from the scope of the subject disclosure. Additional components, different components, or fewer components may be utilized within the scope of the subject disclosure. DETAILED DESCRIPTION In the following detailed description, numerous specific details are set forth to provide a full understanding of the present disclosure. It will be apparent, however, to one ordinarily skilled in the art that the embodiments of the present disclosure may be practiced without some of these specific details. In other instances, well-known structures and techniques have not been shown in detail so as not to obscure the disclosure. Controllers may be utilized by users to interact with software. The controllers may translate the tactile input from the user into commands for the software. Typically, the default controller configuration for software focuses on ease of learning, and generally is not optimized for skilled performance. However, users are not all alike, and the default controller settings may not be optimized for each user. Although controller settings may be adjusted, they often include myriad customization options, and are therefore too cumbersome for users to customize on their own. As a result, the user experience in interacting with the software may become limited, and the user may eventually quit interactions with the software altogether. Furthermore, controller assignments are no longer just straight forward button assignments and preferred layouts. With multiple analog inputs (e.g., triggers and thumb sticks) as well as multiple game/player states that exist (e.g., running, jumping, flying, driving, aiming, etc.), input mapping has become highly customizable and highly personalized. A user's controller layout (including analog sensitivities) and game settings (turning on/off certain features) can be a competitive advantage. In most competitive/e-sports style games, the default controls are almost never used by high level players. The defaults are designed to be simple and easy to pick up but usually have major disadvantages. For example, in a shooter with the jump button assigned to the face of the controller, a player cannot aim while jumping because the user's right thumb is needed for both. Additionally, joystick/thumb stick sensitivity almost always defaults quite low for ease of control, but a high sensitivity would allow players to turn around faster when approached from behind, for example. Aspects of the present disclosure address these issues by providing for systems and methods for automated controller configuration recommendations. In an aspect, a machine learning system may account for how users are interfacing with software (e.g., a simulation, a video game, a developer tool, etc.) through a controller (e.g., analog/digital controllers including a gamepad, keyboard and mouse, a control surface, a handicapped accessible controller, steering wheel, flight stick, pedals, etc.). The system may provide recommendations for configuration settings of the controller to aid each user's respective tendencies. Additionally, configuration profiles (e.g., user profiles) can be shared on a social media platform and updated over time. This allows for the users to follow each other through their profiles. 
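A minimal sketch of a shareable controller configuration profile is given below purely for illustration; it is not the disclosed implementation, and the field names, the in-memory feed, and the example settings are hypothetical.

# Illustrative sketch only: a minimal controller configuration profile that
# could be shared and updated over time. Field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ControllerProfile:
    user_id: str
    button_map: dict = field(default_factory=dict)     # action -> physical input
    sensitivities: dict = field(default_factory=dict)  # axis -> sensitivity value
    followers: set = field(default_factory=set)

    def publish_update(self, changes, feed):
        """Apply changes locally and push them to followers' feeds."""
        self.button_map.update(changes.get("button_map", {}))
        self.sensitivities.update(changes.get("sensitivities", {}))
        for follower in self.followers:
            feed.setdefault(follower, []).append((self.user_id, changes))

if __name__ == "__main__":
    feed = {}
    streamer = ControllerProfile("streamer42", {"jump": "face_button_A"}, {"right_stick": 5})
    streamer.followers.add("casual_player")
    streamer.publish_update({"button_map": {"jump": "right_bumper"}}, feed)
    print(feed["casual_player"])  # notification of the remapped jump button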
For example, a user may follow their friends and/or favorite streamers, etc., and may receive notifications of controller configuration changes to those profiles. The user may then also incorporate and merge down the controller configuration changes to their own profile. The system may also account for configuration settings that are used by the most skilled users, and then makes a comparison to users who have similar tendencies to form recommendations. The system may further query the user for user approval for changes. The system may also be configured for automatic/dynamic adjustments, if desired by the user. The disclosed system addresses a problem in traditional controllers tied to computer technology, namely, the technical problem of adjusting controller settings to fit each user's tendencies. The disclosed system solves this technical problem by providing a solution also rooted in computer technology, namely, by providing for automated controller configuration recommendations. FIG.1illustrates an exemplary controller100, according to certain aspects of the present disclosure. The controller100may include a directional pad102, buttons104, a left joystick114, a right joystick116, a left bumper106, a right bumper108, a left trigger110, and a right trigger112. The controller100may also include a trackpad118and additional buttons (e.g., additional button120). In an implementation, the left joystick114and/or the right joystick116may be configured to be depressed downward to provide additional avenues of input from the joysticks114,116. According to aspects, the directional pad102may include buttons corresponding to up, down, left, and right. According to aspects, the buttons104may include at least one button or more. It is understood that although four buttons104are illustrated, more or less buttons104may be included without departing from the scope of the disclosure. According to aspects, the directional pad102, left joystick114, left bumper106, and left trigger110may be controlled by a user's left hand. For example, the user's left thumb may be used to control the left joystick114or the directional pad102, and the user's left index finger may be used to control the left bumper106and/or the left trigger110. According to aspects, the right joystick116, the buttons104, the right bumper108, and the right trigger112may be controlled by a user's right hand. For example, the user's right thumb may be used to control the right joystick116or the buttons104, and the user's right index finger may be used to control the right bumper108and/or the right trigger112. As described above, there are limitations to a user's ability to interact with the controller100. For example, the user's left thumb may be used to control either the left joystick114or the directional pad102, but not both at the same time. Similarly, the user's right thumb may be used to control either the right joystick116or the buttons104, but not both at the same time. Therefore, in certain scenarios, it would be extremely difficult for the user to direct their intent through the controller100because of these shortcomings in the controller design and/or button mapping. Additionally, the sensitivities of the joysticks114,116may not be optimized for the user, which may cause unintended errors by the user. According to aspects, the controller100may be utilized to interact with software. For example, the software may include video games, a flight simulator, a driving simulator, etc. 
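The finger-conflict limitation described above (for example, being unable to aim and jump at the same time when both are assigned to the right thumb) can be illustrated with a short sketch; the input-to-finger table and action names below are hypothetical and not part of the disclosure.

# Illustrative sketch only: detecting finger conflicts in a button mapping,
# such as being unable to aim (right thumb on the right joystick) while
# jumping (face button also under the right thumb). Tables are hypothetical.

INPUT_TO_FINGER = {
    "right_joystick": "right_thumb",
    "face_button_A": "right_thumb",
    "right_bumper": "right_index",
    "right_trigger": "right_index",
}

def finger_conflicts(button_map, simultaneous_pairs):
    """Return action pairs whose mapped inputs require the same finger."""
    conflicts = []
    for a, b in simultaneous_pairs:
        if INPUT_TO_FINGER.get(button_map[a]) == INPUT_TO_FINGER.get(button_map[b]):
            conflicts.append((a, b))
    return conflicts

if __name__ == "__main__":
    default_map = {"aim": "right_joystick", "jump": "face_button_A"}
    remapped = {"aim": "right_joystick", "jump": "right_bumper"}
    pairs = [("aim", "jump")]
    print(finger_conflicts(default_map, pairs))  # [('aim', 'jump')]
    print(finger_conflicts(remapped, pairs))     # []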
It is understood that the software includes real world devices as well, including, but not limited to, car software, audio recording software, production software, etc. It is understood that the illustrated controller100is exemplary only, and other controllers may be included without departing from the scope of the disclosure. For example, the controller may be an analog/digital controller, and may include a gamepad, a footpad, a control surface, a navigation controller (e.g., for navigating a car, airplane, space ship, etc.), a handicapped accessible controller, steering wheel, flight stick, pedals, etc. It is further understood that each controller type may include similar shortcomings to those described above for the controller100. It is understood that controller inputs may be assigned to single and/or multiple functions based on contexts of the software that is running (e.g., a user that gets into a vehicle or opens a menu via object interactions would have different inputs/functions). It is further understood that the controllers may be digital devices (e.g., controller outputs may be digital outputs), but some of the inputs may be analog in nature (e.g., input may be an analog input from a human user). FIG.2illustrates an exemplary graphical user interface (GUI)200for automatically adjusting controller settings (e.g., settings of a controller), according to certain aspects of the present disclosure. The GUI200may include a listing of controller settings202and a log204of implemented adjustments206(e.g., recommendations). In an implementation, the GUI200may include an option for a user to enable or disable automated adjustments to the settings of the controller. According to aspects, a machine learning system may account for how users are interfacing with software (e.g., a simulation, a video game, a developer tool, etc.) through a controller (e.g., an analog/digital controller including a gamepad, keyboard and mouse, a control surface, a handicapped accessible controller, steering wheel, flight stick, pedals, etc.). The system may provide recommendations206for configuration settings of the controller to aid each user's respective tendencies. In an implementation, machine learning may be utilized to build a model that maps player skill level/play style built from all gathered player telemetry and settings to recommend changes to their control setup to improve certain aspects of how they play. For example, user performance data may be gathered through telemetry of the software. Additionally, configuration profiles (e.g., user profiles) can be shared on a social media platform and updated over time. The system may also account for configuration settings that are used by the most skilled users, and then makes a comparison to users who have similar tendencies to form recommendations. For example, in a soccer video game, a skilled user may be associated with a particular team composition, and the skilled user may also have remapped controller assignments (e.g., reassigned buttons) that make it easier to play with that team more aggressively. As such, any player who has a similar team composition that also plays aggressively may receive a recommendation to similarly remap their controller assignments according to the skilled user's profile in order to improve their playstyle. As another example, in a first-person shooter game, it may be determined that a player is often shot from behind. As a result, the system may recommend increasing turning sensitivity.
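Purely as an illustration of turning gathered telemetry into recommendations206, the following sketch encodes two of the examples above as simple rules; the thresholds, field names, and settings are hypothetical, and a deployed system could instead use the machine learning model described herein.

# Illustrative sketch only: turning gathered telemetry into recommendations,
# e.g., a player who is often shot from behind may be advised to raise
# turning sensitivity. Thresholds and field names are hypothetical.

def recommend_from_telemetry(telemetry, settings):
    """Return a list of (setting, suggested_value, reason) tuples."""
    recs = []
    if telemetry.get("shot_from_behind_rate", 0) > 0.30:
        recs.append(("turn_sensitivity",
                     min(settings["turn_sensitivity"] + 1, 10),
                     "frequently shot from behind"))
    if telemetry.get("aim_error_no_recoil", 0) > 0.25:
        recs.append(("ads_sensitivity",
                     max(settings["ads_sensitivity"] - 1, 1),
                     "sporadic aim while aiming down sights"))
    return recs

if __name__ == "__main__":
    telemetry = {"shot_from_behind_rate": 0.42, "aim_error_no_recoil": 0.31}
    settings = {"turn_sensitivity": 5, "ads_sensitivity": 6}
    for rec in recommend_from_telemetry(telemetry, settings):
        print(rec)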
Furthermore, if the player's aim with recoil removed is often sporadic and/or off target, the system may recommend reduced sensitivity while aiming down iron sights. The system may also determine that, for the current ranking level, most users changed their button mappings for a jumping action from a face button (e.g., button104) to a bumper (e.g., right bumper108), and recommend that the player also consider a similar change to button mappings. It is understood that the described techniques apply equally to configuring analog inputs (e.g., based on sensitivity curves, etc.) and to the above-described layout configuration, which may also include digital inputs (e.g., remapping buttons/inputs). The system may also be utilized to account for in-game changes that impact configuration settings. For example, in a first-person shooter game, a sniper rifle may have higher recoil in a subsequent patch to the game. In order to compensate for this change, the system may recommend a change in input sensitivity. According to aspects, a configuration profile (e.g., user profile) may be generated for each user that includes the customized settings each user has for their controller. The user profile may include at least a skill level and an input tendency of the user. Each configuration profile may be shared to social media so that other users may search for and use profiles that are popular in their game/software communities. For example, these configuration profiles may be game setting specific and may naturally iterate over time so those changes can be pushed to other users' games/software automatically. Additionally, the integration with a social platform would aid in reinforcing the machine learning system so that it may stay updated regarding which configurations are most utilized and whether users are enjoying them. According to aspects, the user may be presented with a short questionnaire to aid the machine learning model in determining recommendations206to the user. For example, the user may specify a playstyle (e.g., offensive, defensive), preferred weapons, competitiveness, etc. From there, the user may begin interacting with the software (e.g., video game, simulator, etc.) and the machine learning system will make recommendations206accordingly. In an implementation, the user may be presented with each recommendation206and given the option whether to accept or deny the recommendation206. The log204may be updated to reflect each change that was made. The log204may also include a history of which recommendations were accepted or denied as well. With each choice by the user, the machine learning system may understand better how to adjust the controller settings to fit the tendencies of that specific user. According to aspects, the recommendations206may be incremental, so that the user is able to adjust to the new controller settings. For example, if the user is accustomed to a joystick sensitivity setting of 5, and the sensitivity is suddenly increased to 10, then the user will likely not be able to interact as efficiently with the software because the change is too large. Instead, the system may increase the sensitivity incrementally to allow the user time to become accustomed to the new settings. Eventually, the user may be able to have a sensitivity of 10. In an implementation, the adjustments may be implemented automatically through continuous monitoring so that the user slowly becomes more adept with each new adjustment over time.
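The incremental adjustment described above can be sketched as follows; the step size and schedule are hypothetical and for illustration only.

# Illustrative sketch only: applying a recommended sensitivity change
# incrementally so the user can adapt (e.g., 5 -> 10 in small steps rather
# than all at once). Step size and schedule are hypothetical.

def incremental_steps(current, target, step=1):
    """Yield intermediate sensitivity values from current toward target."""
    direction = 1 if target > current else -1
    value = current
    while value != target:
        value = min(value + step, target) if direction > 0 else max(value - step, target)
        yield value

if __name__ == "__main__":
    # Each value would be applied only after the user has had time to adjust,
    # e.g., one step per play session under continuous monitoring.
    print(list(incremental_steps(5, 10)))  # [6, 7, 8, 9, 10]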
According to aspects, the recommendations206may be based on user performance. For example, player statistics, hit percentages, defensive tendencies, offensive tendencies, player ratios, etc., may be utilized for recommendations206. In an implementation, these factors that influence the reconfiguration of the controller settings may be displayed to the user through the GUI200. In an implementation, the system may be configured by the user to query the user for user approval for recommendations206. The system may also be configured to automatically/dynamically implement the recommendations206, if desired by the user. For example, the user may toggle an ON/OFF switch through the GUI200. The ON/OFF switch may control whether to query the user for approval (e.g., switch is OFF), or to automatically/dynamically implement the recommendations206(e.g., switch is ON). According to aspects, the recommendations206generated by the machine learning system may be utilized to improve accessibility to users with disabilities as well. For example, it may help users discover more suitable accessibility settings. Furthermore, it is understood that the recommendations206may be applied to any type of analog/digital controller, including, but not limited to, a gamepad, a footpad, a control surface, a navigation controller (e.g., for navigating a car, airplane, space ship, etc.), a handicapped accessible controller, steering wheel, flight stick, pedals, etc. In this way, the recommendations206may be applied in various contexts, including, but not limited to, video games, simulators, etc., for both disabled and non-disabled users. According to aspects, the machine learning system may include algorithms, including but not limited to, machine learning algorithms, if/then telemetry engines, etc. As described herein, some non-limiting examples of machine learning algorithms that can be used to generate the recommendations206may include supervised and non-supervised machine learning algorithms, including regression algorithms (such as, for example, Ordinary Least Squares Regression), instance-based algorithms (such as, for example, Learning Vector Quantization), decision tree algorithms (such as, for example, classification and regression trees), Bayesian algorithms (such as, for example, Naive Bayes), clustering algorithms (such as, for example, k-means clustering), association rule learning algorithms (such as, for example, Apriori algorithms), artificial neural network algorithms (such as, for example, Perceptron), deep learning algorithms (such as, for example, Deep Boltzmann Machine), dimensionality reduction algorithms (such as, for example, Principal Component Analysis), ensemble algorithms (such as, for example, Stacked Generalization), and/or other machine learning algorithms. FIG.3illustrates a system300configured for adjusting controller settings (e.g., settings of a controller), in accordance with one or more implementations. In some implementations, system300may include one or more computing platforms302. Computing platform(s)302may be configured to communicate with one or more remote platforms304according to a client/server architecture, a peer-to-peer architecture, and/or other architectures. Remote platform(s)304may be configured to communicate with other remote platforms via computing platform(s)302and/or according to a client/server architecture, a peer-to-peer architecture, and/or other architectures. Users may access system300via remote platform(s)304. 
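One simple, illustrative way to compare a user with more skilled users who have similar tendencies is a nearest-neighbor lookup over play-style features, as sketched below; the feature vectors, distance measure, and settings are hypothetical, and this is not the disclosed machine learning system.

# Illustrative sketch only: comparing a user's tendencies with those of more
# skilled users and borrowing settings from the closest match. The feature
# vectors, distance measure, and settings shown are hypothetical.
import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recommend_from_similar_skilled(user, skilled_users):
    """Find the skilled user with the most similar play-style features and
    return that user's controller settings as a recommendation."""
    closest = min(skilled_users, key=lambda s: distance(user["features"], s["features"]))
    return closest["settings"]

if __name__ == "__main__":
    user = {"features": [0.8, 0.2, 0.5]}   # e.g., aggression, accuracy, mobility
    skilled = [
        {"features": [0.9, 0.3, 0.6], "settings": {"jump": "right_bumper", "turn_sensitivity": 8}},
        {"features": [0.1, 0.9, 0.2], "settings": {"jump": "face_button_A", "turn_sensitivity": 4}},
    ]
    print(recommend_from_similar_skilled(user, skilled))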
Computing platform(s)302may be configured by machine-readable instructions306. Machine-readable instructions306may include one or more instruction modules. The instruction modules may include computer program modules. The instruction modules may include one or more of controller receiving module308, user profile determination module310, adjustment providing module312, approval receiving module314, setting adjusting module316, user performance data gathering module318, user profile comparing module320, setting sharing module322, and/or other instruction modules. Controller receiving module308may be configured to receive, through a controller associated with a user, controller input for software. By way of non-limiting example, the software may include at least one of a flight simulator, a driving simulator, or a video game. User profile determination module310may be configured to determine, based on the controller input, a user profile for the user including at least a skill level and an input tendency of the user. Adjustment providing module312may be configured to provide suggested adjustments to settings of the controller intended to improve performance of the user in relation to the software. The suggested adjustments may be incremental. The suggested adjustments may be based at least in part on a machine learning model. By way of non-limiting example, the machine learning model may include at least one of a regression algorithm, an instance-based algorithm, a decision tree algorithm, a Bayesian algorithm, a clustering algorithm, an association rule learning algorithm, an artificial neural network algorithm, a deep learning algorithm, a dimensionality reduction algorithm, or an ensemble algorithm. The settings of the controller may include at least one of controller sensitivity or controller assignments. Adjustment providing module312may be configured to provide the suggested adjustments based on the comparing. Approval receiving module314may be configured to receive approval of the user to implement the suggested adjustments to the settings of the controller. Setting adjusting module316may be configured to adjust the settings of the controller based on the approval of the user. User performance data gathering module318may be configured to gather user performance data through telemetry of the software. User profile comparing module320may be configured to compare the user profile of the user with other user profiles of other users. Setting sharing module322may be configured to share the settings of the controller for the user with other users. According to aspects, the described systems need not be linked or connected to an external network in order to function. For example, a car may have a trained model installed that does not communicate with any external network in order to function. In some implementations, by way of non-limiting example, the controller may be an analog controller and/or digital controller, and may include at least one of a gamepad, a footpad, a control surface, a navigation controller, a handicapped accessible controller, steering wheel, flight stick, or pedals. In some implementations, by way of non-limiting example, the controller may include at least one of a light sensor, an audio sensor, or a tactile sensor. The controller may also include embedded computing systems (e.g., such as for a car). In some implementations, computing platform(s)302, remote platform(s)304, and/or external resources324may be operatively linked via one or more electronic communication links.
For example, such electronic communication links may be established, at least in part, via a network such as the Internet and/or other networks. It will be appreciated that this is not intended to be limiting, and that the scope of this disclosure includes implementations in which computing platform(s)302, remote platform(s)304, and/or external resources324may be operatively linked via some other communication media. A given remote platform304may include one or more processors configured to execute computer program modules. The computer program modules may be configured to enable an expert or user associated with the given remote platform304to interface with system300and/or external resources324, and/or provide other functionality attributed herein to remote platform(s)304. By way of non-limiting example, a given remote platform304and/or a given computing platform302may include one or more of a server, a desktop computer, a laptop computer, a handheld computer, a tablet computing platform, a NetBook, a Smartphone, a gaming console, and/or other computing platforms. External resources324may include sources of information outside of system300, external entities participating with system300, and/or other resources. In some implementations, some or all of the functionality attributed herein to external resources324may be provided by resources included in system300. Computing platform(s)302may include electronic storage326, one or more processors328, and/or other components. Computing platform(s)302may include communication lines, or ports to enable the exchange of information with a network and/or other computing platforms. Illustration of computing platform(s)302inFIG.3is not intended to be limiting. Computing platform(s)302may include a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to computing platform(s)302. For example, computing platform(s)302may be implemented by a cloud of computing platforms operating together as computing platform(s)302. Electronic storage326may comprise non-transitory storage media that electronically stores information. The electronic storage media of electronic storage326may include one or both of system storage that is provided integrally (i.e., substantially non-removable) with computing platform(s)302and/or removable storage that is removably connectable to computing platform(s)302via, for example, a port (e.g., a USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.). Electronic storage326may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media. Electronic storage326may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources). Electronic storage326may store software algorithms, information determined by processor(s)328, information received from computing platform(s)302, information received from remote platform(s)304, and/or other information that enables computing platform(s)302to function as described herein. Processor(s)328may be configured to provide information processing capabilities in computing platform(s)302. 
As such, processor(s)328may include one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information. Although processor(s)328is shown inFIG.3as a single entity, this is for illustrative purposes only. In some implementations, processor(s)328may include a plurality of processing units. These processing units may be physically located within the same device, or processor(s)328may represent processing functionality of a plurality of devices operating in coordination. Processor(s)328may be configured to execute modules308,310,312,314,316,318,320, and/or322, and/or other modules. Processor(s)328may be configured to execute modules308,310,312,314,316,318,320, and/or322, and/or other modules by software; hardware; firmware; some combination of software, hardware, and/or firmware; and/or other mechanisms for configuring processing capabilities on processor(s)328. As used herein, the term “module” may refer to any component or set of components that perform the functionality attributed to the module. This may include one or more physical processors during execution of processor readable instructions, the processor readable instructions, circuitry, hardware, storage media, or any other components. It should be appreciated that although modules308,310,312,314,316,318,320, and/or322are illustrated inFIG.3as being implemented within a single processing unit, in implementations in which processor(s)328includes multiple processing units, one or more of modules308,310,312,314,316,318,320, and/or322may be implemented remotely from the other modules. The description of the functionality provided by the different modules308,310,312,314,316,318,320, and/or322described below is for illustrative purposes, and is not intended to be limiting, as any of modules308,310,312,314,316,318,320, and/or322may provide more or less functionality than is described. For example, one or more of modules308,310,312,314,316,318,320, and/or322may be eliminated, and some or all of its functionality may be provided by other ones of modules308,310,312,314,316,318,320, and/or322. As another example, processor(s)328may be configured to execute one or more additional modules that may perform some or all of the functionality attributed below to one of modules308,310,312,314,316,318,320, and/or322. The techniques described herein may be implemented as method(s) that are performed by physical computing device(s); as one or more non-transitory computer-readable storage media storing instructions which, when executed by computing device(s), cause performance of the method(s); or, as physical computing device(s) that are specially configured with a combination of hardware and software that causes performance of the method(s). FIG.4illustrates an example flow diagram (e.g., process400) for adjusting controller settings, according to certain aspects of the disclosure. For explanatory purposes, the example process400is described herein with reference toFIGS.1-3. Further for explanatory purposes, the steps of the example process400are described herein as occurring in serial, or linearly. However, multiple instances of the example process400may occur in parallel. For purposes of explanation of the subject technology, the process400will be discussed in reference toFIGS.1-3. At step402, controller input for software is received through a controller associated with a user. 
At step404a user profile for the user is determined based on the controller input from the user. The user profile may include at least a skill level and an input tendency of the user. At step406, suggested adjustments to the controller settings are provided, which are intended to improve performance of the user in relation to the software. The controller settings may include at least one of controller sensitivity or controller button assignments. At step408, approval is received of the user to implement the suggested adjustments to the controller settings. At step410, the controller settings are adjusted based on the approval of the user. For example, as described above in relation toFIGS.1-3, at step402, controller input is received (e.g., by controller receiving module308) for software through a controller100associated with a user. At step404a user profile for the user is determined (e.g., by user profile determination module310) based on the controller input from the user. The user profile may include at least a skill level and an input tendency of the user. At step406, suggested adjustments206to settings of the controller are provided (e.g., through GUI200), which are intended to improve performance of the user in relation to the software. The settings202of the controller100may include at least one of controller sensitivity or controller button assignments (e.g., controller settings202). At step408, approval is received of the user to implement the suggested adjustments206to the settings202of the controller100. At step410, the settings202of the controller100are adjusted based on the approval of the user. According to an aspect, the controller may be an analog controller and/or digital controller (e.g., analog/digital controller) that includes at least one of a gamepad, a footpad, a control surface, a navigation controller, a handicapped accessible controller, steering wheel, flight stick, or pedals. According to an aspect, the software includes at least one of a flight simulator, a driving simulator, or a video game. According to an aspect, the suggested adjustments are incremental. According to an aspect, the controller comprises at least one of a light sensor, an audio sensor, or a tactile sensor. According to an aspect, the suggested adjustments are based at least in part on an algorithm. For example, the algorithm may include a machine learning model, an if/then telemetry engine, etc. According to an aspect, the machine learning model may include at least one of a regression algorithm, an instance-based algorithm, a decision tree algorithm, a Bayesian algorithm, a clustering algorithm, an association rule learning algorithm, an artificial neural network algorithm, a deep learning algorithm, a dimensionality reduction algorithm, or an ensemble algorithm. According to an aspect the process400may further include gathering user performance data through telemetry of the software. According to an aspect the process400may further include sharing the settings of the controller for the user with other users. According to an aspect the process400may further include comparing the user profile of the user with other user profiles of other users. According to an aspect the process400may further include providing the suggested adjustments based on the comparing. FIG.5is a block diagram illustrating an exemplary computer system500with which aspects of the subject technology can be implemented. 
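For illustration only, the sketch below strings the steps of process400together, using hypothetical stand-ins for the modules described above; the controller input received at step402is passed in as an argument, and approval gating (step408) is simulated by filtering the suggested adjustments.

# Illustrative sketch only: the overall flow of steps 402-410 (receive input,
# determine profile, suggest adjustments, gate on approval, apply). The
# helper functions are hypothetical stand-ins for the modules described above.

def determine_profile(controller_input):
    # e.g., derive a skill level and input tendency from the input stream
    return {"skill": "intermediate", "tendency": "aggressive"}

def suggest_adjustments(profile):
    return [("turn_sensitivity", 7), ("jump", "right_bumper")]

def ask_user_approval(adjustments):
    # In the GUI this would be a prompt per recommendation (or skipped
    # entirely when automatic adjustments are switched on). Here the user
    # is simulated as denying the button remap.
    return [adj for adj in adjustments if adj[0] != "jump"]

def apply_adjustments(settings, approved):
    settings.update(dict(approved))
    return settings

def process_400(controller_input, settings):
    profile = determine_profile(controller_input)   # step 404
    suggested = suggest_adjustments(profile)         # step 406
    approved = ask_user_approval(suggested)          # step 408
    return apply_adjustments(settings, approved)     # step 410

if __name__ == "__main__":
    print(process_400({"events": []}, {"turn_sensitivity": 5, "jump": "face_button_A"}))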
In certain aspects, the computer system500may be implemented using hardware or a combination of software and hardware, either in a dedicated server, integrated into another entity, or distributed across multiple entities. Computer system500(e.g., server and/or client) includes a bus508or other communication mechanism for communicating information, and a processor502coupled with bus508for processing information. By way of example, the computer system500may be implemented with one or more processors502. Processor502may be a general-purpose microprocessor, a microcontroller, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a state machine, gated logic, discrete hardware components, or any other suitable entity that can perform calculations or other manipulations of information. Computer system500can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them stored in an included memory504, such as a Random Access Memory (RAM), a flash memory, a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable PROM (EPROM), registers, a hard disk, a removable disk, a CD-ROM, a DVD, or any other suitable storage device, coupled to bus508for storing information and instructions to be executed by processor502. The processor502and the memory504can be supplemented by, or incorporated in, special purpose logic circuitry. The instructions may be stored in the memory504and implemented in one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, the computer system500, and according to any method well-known to those of skill in the art, including, but not limited to, computer languages such as data-oriented languages (e.g., SQL, dBase), system languages (e.g., C, Objective-C, C++, Assembly), architectural languages (e.g., Java, .NET), and application languages (e.g., PHP, Ruby, Perl, Python). Instructions may also be implemented in computer languages such as array languages, aspect-oriented languages, assembly languages, authoring languages, command line interface languages, compiled languages, concurrent languages, curly-bracket languages, dataflow languages, data-structured languages, declarative languages, esoteric languages, extension languages, fourth-generation languages, functional languages, interactive mode languages, interpreted languages, iterative languages, list-based languages, little languages, logic-based languages, machine languages, macro languages, metaprogramming languages, multiparadigm languages, numerical analysis, non-English-based languages, object-oriented class-based languages, object-oriented prototype-based languages, off-side rule languages, procedural languages, reflective languages, rule-based languages, scripting languages, stack-based languages, synchronous languages, syntax handling languages, visual languages, wirth languages, and xml-based languages. Memory504may also be used for storing temporary variable or other intermediate information during execution of instructions to be executed by processor502. 
A computer program as discussed herein does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. Computer system500further includes a data storage device506such as a magnetic disk or optical disk, coupled to bus508for storing information and instructions. Computer system500may be coupled via input/output module510to various devices. The input/output module510can be any input/output module. Exemplary input/output modules510include data ports such as USB ports. The input/output module510is configured to connect to a communications module512. Exemplary communications modules512include networking interface cards, such as Ethernet cards and modems. In certain aspects, the input/output module510is configured to connect to a plurality of devices, such as an input device514and/or an output device516. Exemplary input devices514include a keyboard and a pointing device, e.g., a mouse or a trackball, by which a user can provide input to the computer system500. Other kinds of input devices514can be used to provide for interaction with a user as well, such as a tactile input device, visual input device, audio input device, or brain-computer interface device. For example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback, and input from the user can be received in any form, including acoustic, speech, tactile, or brain wave input. Exemplary output devices516include display devices such as an LCD (liquid crystal display) monitor, for displaying information to the user. According to one aspect of the present disclosure, the above-described gaming systems can be implemented using a computer system500in response to processor502executing one or more sequences of one or more instructions contained in memory504. Such instructions may be read into memory504from another machine-readable medium, such as data storage device506. Execution of the sequences of instructions contained in the main memory504causes processor502to perform the process steps described herein. One or more processors in a multi-processing arrangement may also be employed to execute the sequences of instructions contained in memory504. In alternative aspects, hard-wired circuitry may be used in place of or in combination with software instructions to implement various aspects of the present disclosure. Thus, aspects of the present disclosure are not limited to any specific combination of hardware circuitry and software. 
Various aspects of the subject matter described in this specification can be implemented in a computing system that includes a back end component, e.g., such as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. The communication network can include, for example, any one or more of a LAN, a WAN, the Internet, and the like. Further, the communication network can include, but is not limited to, for example, any one or more of the following network topologies, including a bus network, a star network, a ring network, a mesh network, a star-bus network, tree or hierarchical network, or the like. The communications modules can be, for example, modems or Ethernet cards. Computer system500can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. Computer system500can be, for example, and without limitation, a desktop computer, laptop computer, or tablet computer. Computer system500can also be embedded in another device, for example, and without limitation, a mobile telephone, a PDA, a mobile audio player, a Global Positioning System (GPS) receiver, a video game console, and/or a television set top box. The term “machine-readable storage medium” or “computer readable medium” as used herein refers to any medium or media that participates in providing instructions to processor502for execution. Such a medium may take many forms, including, but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media include, for example, optical or magnetic disks, such as data storage device506. Volatile media include dynamic memory, such as memory504. Transmission media include coaxial cables, copper wire, and fiber optics, including the wires that comprise bus508. Common forms of machine-readable media include, for example, floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH EPROM, any other memory chip or cartridge, or any other medium from which a computer can read. The machine-readable storage medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. As the user computing system500reads game data and provides a game, information may be read from the game data and stored in a memory device, such as the memory504. Additionally, data from servers accessed via a network, the bus508, or the data storage506may be read and loaded into the memory504.
Although data is described as being found in the memory504, it will be understood that data does not have to be stored in the memory504and may be stored in other memory accessible to the processor502or distributed among several media, such as the data storage506. As used herein, the phrase “at least one of” preceding a series of items, with the terms “and” or “or” to separate any of the items, modifies the list as a whole, rather than each member of the list (i.e., each item). The phrase “at least one of” does not require selection of at least one item; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items. By way of example, the phrases “at least one of A, B, and C” or “at least one of A, B, or C” each refer to only A, only B, or only C; any combination of A, B, and C; and/or at least one of each of A, B, and C. To the extent that the terms “include”, “have”, or the like is used in the description or the claims, such term is intended to be inclusive in a manner similar to the term “comprise” as “comprise” is interpreted when employed as a transitional word in a claim. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration”. Any embodiment described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments. A reference to an element in the singular is not intended to mean “one and only one” unless specifically stated, but rather “one or more”. All structural and functional equivalents to the elements of the various configurations described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and intended to be encompassed by the subject technology. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the above description. While this specification contains many specifics, these should not be construed as limitations on the scope of what may be claimed, but rather as descriptions of particular implementations of the subject matter. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. The subject matter of this specification has been described in terms of particular aspects, but other aspects can be implemented and are within the scope of the following claims. For example, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed to achieve desirable results. The actions recited in the claims can be performed in a different order and still achieve desirable results. 
As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the aspects described above should not be understood as requiring such separation in all aspects, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Other variations are within the scope of the following claims.
45,893
11857869
The headings provided herein are for convenience only and do not necessarily affect the scope or meaning of the claimed embodiments. Further, the drawings have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be expanded or reduced to help improve the understanding of the embodiments. Moreover, while the disclosed technology is amenable to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and are described in detail below. The intention, however, is not to limit the embodiments described. On the contrary, the embodiments are intended to cover all modifications, equivalents, and alternatives falling within the scope of the embodiments as defined by the appended claims. DETAILED DESCRIPTION Overview A handheld controller with touch or proximity detection sensors is disclosed. In an embodiment, the handheld controller is configured to be held by a user's hand and includes a main body, a handle extending from the main body, and a control button positioned on the main body or the handle. A detection sensor is on the handle and positioned to detect the presence of the finger or palm of a user's hand engaging the handle. The detection sensor can be a pressure sensor, a capacitive touch sensor, or a proximity sensor to detect the touch or spatial location of the user's fingers relative to the handle. One embodiment provides a handheld controller comprising a main body having a thumb surface, a thumbstick extending from the thumb surface, a surrounding ring portion extending from the main body, and a handle extending from the main body. The handle has a palm side and a finger side. A trigger button is positioned on the main body or handle, and a third-finger button is positioned on the finger side of the handle. A detection sensor is on the handle and positioned to detect the presence of the finger or palm of a user's hand engaging the handle and operative to output a signal corresponding to a presence of the user's hand relative to the handle. Another embodiment provides a handheld controller comprising a main body, and a handle extending from the main body, wherein the handle has a palm side and a finger side. A trigger button is positioned on the main body or handle. A first detection sensor is on the finger side of the handle and positioned to detect the presence of a first one of the user's fingers relative to the handle. A second detection sensor is on the finger side of the handle adjacent to the first detection sensor and positioned to detect the presence of a second one of the user's fingers relative to the handle. A third detection sensor is on the handle and positioned to detect the presence of a portion of the user's hand relative to the handle. General Description Various examples of the devices introduced above will now be described in further detail. The following description provides specific details for a thorough understanding and enabling description of these examples. One skilled in the relevant art will understand, however, that the techniques discussed herein may be practiced without many of these details. Likewise, one skilled in the relevant art will also understand that the technology can include many other features not described in detail herein. Additionally, some well-known structures or functions may not be shown or described in detail below so as to avoid unnecessarily obscuring the relevant description.
The terminology used below is to be interpreted in its broadest reasonable manner, even though it is being used in conjunction with a detailed description of some specific examples of the embodiments. Indeed, some terms may even be emphasized below; however, any terminology intended to be interpreted in any restricted manner will be overtly and specifically defined as such in this section. FIG.1illustrates a pair of handheld controllers100according to a representative embodiment. The pair of handheld controllers100includes a right-hand controller102and a left-hand controller104. The primary structure of the right-hand controller102and the left-hand controller104when held adjacent to each other in a similar orientation, as illustrated, are substantially symmetric with respect to each other. Both the controllers102/104are described herein with respect to the right-hand controller102, as both controllers include the same or similar features, albeit in mirror image. The right-hand controller102includes a main body106and a handle portion108extending from the main body106. In some embodiments, a surrounding ring portion110extends from the main body106. The controllers102/104can be part of a VR system10, such as the Rift™ available from Oculus™. As shown inFIG.2, the right-hand controller102includes a thumbstick112, a trigger button114and a third-finger button116. The main body106includes a thumb surface118from which the thumbstick112extends. The main body106may also include one or more buttons120and122positioned on the thumb surface118. In some embodiments, the thumb surface118is a substantially planar surface. The handle portion108extends from the main body106on a side generally opposite the trigger button114. The main body106and the handle portion108are ergonomically contoured such that a user's hand5can comfortably grasp the handheld controller102as illustrated. When the controller102is grasped, the user's thumb7(i.e., the first finger) is comfortably positionable above the main body106with the thumb7engaging on the thumbstick112. The user's second or index finger9is positioned on the trigger button114. The user's third or middle finger11operates the third-finger button116. The third-finger button116is operative to detect whether the user is grasping the handle portion108with his or her third-finger11. In some embodiments, the third-finger button116can detect various degrees of deflection corresponding to the force or pressure of a user's grip on the handle portion108. In some embodiments, the third-finger button116is active depending on the context of an associated virtual environment or game. In other embodiments, the third-finger button116is activated mechanically or by another sensor. One embodiment could include a palm sensor (e.g., analogous to a pistol grip safety or grip switch), such that when the palm sensor detects the user's hand, and the third-finger button116is released, an output signal indicates an “open-hand gesture.” In some embodiments, the handle portion108can include one or more detection sensors125positioned to detect the presence of the user's palm or a portion of a finger, indicating that the user is holding the handle portion108, indicating how the user is holding the handle portion, or how the user is moving his or her hand relative to the handle portion. 
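The open-hand gesture logic described above can be sketched in a few lines; the signal names are hypothetical and the mapping is illustrative rather than the disclosed implementation.

# Illustrative sketch only: combining a palm sensor with the third-finger
# button to distinguish a closed grip from an "open-hand gesture". The
# signal names are hypothetical.

def hand_state(palm_detected: bool, third_finger_pressed: bool) -> str:
    """Return a coarse hand state for the VR system or avatar."""
    if palm_detected and third_finger_pressed:
        return "grasping"          # hand closed around the handle
    if palm_detected and not third_finger_pressed:
        return "open_hand"         # holding the controller, fingers extended
    return "not_holding"           # controller set down or hand removed

if __name__ == "__main__":
    print(hand_state(palm_detected=True, third_finger_pressed=True))   # grasping
    print(hand_state(palm_detected=True, third_finger_pressed=False))  # open_hand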
For example, the detection sensor125can be a capacitive touch sensor on the handle portion, such as adjacent to the third-finger button116or in a position for engagement by the user's fourth or fifth finger when grasping the handle. A detection sensor125can be positioned to be engaged by a portion of the user's second finger (i.e., index finger) or third finger (i.e., middle finger) that is on the handle portion108adjacent to the trigger button114or the third-finger button116, indicating the presence of the user's fingers on the handle portion108even if the associated finger has been lifted off of the trigger button114or the third-finger button116. Detection sensors125can be provided on the handle portion at positions corresponding to all of the user's fingers grasping the handle. In one embodiment, one or more of the detection sensors125are proximity sensors configured to detect the spatial location of the user's fingers or hand relative to the handle portion108. For example, the proximity sensor125could be used to detect the presence of the user's finger and the separation distance between the respective finger and the surface of the handle portion108. The proximity sensors125can be configured to allow detection of movement of the user's fingers or other portions of the user's hand relative to the handle portion108. The detected separation distance and/or movement can be used in connection with signals, commands, or other control signals related to the hand shape or position of the user's hand or fingers relative to the handle portion108. In some embodiments, the handle portion108can include a combination of pressure sensors, capacitive touch sensors, and/or proximity sensors that provide signals to the VR system10, for example, to initiate a command or to replicate a hand configuration in a corresponding apparition or avatar. When the third-finger button116is depressed, the system registers that the user's hand is closed or grasped around the handle portion108. When the third-finger button116is not depressed, the system can indicate an open-hand gesture. The presence of a gesture can be a signal to the VR system10to initiate a command or to include the gesture in a corresponding apparition or avatar. The third-finger button116allows a user to maintain a grip on the handle portion108while still being able to provide hand grip inputs to the VR system. In another embodiment, the third button on the handle is positioned for engagement by the user's ring or fourth finger or the pinkie or fifth finger, or a combination of the third, fourth and/or fifth fingers. In some embodiments, the thumbstick112, the trigger button114, the thumb surface118, and the buttons120and122can be configured to detect other hand and finger gestures as explained in U.S. patent application Ser. No. 14/939,470, titled “METHOD AND APPARATUS FOR DETECTING HAND GESTURES WITH A HANDHELD CONTROLLER,” filed Nov. 12, 2015, and U.S. patent application Ser. No. 14/975,049, titled “HANDHELD CONTROLLER WITH ACTIVATION SENSORS,” filed Dec. 18, 2015, both of which are hereby incorporated by reference in their entireties. With reference toFIG.3, the handle portion108includes a palm side124, which confronts the palm of the user's hand5, and a finger side126opposite the palm side124and generally confronts the fingers, such as the third-finger11, of the user's hand5. Accordingly, the third-finger button116is disposed on the finger side126of the handle portion108.
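As an illustration of how per-finger separation distances might be turned into a hand configuration for an avatar, consider the following sketch; the sensing range and the linear mapping are hypothetical assumptions, not part of the disclosure.

# Illustrative sketch only: mapping proximity-sensor separation distances to
# per-finger "curl" values that an avatar hand could replicate. The maximum
# sensing range and the linear mapping are hypothetical.

MAX_RANGE_MM = 30.0  # beyond this the finger is treated as fully extended

def curl_from_distance(distance_mm: float) -> float:
    """0.0 = fully extended (far from the handle), 1.0 = touching the handle."""
    clipped = max(0.0, min(distance_mm, MAX_RANGE_MM))
    return 1.0 - clipped / MAX_RANGE_MM

def hand_pose(distances_mm: dict) -> dict:
    """Convert per-finger separation distances into per-finger curl values."""
    return {finger: round(curl_from_distance(d), 2) for finger, d in distances_mm.items()}

if __name__ == "__main__":
    readings = {"index": 0.0, "middle": 4.5, "ring": 12.0, "pinkie": 30.0}
    print(hand_pose(readings))  # {'index': 1.0, 'middle': 0.85, 'ring': 0.6, 'pinkie': 0.0}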
As shown inFIG.4, the third-finger button116includes an arm128rotatably coupled to the main body106via a pivot shaft130extending along an axis A. With further reference toFIG.5, the pivot shaft130is mounted at an angle with respect to the main body106in clevis arms132and134extending from the main body106. In some embodiments, a torsion spring136is positioned about the pivot shaft130to return the arm128to the extended position and to provide tactile feedback to the user's third-finger11(seeFIG.2) in the form of a resistive force. As shown inFIG.6, the third-finger button116includes a detection feature, such as a magnet or other detectable member. In the illustrated embodiment, a magnet140is mounted on arm128. A sensor142is positioned inside the handle adjacent the magnet140. In some embodiments, the sensor142is a Hall effect sensor. A Hall effect sensor is a transducer that varies its output voltage in response to a magnetic field. Thus, as the magnet140moves closer to the sensor142, the output voltage varies. Accordingly, the third-finger button116is an analog button in that it can detect various degrees of deflection corresponding to the force of a user's grip on the handle portion108and output a signal corresponding to movement of the third-finger button116. In some embodiments, the magnet140and the Hall effect sensor142may be replaced by an on/off switch such as a miniature snap-action switch, for example. In some embodiments, movement of the third-finger button116can be detected with an inductive proximity sensor or other suitable type of proximity sensor. In some embodiments, the detection feature for use with a proximity sensor can be a location (e.g., target location) on the third-finger button116. FIG.7illustrates a handheld controller202according to a representative embodiment. The handheld controller202comprises a main body206, a trigger button210positioned on the main body206, and a handle portion208extending from the main body206on the side opposite the trigger button210. The handle portion208has a palm side224and a finger side226. A first pressure sensitive sheet or pad214is positioned on the palm side224of the handle portion208and a second pressure sensitive sheet or pad216is positioned on the finger side226. The pressure sensitive pads214/216are operative to detect compression of the pads caused by a user's fingers and/or palm, thereby registering the presence and/or strength of a user's grip around the handle portion208. In some embodiments, the handle portion208only includes one or other of the first and second pressure sensitive pads214/216. Remarks The above description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of the disclosure. However, in some instances, well-known details are not described in order to avoid obscuring the description. Further, various modifications may be made without deviating from the scope of the embodiments. Accordingly, the embodiments are not limited except as by the appended claims. Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. 
Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not for other embodiments. The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure, and in the specific context where each term is used. It will be appreciated that the same thing can be said in more than one way. Consequently, alternative language and synonyms may be used for any one or more of the terms discussed herein, and any special significance is not to be placed upon whether or not a term is elaborated or discussed herein. Synonyms for some terms are provided. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification, including examples of any term discussed herein, is illustrative only and is not intended to further limit the scope and meaning of the disclosure or of any exemplified term. Likewise, the disclosure is not limited to various embodiments given in this specification. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. In the case of conflict, the present document, including definitions, will control.
15,336
11857870
DESCRIPTION OF MAIN COMPONENT SYMBOLS 500—apparatus for game interference processing;510—determination module;520—sending module;530—discarding module;100—mobile terminal;110—RF circuit;120—memory;130—input unit;140—display unit;150—shooting unit;160—audio circuit;170—WiFi module;180—processor;190—power supply. DETAILED DESCRIPTION The technical scheme in the embodiments of the present application will be described with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only part of the embodiments of the present application, not all of them. The components of the embodiments of the present application generally described and illustrated in the drawings herein may be arranged and designed in various different configurations. Therefore, the following detailed description of the embodiments of the present application provided in the drawings is not intended to limit the scope of the claimed application, but merely represents selected embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those having ordinary skill in the art without creative effort belong to the protection scope of the present application. Embodiment One FIG.1is a flow diagram of a method for game interference processing according to embodiment one of the present application. The method is applied to a mobile terminal, which may be in a standby mode, a video mode and the like, and a SIM card currently used in the mobile terminal can support a PS service and a CS service. The mobile terminal may include any terminal equipment such as a computer (personal computer), a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a vehicle-mounted computer, and the like. In this embodiment, the SIM card may be a SIM card of operators such as China Mobile, China Unicom and China Telecom. In some other embodiments, with the development of mobile communication technology, the SIM card may also be a SIM card produced with an emerging technology. It is worth noting that the SIM card can support TD-LTE, FDD-LTE, TD-SCDMA, WCDMA, CDMA, CDMA2000, GSM and other standards. With the development of 5G technology and subsequent communication technologies, the SIM card can also support other emerging standards. Among all current network standards, each operator is compatible with 3-4 network standards; for example, the Mobile network supports GSM(2G), TD-SCDMA(3G), and TD-LTE(4G) or FDD-LTE(4G); the Unicom network supports GSM(2G), WCDMA(3G), and FDD-LTE(4G) or TD-LTE(4G); and the Telecom network supports CDMA(2G), CDMA2000(3G) and FDD-LTE(4G). In the current mobile terminal, if a 5G network, a 4G network, a 3G network and a 2G network are available at the same time, the current SIM card will generally connect to the 5G network first. If the 5G network cannot be connected, the terminal will try to connect to the 4G network, the 3G network and the 2G network in turn. The method for game interference processing includes the following steps. At step S110, when the mobile terminal is in a game mode, whether a SIM card in the mobile terminal is registered in an IMS network is determined. The game mode is a mode in which any online game is in a running state in the mobile terminal.
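For illustration only, the determination of step S110 could be organized as in the following Python sketch. The TerminalState class and its attributes are hypothetical stand-ins for state that, in a real device, would be supplied by the application processor (game mode) and the protocol stack (IMS registration) rather than by code of this kind.

```python
# Hypothetical state holder; not part of the original disclosure.
class TerminalState:
    def __init__(self, game_mode: bool, ims_registered: bool):
        self.game_mode = game_mode            # an online game is in a running state
        self.ims_registered = ims_registered  # SIM card is registered in the IMS network

def step_s110(state: TerminalState):
    """Perform the determination of step S110.

    Returns None when the terminal is not in the game mode (nothing to decide);
    otherwise returns whether the SIM card is registered in the IMS network,
    which selects the subsequent handling of CS paging.
    """
    if not state.game_mode:
        return None
    return state.ims_registered

print(step_s110(TerminalState(game_mode=True, ims_registered=False)))  # False
```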
Because online games have a high demand for mobile network speed, users usually connect to the Internet through a 4G mobile network, a 5G mobile network, or a higher-level and higher-speed mobile network generated with the development of mobile communication technology, and download resources such as scenes, sound effects and props of online games. The SIM card for any operator prestores information on whether the mobile network corresponding to each network standard supports an IMS architecture or not. When the mobile terminal is in the game mode, the network standard accessed by the current SIM card in the game mode is acquired, and whether the mobile network corresponding to the current SIM card supports the IMS architecture is determined according to the prestored information on whether the mobile network corresponding to each network standard supports the IMS architecture or not. The IMS architecture has many characteristics, such as providing multimedia services, a horizontal service structure, access independence, and adopting the SIP protocol. The IMS architecture is the target architecture of the next-generation mobile core network, and can well meet the corresponding requirements. In this embodiment, the architecture supported by the current network is the IMS architecture. In some other embodiments, besides the IMS architecture, other architectures with functions similar to the IMS architecture, such as an upgraded version or an optimized version of the IMS architecture, are also within the protection scope. When the network connected by the SIM card supports the IMS architecture, whether the SIM card is registered in the IMS network is determined. If the SIM card is not registered in the IMS network, proceed to step S120; and if the SIM card is registered in the IMS network, proceed to step S140. At step S120, if the current network does not support the IMS architecture, after receiving a CS paging from the network side, a protocol stack of the mobile terminal sends a response message of no expectation to fallback to the network side. Generally speaking, in the absence of WiFi, in order to ensure the fluency of online games, users often connect to the Internet through a 4G mobile network, a 5G mobile network, or higher-level and faster mobile networks generated with the development of science and technology. If the current network connected by the SIM card is a 4G mobile network and the 4G network does not support the IMS architecture, the protocol stack, when receiving the CS paging from the network side, will fall back to 2G or 3G through CSFB (Circuit Switched Fallback) to respond. The SIM card then needs to be disconnected from the currently connected 4G network and re-registered to the 3G network, so that the SIM card is not connected to the network during the period from disconnection from the 4G network to re-registration to the 3G network, resulting in the interruption of the PS service. In addition, after the current network connected by the SIM card falls back to the 3G network, the CS service and the PS service corresponding to the SIM card are carried out under the 3G network. Since the speed of the 3G network is not enough to meet the user's demand for fluency in the game process, and the interface where the CS service is located pops up during the game, the user's experience during the game is poor.
Therefore, after the mobile terminal receives the CS paging from the network side, the protocol stack sends a response message of no expectation to fallback of the current network to the network side, so that the network side can only page CS service in the current mobile network after receiving the response message, thus preventing the network side from constantly paging the mobile terminal for the user, resulting in signaling congestion on the network side. It is worth noting that it is also possible to determine whether the current network connected by the SIM card of the mobile terminal supports the IMS architecture after receiving the CS paging from the network side. At step S130, when receiving paging of the CS service in the current network from the network side, the protocol stack sends to the network side a notification message indicating that the current SIM card does not expect to continue to respond to the CS service, so that the network side discards the CS service according to the notification message, and controls the mobile terminal to stay in the game mode all the time. When the mobile terminal is in the game mode, the SIM card currently used is defined to support the PS service only, with the CS service being not supported. When the protocol stack receives paging of the CS service in the current network from the network side, since the current SIM card does not support the CS service in the game mode, the protocol stack sends a notification message indicating that the current SIM card supports the PS service rather than the CS service to the network side. After receiving the notification message, the network side discards the CS service and no longer paging the CS service in the current network. At step S140, the protocol stack makes a response to the paging through VOLTE. When the network to which the SIM card is currently accessed supports the IMS architecture, the protocol stack responds to the CS paging sent from the network side through VOLTE (Voice over LTE). However, since the currently used SIM card is predefined to support the PS service rather than the CS service when the mobile terminal is in the game mode, the protocol stack sends the notification message indicating that the current SIM card supports the PS service rather than the CS service to the network side. After receiving the notification message, the network side discards the CS service and no longer pages CS service in the current network. It is worth noting that when the currently accessed network supports other architectures, other services may also be used as a response to the CS paging sent from the network side. Embodiment Two FIG.2is a flow diagram of a method for game interference processing according to embodiment two of the present application. The method is applied to a mobile terminal, and a SIM card currently used in the mobile terminal can support a PS service and a CS service. The method includes the following steps. At step S210, a mobile terminal is switched to a game mode in response to an input operation of a user in a plurality of game mode options preset in the mobile terminal. In some embodiments, the plurality of game mode options are preset in the mobile terminal, and a current working mode of the mobile terminal can be switched to the game mode according to a trigger operation of a user. The game mode is a mode in which any online game is in a running state. The current mode may include a standby mode, a video mode, etc. 
Further, the mobile terminal includes an AP (Application Processor) and a protocol stack. The AP and the protocol stack are set separately, and operations performed by the two parts are executed independently. When one part changes, the other part can still run normally, which reduces the coupling of the system and increases the autonomy of user design. For example, as shown inFIG.3, 1 indicates that the user switches the mobile terminal to the game mode in response to the input operation of the user in the plurality of game mode options preset in the mobile terminal. When receiving the switching instruction of the user, the AP of the mobile terminal controls the mobile terminal to enter a predefined game mode and causes the protocol stack to execute the operation of the game mode. 2 indicates that after receiving the game mode instruction sent by the AP, the protocol stack feeds back the message of successful switching to the AP. 3 indicates that the network side pages the CS service to the protocol stack. At step S220, whether a SIM card in the mobile terminal is registered in an IMS network is determined. If the SIM card is not registered in the IMS network, it proceeds to step S230; and if the SIM card is registered in the IMS network, it proceeds to step S240. At step S230, whether the current SIM card is a Telecom SIM card is determined. The protocol stack determines whether the currently used SIM card is a Telecom SIM card. If the currently used SIM card is a Telecom SIM card, it proceeds to step S250; if the SIM card currently used is not a Telecom SIM card, it proceeds to step S260. It is worth noting that the AP of the mobile terminal can also obtain current running resources of the mobile terminal, and compare the current running resource information with the resource information corresponding to a pre-stored online game. If the current running resource information is consistent with the resource information corresponding to the pre-stored online game, it is determined that the current mobile terminal is running the online game, and the AP controls the mobile terminal to enter the predefined game mode and causes the protocol stack to execute the operation of the game mode. The resource information may include a resource name, a resource ID, and the like. At step S240, the protocol stack makes a response to a CS paging through VOLTE. At step S250, the protocol stack sets the SIM card to a mode of responding to the CS service only in the current network, and sends a response message of responding to the CS service only in the current network to the network side. In the Telecom SIM card, a mode of single card and dual standby is adopted. For example, the Telecom SIM card can receive the CS paging in both 4G network and 1× network, but can only receive the CS paging in the network corresponding to one system (4G or 1×) at a time point. When the SIM card currently used is a Telecom SIM card, the protocol stack sets the current Telecom SIM card to send a response message of responding to the CS service only in the current network to the network side. For example, as shown inFIG.3, after receiving the CS page from the network side, 4 may indicate that the protocol stack directly sets an ability of receiving messages of the telecom SIM card in the 1× mode as disable, that is, the telecom SIM card is set to the 4G only mode, so that the mobile terminal can only receive the CS paging in the 4G network. 
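As a rough sketch of the Telecom SIM handling in step S250, the protocol stack could restrict the dual-standby card to the current network as follows. This is illustrative only; the FakeTelecomSim class and the disable_1x_paging hook are hypothetical stand-ins and do not correspond to an actual modem API.

```python
class FakeTelecomSim:
    """Stand-in for the dual-standby Telecom card so the sketch can run without a modem."""
    def __init__(self):
        self.one_x_paging_enabled = True
    def disable_1x_paging(self):  # hypothetical hook corresponding to the "4G only" mode
        self.one_x_paging_enabled = False

def handle_cs_paging_telecom(sim, current_network="4G"):
    """Step S250: restrict CS paging reception to the current network only.

    The 1x receive path of the dual-standby Telecom card is disabled so that
    CS paging can only be received in the current (4G) network, and a response
    message of responding to the CS service only in the current network is built.
    """
    sim.disable_1x_paging()
    return {"respond_to_cs_in_current_network_only": True, "network": current_network}

sim = FakeTelecomSim()
print(handle_cs_paging_telecom(sim))          # response message sent to the network side
print("1x paging enabled:", sim.one_x_paging_enabled)
```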
At step S260, the protocol stack sets a fallback flag bit as an unexpected fallback flag, and sends the set response message to the network side. If the SIM card currently used by the mobile terminal is not a Telecom SIM card, that is, the SIM card currently used is a Mobile SIM card or a Unicom SIM card, the protocol stack makes a response to the network, and sends a response message to the network to inform the network side that the current network is not expected to fall back, so as to prevent the network side from constantly paging the mobile terminal of the user, resulting in signaling congestion of the network side. For example, after receiving the CS paging, the protocol stack of the mobile terminal will send a response message to the network side. The response message of Extended service request has a fallback flag bit of csfb_response. If the fallback flag bit is 1, the network side will schedule the CS paging to drop to 3G or 2G to continue paging. If the fallback flag bit is 0, the network side will not continue to schedule resources for the CS paging. As shown inFIG.3,4may indicate that the protocol stack sets the fallback flag bit of csfb_response in the message of Extended service request to 0, and sends the set message of Extended service request to the network side. After receiving the response message, the network side will not continue to schedule resources for the CS paging. At step S270, when receiving paging of the CS service in the current network from the network side, the protocol stack sends to the network side a notification message indicating that the current SIM card does not expect to continue to respond to the CS service, so that the network side discards the CS service according to the notification message, and controls the mobile terminal to stay in the game mode all the time. Step S270is the same as step S130, and will not be described in detail here. Embodiment Three FIG.4is a flow diagram of a method for game interference processing according to embodiment three of the present application. The method is applied to a mobile terminal, and a SIM card currently used in the mobile terminal can support a PS service and a CS service. The method includes the following steps. At step S310, the mobile terminal is switched to a game mode in response to a trigger operation of a user on a physical switch. The mobile terminal is provided with a physical switch for switching a current working mode of the mobile terminal to a game mode. The current working mode of the mobile terminal is switched to the game mode in response to the trigger operation of the user on the physical switch. The physical switch may be arranged on the side of the mobile terminal at a position convenient for triggering by the user's finger, so that the user can conveniently and quickly switch the current working mode of the mobile terminal to the game mode. At step S320, whether the SIM card in the mobile terminal is registered in the IMS network is determined. Step S320is the same as step S220, and will not be described in detail here. At step S330, whether the current SIM card is a Telecom SIM card is determined. Step S330is the same as step S230, and will not be described in detail here. At step S340, the protocol stack makes a response to a CS paging through VOLTE. Step S340is the same as step S240, and will not be described in detail here. 
At step S350, the protocol stack sets the SIM card to respond to the CS service only in the current network, and sends a response message of responding to the CS service only in the current network to the network side. Step S350is the same as step S250, and will not be described in detail here. At step S360, the protocol stack sets a fallback flag bit as an unexpected fallback flag, and sends the set response message to the network side. Step S360is the same as step S260, and will not be described in detail here. At step S370, when receiving paging of the CS service in the current network from the network side, the protocol stack sends to the network side a notification message indicating that the current SIM card does not expect to continue to respond to the CS service, so that the network side discards the CS service according to the notification message, and controls the mobile terminal to stay in the game mode all the time. Step S370is the same as step S270, and will not be described in detail here. Embodiment Four FIG.5is a flow diagram of a method for game interference processing according to embodiment four of the present application. The method is applied to a mobile terminal, and a SIM card currently used in the mobile terminal can support a PS service and a CS service. The method includes the following steps. At step S410, when the mobile terminal is in a game mode, whether a SIM card in the mobile terminal is registered in an IMS network is determined. If the SIM card is not registered in the IMS network, it proceeds to step S420; and if the SIM card is registered in the IMS network, it proceeds to step S450. At step S420, if the SIM card is not registered in the IMS network, the protocol stack of the mobile terminal sends a response message of no expectation to fallback to the network side after receiving a CS paging from the network side. At step S430, when receiving paging of a CS service in the current network from the network side, the protocol stack sends to the network side a notification message indicating that the current SIM card does not expect to continue to respond to the CS service, so that the network side discards the CS service according to the notification message, and controls the mobile terminal to stay in the game mode all the time. At step S440, when receiving paging of PS services other than games in the current network from the network side, an AP of the mobile terminal makes no response to the other PS services. In the game mode, the SIM card currently used in the mobile terminal only supports the PS service, with the CS service not being supported. If the user is in the game mode and the protocol stack receives PS services other than online games from the network side, the protocol stack makes a response to the PS services, and the AP of the mobile terminal does not display the PS services on the current game page, so as to ensure the game experience. The other PS services may include video chat, voice chat, text messages, etc., which require instant interaction with users. It is worth noting that step S440may also be performed together with step S430, or before step S430. The games described in all the embodiments are online games that need to be connected to the Internet. Embodiment Five FIG.6is a structural schematic diagram of an apparatus for game interference processing according to an embodiment of the present application.
The apparatus is applied to a mobile terminal, and a SIM card currently used in the mobile terminal can support a PS service and a CS service. The apparatus for game interference processing500includes a determination module510, a sending module520and a discarding module530. The determination module510is configured to determine whether a SIM card in the mobile terminal is registered in an IMS network when the mobile terminal is in a game mode, where the game mode is a mode in which any online game is in a running state. The sending module520is configured to send a response message of no expectation to fallback to the network side using a protocol stack after receiving a CS paging from the network side, when the SIM card is not registered in the IMS network. The discarding module530is configured to send to the network side a notification message indicating that the current SIM card does not expect to continue to respond to a CS service using the protocol stack when receiving paging of the CS service in the current network from the network side, so that the network side discards the CS service according to the notification message, and control the mobile terminal to stay in the game mode all the time. Further, the apparatus for game interference processing500also includes a response module: The response module is configured to make a response to the CS paging through VOLTE using the protocol stack when the current network connected by the SIM card supports an IMS architecture. Further, the response message also includes a fallback flag bit; and the sending module520further includes a first fallback unit. The first fallback unit is configured to set the fallback flag bit as an unexpected fallback flag using the protocol stack when the SIM card is a Mobile SIM card or a Unicom SIM card, and send the set response message to the network side. Further, the sending module520also includes a second fallback unit. The second fallback unit is configured to set the SIM card to a mode of responding to the CS service only in the current network using the protocol stack when the SIM card is a Telecom SIM card, and send a response message of responding to the CS service only in the current network to the network side. Further, the apparatus for game interference processing500also includes a first input module. The first input module is configured to switch the mobile terminal to the game mode in response to an input operation of a user in a plurality of game mode options preset in the mobile terminal. Further, the mobile terminal is provided with a physical switch; and the apparatus for game interference processing500also includes a second input module. The second input module is configured to switch the mobile terminal to the game mode in response to a trigger operation of the user on the physical switch. Further, the apparatus for game interference processing500also includes a paging module. The paging module is configured to make an AP of the mobile terminal perform no response to other PS services when receiving paging of the other PS services except games in the current network from the network side. Embodiment Six FIG.7is a flow diagram of a method for interference processing according to embodiment six of the present application. The method for interference processing is applied to a mobile terminal, and a SIM card currently used in the mobile terminal can support a PS service and a CS service. The interference processing method includes the following steps. 
At step S610, when the mobile terminal is in a preset mode, whether a SIM card in the mobile terminal is registered in an IMS network is determined. The preset mode is a mode when a corresponding preset operation is executed, and the preset operation belongs to the PS service. The preset operation may include game, shopping, video and other operations. At step S620, when the SIM card is not registered in the IMS network, the protocol stack of the mobile terminal sends a response message of no expectation to fallback to the network side after receiving a CS paging from the network side. It is worth noting that it is also possible to determine whether the current network connected by the SIM card in the mobile terminal supports the IMS architecture after receiving the CS paging from the network side. At step S630, when receiving paging of a CS service in the current network from the network side, the protocol stack sends to the network side a notification message indicating that the current SIM card does not expect to continue to respond to the CS service, so that the network side discards the CS service according to the notification message, and controls the mobile terminal to stay in the preset mode all the time. According to the present application, a mobile terminal is also provided, which may include a smart phone, a tablet computer and the like. As shown inFIG.8, the mobile terminal100includes a Radio Frequency (RF) circuit110, a memory120, an input unit130, a display unit140, a shooting unit150, an audio circuit160, a wireless fidelity (WiFi) module170, a processor180, and a power supply190. The input unit130may include a touch panel and may include other input devices, and the display unit140may include a display panel. The components of the mobile terminal100will be described below with reference toFIG.8. The RF circuit110is used for receiving and transmitting wireless signals. The RF circuit110may be composed of a RF receiving circuit and a RF transmitting circuit. The RF circuit110may mainly include an antenna, a wireless switch, a receiving filter, a frequency synthesizer, high frequency amplifier, local oscillator for reception, frequency mixing, intermediate frequency, local oscillator for transmission, power amplifier control, power amplifier, etc. The memory120is used for storing a program that supports the processor180to execute a method for sending a long message according to the following embodiments. The memory120may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function (such as a message sending function, a mode setting function, an image playing function, etc.); and the data storage area may store data (such as short messages, audio data, phone books, etc.) created according to the use of mobile phones. In addition, the memory120may include a high-speed random access memory, and may also include a nonvolatile memory, such as at least one magnetic disk memory device, a flash memory device, or other volatile solid-state memory devices. The input unit130may be used for receiving input digital or character information and generating key signal input related to user settings and function control of the mobile terminal100. The input unit130may include a touch panel and other input devices. 
The touch panel, also known as a touch screen, can collect the user's touch operations on or near the touch panel (such as the user's operation on or near the touch panel with any suitable objects or accessories such as fingers and stylus, etc.), and drive corresponding connecting device according to the preset program. In some embodiments, the touch panel may include a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller. The touch controller receives touch information from the touch detection device, converts the touch information into contact coordinates, and then sends the contact coordinates to the processor180. The touch controller can receive and execute commands sent from the processor180. In addition, the touch panel can be implemented by various types such as resistive, capacitive, infrared and surface acoustic wave. In addition to the touch panel, the input unit130may also include other input devices. The other input devices may include, but are not limited to, one or more of physical keyboard, function keys (such as volume control keys, switch keys, etc.), trackball, mouse, joystick, etc. The display unit140may be used for displaying information input by the user or provided to the user, as well as various menus and interfaces of the mobile terminal100, such as a game interface. The display unit140may include a display panel. In some embodiments, the display panel may be configured in the form of a liquid crystal display (LCD) or an organic light-emitting diode (OLED). Further, the touch panel may cover the display panel, and when the touch panel detects a touch operation on or near the touch panel, the touch operation is transmitted to the processor180to determine the type of touch event, and then the processor180provides corresponding visual output on the display panel according to the type of touch event. Although the touch panel and the display panel are two independent components to realize the input and output functions of the mobile phone, in some embodiments, the touch panel and the display panel can be integrated to realize the input and output functions of the mobile phone. The shooting unit150is used for collecting image information within an imaging range. The shooting unit150may be a camera, and the camera may include a photosensitive device including but not limited to CCD (Charge Coupled apparatus) and CMOS (Complementary Metal-Oxide Semiconductor). The photosensitive device converts light change information into electric charges, and converts the converted electric charges into digital signals through analog-to-digital conversion. The digital signals are stored in a flash memory or a built-in hard disk card inside the shooting unit150after being compressed. Therefore, the stored digital signals can be transmitted to the processor180, and the processor180processes the digital signals according to requirements or instructions. The audio circuit160may provide an audio interface between the user and the mobile terminal100. WiFi is a short-distance wireless transmission technology, and the mobile terminal100can help users to send and receive e-mails, browse web pages and access streaming media through the wireless fidelity module170(hereinafter referred to as WiFi module), which provides wireless broadband internet access for users. 
AlthoughFIG.8shows a WiFi module, it can be understood that it is not a necessary component of the mobile terminal100, and can be omitted as needed without changing the essence of the present application. The processor180is the control center of the mobile terminal100, which connects various parts of the whole mobile terminal100by using various interfaces and lines, runs or executes software programs and/or modules stored in the memory120, and calls data stored in the memory120, so that the mobile terminal100can execute all the above-mentioned methods or functions of the various modules in all the above-mentioned devices. Optionally, the processor180may include one or more processing units. Preferably, the processor180may be integrated with an application processor, which mainly processes an operating system, a user interface, an application program, and the like. The processor180may also integrate a modem processor, or the modem processor may not be integrated into the processor180. The power supply190can be logically connected to the processor180through a power management system, so that the functions of charge management, discharge management and power consumption management can be realized through the power management system. It can be understood by those having ordinary skill in the art that the structure of the mobile terminal100shown inFIG.8does not constitute a limitation of the mobile terminal, which may include more or fewer components than shown, combine some components, or have a different arrangement of components. According to this embodiment, a computer storage medium is also provided, where the computer storage medium is used for storing the computer program used in the mobile terminal. According to the embodiments, a method and an apparatus for game interference processing, and a mobile terminal are provided. When the mobile terminal is in a game mode, if a protocol stack of the mobile terminal receives paging of a CS service from a network side, the protocol stack informs the network side of a message of no expectation to fallback, so that the network side can only page the CS service in the current network, thus avoiding the problem of game interruption caused by the drop in network speed after the network falls back. The game mode is defined in an innovative manner: when the game mode is started, the protocol stack only makes a response to the PS service, but makes no response to the CS service from the network side, avoiding the problem that users are interrupted by a voice service in the game state and improving the user experience. The mobile terminal can be switched to the game mode through preset game mode options or a physical key, with one-click switching. When the mobile terminal is in a preset mode (such as games, shopping, video, etc.), if the protocol stack of the mobile terminal receives paging of the CS service from the network side, the protocol stack informs the network side of a message of no expectation to fallback, so that the network side can only page the CS service in the current network. When the protocol stack receives paging of the CS service from the network side, the protocol stack sends, according to the predefined preset mode, a notification message to the network side, so that the network side discards the CS service according to the notification message and the mobile terminal stays in the preset mode all the time, avoiding fallback of the network. In several embodiments provided in this application, it should be understood that the disclosed apparatus and method may also be implemented in other ways.
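As an illustration, the overall paging handling in the game mode or preset mode described above could be organized as in the following Python sketch. It is a simplified outline, not the actual protocol stack implementation; the csfb_response field name follows the Extended service request described earlier, while the function names and dictionary structure are assumptions.

```python
def on_cs_paging(game_mode: bool, ims_registered: bool) -> dict:
    """Reply built by the protocol stack when a CS paging arrives from the network side."""
    if not game_mode:
        return {"action": "normal_handling"}          # ordinary CSFB / VoLTE behaviour
    if ims_registered:
        return {"action": "volte_response"}           # steps S140 / S240 / S340
    # Extended service request with the fallback flag cleared (csfb_response = 0),
    # so the network side does not schedule a fallback to 3G/2G (steps S120 / S260).
    return {"action": "extended_service_request", "csfb_response": 0}

def on_cs_paging_in_current_network(game_mode: bool) -> dict:
    """Reply to subsequent CS paging in the current network (steps S130 / S270 / S430)."""
    if game_mode:
        # The SIM card is predefined to support the PS service only in game mode,
        # so the network side is asked to discard the CS service.
        return {"notification": "ps_only_no_cs_response_expected"}
    return {"notification": None}

print(on_cs_paging(game_mode=True, ims_registered=False))
print(on_cs_paging_in_current_network(game_mode=True))
```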
The above-described apparatus embodiments are only schematic. For example, the flowcharts and structural diagrams in the drawings show the architecture, functions and operations of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagram may represent a module, program segment or part of code containing one or more executable instructions for implementing specified logical functions. It should also be noted that in alternative implementations, the functions noted in the blocks may also occur in a different order from those noted in the drawings. For example, two consecutive blocks can actually be executed in substantially parallel, and sometimes be executed in reverse order, depending on the functions involved. It should also be noted that each block in the structural diagram and/or flowchart, and the combination of blocks in the structural diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs specified functions or actions, or can be implemented by a combination of dedicated hardware and computer instructions. In addition, the functional modules or units in each embodiment of this application can be integrated together to form an independent part, or each module can exist alone, or two or more modules can be integrated to form an independent part. When the functions are realized in the form of software function modules and sold or used as independent products, they can be stored in a computer readable storage medium. Based on this understanding, the technical scheme of this application can be embodied in the form of a software product, which is stored in a storage medium and includes several instructions to make a computer device (which may be a smart phone, a personal computer, a server, or a network apparatus, etc.) execute all or part of the steps of the method described in each embodiment of this application. The aforementioned storage media include: U disk, mobile hard disk, ROM (Read-Only Memory), RAM (Random Access Memory), magnetic disk or optical disk, etc., which can store program codes. The above is only a specific implementation mode of this application, but the protection scope of this application is not limited to this. Any person familiar with the technical field can easily think of changes or substitutions within the technical scope disclosed in this application, which should be covered in the protection scope of this application.
37,260
11857871
DETAILED DESCRIPTION The following describes an embodiment of the invention in detail. The following embodiment is merely illustrative of the invention, and the invention should not be limited to the embodiment. Various modifications are possible in the invention, without departing from the scope of the invention. Moreover, a person skilled in the art can adopt any embodiment in which one or more elements described below are replaced with their equivalents, and such an embodiment is also included within the scope of the invention. The positional relationships such as up, down, left, and right shown according to need are based on the positional relationships shown in the drawings, unless stated otherwise. The dimensional ratios in the drawings are not limited to the ratios shown in the drawings. Though the following describes, as an example, an embodiment in which the invention is implemented using an information processing device for a game to facilitate understanding, this is not a limit for the invention as noted above. A. Embodiment A-1. Structure of Game System FIG.1is a schematic diagram (system block diagram) showing a preferred embodiment of a server device according to the present invention.FIG.2is a schematic diagram (system diagram) showing a preferred embodiment of a game system according to the invention. As shown inFIGS.1and2, a server device100is a server computer connected to a network200, and achieves a server function by a predetermined server program running in the server computer. Each terminal device2such as a client computer21or a mobile terminal22is wiredly and/or wirelessly connected to the network200, as with the server device100. The server device100and the terminal device2are set to be capable of communicating with each other, thus constituting a game system1. The network200is a communication line or a communication network for information processing including the Internet and the like. The specific structure of the network200is not particularly limited so long as it enables data transmission and reception between the server device100and the terminal device2. For example, the network200comprises a base station wirelessly connected to the terminal device2, a mobile communication network connected to the base station, the Internet connected to the server device100, and a gateway device for connecting the mobile communication network and the Internet. The server device100comprises an operation processing unit101such as a CPU or an MPU, a ROM102and a RAM103as storage devices, an external interface104connected with an input unit105and an external memory106, and an image processing unit107connected with a display monitor111. The server device100further comprises a slot drive108containing or connected with a disk, a memory device, and the like, an audio processing unit109connected with a speaker device112, and a network interface110, which are connected to each other via a transmission path120such as a system bus including an internal bus, an external bus, and an expansion bus as an example. Note that devices used for input/output such as the input unit105, the external memory106, the display monitor111, and the speaker device112may be omitted according to need and, even in the case of being included, need not be constantly connected to the transmission path120. 
The operation processing unit101controls the overall operation of the server device100, transmits and receives control signals and information signals (data) with the other components mentioned above, and also performs various operations necessary for game execution. The operation processing unit101is accordingly capable of performing, through the use of an arithmetic logic unit and the like, arithmetic operations such as addition, subtraction, multiplication, and division, logical operations such as logical addition, logical multiplication, and logical negation, bit operations such as bit addition, bit multiplication, bit inversion, bit shift, and bit rotation, and the like, on fast-accessible storage areas such as registers. The operation processing unit101is further capable of performing saturate operations, trigonometric function operations, vector operations, and the like, according to need. The ROM102stores an IPL (Initial Program Loader), which is typically executed immediately after power-on. By executing the IPL, the operation processing unit101reads, into the RAM103, a server program and a game program recorded in the disk or the memory device contained in or connected to the slot drive108, and executes the programs. The ROM102also stores an operating system program necessary for controlling the overall operation of the server device100and other various data. The RAM103is for temporary storage of the server program, the game program, and various data. The read server program and game program, data necessary for game progress and communication between a plurality of terminal devices2, and the like are held in the RAM103, as mentioned above. The operation processing unit101sets a variable area in the RAM103, and performs an operation directly on a value stored in the variable area using the arithmetic logic unit. The operation processing unit101also performs a process such as copying or moving a value stored in the RAM103to a register to store the value in the register and performing an operation directly on the register, and writing the operation result back to the RAM103. The input unit105which is connected via the external interface104receives various operation inputs by the user (game provider) of the server device100. The input unit105may be any of a keyboard, a touchpad, a touch panel, a voice input device, and the like. The device type is not particularly limited so long as various operation inputs and instruction inputs for a decision operation, a cancel operation, a menu display, and the like are possible. The RAM103or the external memory106which is removably connected via the external interface104stores data indicating the operation status of the server device100, the access status of each terminal device2, and the play status and progress state (past results, etc.) of the game in each terminal device2, data of communication logs (records) between the terminal devices2, and so on, in a rewritable form. The image processing unit107, after various data read from the slot drive108is processed by the operation processing unit101or the image processing unit107, stores the processed image information in a frame memory or the like. The image information stored in the frame memory is converted to a video signal at predetermined synchronization timing, and output to the display monitor111connected with the image processing unit107. This enables various image displays. 
Image information related to the game is transmitted from the image processing unit107and/or the operation processing unit101to each terminal device2, for example in cooperation with the operation processing unit101. The audio processing unit109converts various data read from the slot drive108to an audio signal, and outputs the audio signal from the speaker device112connected with the audio processing unit109. Audio information (sound effects, music information) related to the game is transmitted from the audio processing unit109and/or the operation processing unit101to each terminal device2, for example in cooperation with the operation processing unit101. The network interface110connects the server device100to the network200. For example, the network interface110conforms to a standard used for building a LAN, and includes: an analog modem, an ISDN modem, an ADSL modem, a cable modem for connecting to the Internet or the like using a cable television line, or the like; and an interface for connecting the modem to the operation processing unit101via the transmission path120. The following describes a preferred embodiment of a game (social game) executed by a game program according to the invention in the game system1and the server device100having the above-mentioned structures. A-2. Game Content FIG.3is a conceptual diagram for describing an overview of a battle league according to an embodiment of the invention.FIG.4is a conceptual diagram for describing a schedule of the battle league. As shown inFIG.3, the battle league is organized (an upper league U and a lower league L) according to performance in terms of strength (e.g. past battle records), and reorganized as a result of a promotion and relegation competition performed after a season competition. A player who participates in this game first joins a group called “guild” which is made up of a plurality of players. According to the schedule of the battle league (seeFIG.4), the guild to which the player belongs sequentially fights other guilds in the same league and, in some cases, fights other guilds in the other league in the promotion and relegation competition, thus seeking to win the crown of all guilds. This is described in detail below. Suppose the player belongs to a guild G3shown inFIG.3. The player sequentially fights other guilds G1, G2, and G4in the upper league U. For example, in the case where the guilds G3and G4are lower in winning rate than the guilds G1and G2in the upper league U at the end of the 1st season, the guild G3moves into the promotion and relegation competition together with the guild G4, and fights other guilds G5and G6having high winning rates in the lower league L. In the case where the guild G3to which the player belongs is lower in winning rate than the guilds G4and G5in the promotion and relegation competition, too, i.e. in the case where the guild G3cannot make the top two of the four guilds participating in the promotion and relegation competition, the guild G3will end up being in the lower league L in the 2nd season (seeFIG.3). In such a case, the guild G3fights other guilds G6, G7, and G8in the lower league L, aiming to return to the upper league U in the next season (i.e. the 3rd season). The schedule of the battle league is shown inFIG.4as an example. The “season competition” of fighting another guild in the same league is held on weekdays (Monday to Friday), and the “promotion and relegation competition” of fighting another guild across the leagues is held on weekends (Saturday and Sunday). 
The season competition and the promotion and relegation competition are each performed a plurality of times in predetermined time periods each day (see "day battle", "evening battle", and "night battle" shown inFIG.4). Especially in the promotion and relegation competition, the players belonging to each guild are likely to be even more motivated for their battle against another guild than in the season competition, because they face either a chance of being promoted to the upper league or a risk of being relegated to the lower league. Accordingly, the days (i.e. weekends) on which each player is more likely to be able to participate in the game easily are assigned to the promotion and relegation competition in this embodiment, as shown inFIG.4. Note that the schedule of the promotion and relegation competition and the season competition, the time period assigned to each battle, and the like may be appropriately set or changed according to the game design and the age group of players. The league organization (the number of leagues, the maximum number of guilds per league, etc.) may also be appropriately set or changed by, for example, a game operator managing the game. The following describes a method of game play by each player and a method of game progress-related control by the server device100, with reference toFIGS.1to5and the like. A-3. Game Procedure First, the player operates the terminal device2(the client computer21or the mobile terminal22, e.g. a tablet terminal or a smartphone), to connect the terminal device2to the server device100via the network200such as the Internet. The player then operates the terminal device2to select the game provided by the server device100or, in a platform screen prior to game selection, inputs login information such as an ID number and a password. Having recognized the login information, the operation processing unit101in the server device100displays the player's unique My Page screen or My Home screen associated with the ID number, on the terminal device2. Depending on the game type, a banner listing a plurality of scenes (e.g. locations, dungeons, quests, etc.) set as game scenes is displayed in the My Page screen. The scenes such as locations, dungeons, and quests may be mutually or individually hierarchized. Moreover, in this game, the operation processing unit101in the server device100displays a menu screen about the above-mentioned "guild", which is a group of players, in the My Page screen or the My Home screen. A player who has played the game or participated in the game basically belongs to a predetermined guild. This information is stored in a storage unit such as the ROM102, in association with specific information such as the ID information of the player. In other words, guild-player correspondence information indicating correspondence relations between guilds and players and the like is stored in the storage unit. Based on this information, the operation processing unit101displays information of the guild to which the player belongs and, if necessary, an edit menu and the like for the guild, on the terminal device2. On the other hand, a player (i.e. a new player) who plays the game or participates in the game for the first time basically does not belong to any specific guild. The operation processing unit101accordingly displays a menu screen for searching for (finding) a guild or creating (establishing) a new guild, on the terminal device2of the new player not belonging to any guild.
The new player can decide or create a guild to which he or she belongs, by operating the terminal device2according to instructions in pull-down menus and the like sequentially displayed from the menu screen. Alternatively, the new player or the like may proceed with the game without joining any guild and, during this, may be invited by an existing guild. In such a case, an invitation message is displayed on the terminal device2of the new player. The new player can join the inviting guild, by operating the terminal device2according to the message displayed on the display screen. When the guild to which the player belongs is decided or selected or when necessary, a list of a plurality of games or events is displayed on the terminal device2. When the player selects to participate in the game, the screen of the game held at the time is displayed on the terminal device2of the player. Thus, the player can freely participate in the game held at the time. Here, a preparation screen or an introduction screen of various games may be displayed on the terminal device2of the player, as a still image or a moving image (e.g. Flash). While the guild battle league is being held, the execution of the guild battle is controlled by the server device100according to the schedule shown inFIG.4, as mentioned earlier. The following describes an example of display on the terminal device2of each player when the guild battle between the guilds G1and G2is performed, with reference toFIG.5. A-4. Guild Battle When the guild battle is performed, a field F and a palette P are displayed as game image display areas on a screen2aof the terminal device2of each player, and elements are displayed in each display area. In an example shown inFIG.5, player characters A1to A6belonging to the guild G1(group) and player characters B1to B6belonging to the guild G2(group) are displayed in the field F, as part of the elements. In the guild battle, the player characters A1to A6belonging to the guild G1and the player characters B1to B6belonging to the guild G2are each unified in making an attack and the like on the opponent guild at arbitrary timing, and fight in the form of competing in total points earned (points earned by damaging the opponent, etc.). Offense, defense, and the like against the opponent guild are conducted as follows. First, each player belonging to the offensive guild (e.g. the guild G1shown inFIG.5) sequentially selects (turns over) cards from a deck D in the palette P. The player thus attacks the player characters B1to B6of the opponent guild G2according to the combination of skills, attack values, specific items, defense values, and the like shown in cards31,32, and33and their attributes, rarity, and the like. Damage inflicted on the opponent and damage inflicted on the player character are then calculated. For example, in the case where the cards31,32, and33all match in type, attribute, or rarity or constitute a specific combination, the offensive power or defensive power of the player characters A1to A6may be increased. In the case where the guild G2is the offensive guild, on the other hand, the same display is made on the screen2aof the terminal device2of each player belonging to the offensive guild G2, and the player attacks the player characters A1to A6of the opponent guild G1. During the guild battle, active items (e.g. a drug for recovery from damage, not shown in the drawing) and the like used in each of the guilds G1and G2are also displayed in the event field F on the screen2a. 
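To make the attack calculation concrete, the following Python sketch shows one way the damage of three turned-over cards might be boosted when they match in attribute or rarity. The Card fields and the MATCH_BONUS multiplier are illustrative assumptions; the patent does not fix any particular formula.

```python
from dataclasses import dataclass

@dataclass
class Card:
    attack: int
    attribute: str   # e.g. "fire", "water"
    rarity: str      # e.g. "R", "SR", "SSR"

MATCH_BONUS = 1.5    # assumed multiplier applied when all three cards match

def attack_damage(cards: list[Card]) -> int:
    """Total damage dealt to the opponent guild for one turned-over hand of cards."""
    base = sum(c.attack for c in cards)
    same_attribute = len({c.attribute for c in cards}) == 1
    same_rarity = len({c.rarity for c in cards}) == 1
    if same_attribute or same_rarity:
        base = int(base * MATCH_BONUS)   # offensive power increased for a matching set
    return base

hand = [Card(120, "fire", "SR"), Card(90, "fire", "R"), Card(150, "fire", "SSR")]
print(attack_damage(hand))   # matching attribute -> bonus applied
```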
Further, HP gauges61and63are displayed respectively for the guilds G1and G2, and the outcome of the battle is determined according to the remaining amount of each of the HP gauges61and63. Thus, in the guild battle, the players belonging to each of the guilds G1and G2are unified in repeated offense and defense and compete fiercely against their opponents, allowing them to feel the fun and thrill of the battle game. However, in the case where the opponent guild does not participate in the battle (that is, the opponent guild does not take any attack action despite the time of the guild battle being reached), the players belonging to the offensive guild might find the game not interesting at all and lose their motivation to participate in or continue the game, as mentioned earlier (see the SUMMARY OF THE INVENTION section). In view of this, in embodiments of the invention, the server device100recognizes the activity status of each guild joining the battle league, and forcibly disbands any guild whose activity is not detected for a predetermined period, thereby enabling all players participating in the guild battle to feel the fun and thrill of the guild battle. The following describes a specific method of achieving such guild forcible disbandment, with reference toFIG.6and the like. A guild forcible disbandment function shown inFIG.6is realized by various hardware resources such as the operation processing unit101in the server device100operating with a game-related program and the like stored in a storage medium (storage unit) such as the ROM102, the RAM103, or the external memory106. A-5. Guild Forcible Disbandment Function A status determination unit1400recognizes the activity status of each guild joining the battle league successively (or on a regular basis), using a timer1300and the like. Here, “recognize the activity status of a guild” means to recognize whether the guild is in an active state or an inactive state. In the case where the guild is found not participating in the guild battle, the guild is determined to be in the inactive state. Otherwise, the guild is determined to be in the active state, on the ground that the guild is found participating in the guild battle. Examples of a criterion for determining whether the guild is in the active state or the inactive state (in other words, whether or not the guild is in the inactive state) are as follows.
(1) The guild does not participate in the guild battle at all for a predetermined period (e.g. one week).
(2) No login is performed by any player belonging to the guild for a predetermined period (e.g. one week).
(3) The number of players belonging to the guild is continuously below a minimum number (e.g. 2) of players necessary to establish the guild battle for a predetermined period (e.g. 5 days).
Such a criterion for determining whether or not the guild is in the inactive state is stored in a storage unit1200as criterion information. For instance, in the case where the above-mentioned criterion (2) “no login is performed by any player belonging to the guild for a predetermined period (which is assumed to be one week in the following example)” is stored in the storage unit1200as the criterion information, the status determination unit1400detects the login state of each player belonging to each guild, while referring to the timer1300. The status determination unit1400then determines, for each guild, whether the guild is in the inactive state or the active state, based on the result of detecting the login state of each player.
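As a concrete illustration of how such criterion information might be evaluated, the following minimal Python sketch checks the three example criteria against per-guild activity data. The class names, field names, and thresholds are assumptions made only for illustration; the disclosure describes the storage unit1200and the status determination unit1400functionally, not as code.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class GuildActivity:
    guild_id: str
    member_ids: List[str]
    last_battle_participation: datetime            # for criterion (1)
    last_member_login: datetime                    # for criterion (2)
    below_min_members_since: Optional[datetime]    # for criterion (3)

@dataclass
class CriterionInfo:
    no_battle_period: timedelta = timedelta(days=7)
    no_login_period: timedelta = timedelta(days=7)
    min_members: int = 2
    below_min_period: timedelta = timedelta(days=5)

def is_inactive(guild: GuildActivity, crit: CriterionInfo, now: datetime) -> bool:
    """Return True when any stored criterion marks the guild as inactive."""
    if now - guild.last_battle_participation >= crit.no_battle_period:
        return True                                # criterion (1)
    if now - guild.last_member_login >= crit.no_login_period:
        return True                                # criterion (2)
    if (guild.below_min_members_since is not None
            and len(guild.member_ids) < crit.min_members
            and now - guild.below_min_members_since >= crit.below_min_period):
        return True                                # criterion (3)
    return False
```

A guild for which is_inactive returns True would then be reported to the guild integrated control unit1100as part of the guild activity information.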
FIG.7is a schematic diagram showing a situation where guilds in the inactive state (hereafter referred to as “inactive guilds”) and guilds in the active state (hereafter referred to as “active guilds”) are included in the same league. Upon determining that the state in which no login is performed by any player belonging to each of the guilds G1to G3continues for one week from among the guilds G1to G6in the league, the status determination unit1400determines the guilds G1to G3as inactive guilds. Meanwhile, upon detecting that login is performed by any player belonging to each of the guilds G4to G6at least once during one week, the status determination unit1400determines the guilds G4to G6as active guilds. The status determination unit1400notifies a guild integrated control unit1100of the activity status of each guild detected in this way, as guild activity information. The above-mentioned criterion is merely an example, and the criterion for determining whether or not the guild is in the inactive state may be set or changed according to the game design and the like. For instance, in the case where the state in which the guild participates in the guild battle but the number of attacks made by the guild during the guild battle is extremely small (e.g. the number of actual attacks is 1 despite the number of possible attacks in the battle period being 10 or more) continuously occurs a plurality of times, the opponent guild might lose its motivation for the battle, as in the above-mentioned case. Such a situation may be used as the criterion for determining whether or not the guild is in the inactive state. Besides, the above-mentioned criteria (1) to (3) and other criteria may be combined according to need (e.g. the criteria (2) and (3) may be combined). Referring back toFIG.6, the guild integrated control unit1100has a role of forcibly disbanding any guild (i.e. inactive guild) determined to be in the inactive state, based on the guild activity information notified from the status determination unit1400. The guild integrated control unit1100comprises a guild management unit1110, a player specification unit1120, and a disbandment notification unit1130. The guild management unit1110manages guild-player correspondence information indicating correspondence relations between guilds (e.g. guild identification IDs for identifying guilds) and players (e.g. ID numbers for specifying players), for all guilds joining the battle league. The guild management unit1110also specifies any inactive guild based on the guild activity information notified from the status determination unit1400, and performs a procedure for forcibly disbanding the inactive guild. For example, the above-mentioned guild identification ID of the inactive guild is initialized to disable its function as a guild. As a result, the guilds G1to G3determined as inactive guilds are promptly deleted from the league and only the active guilds G4to G6remain in the league, as shown inFIG.8as an example. The timing of forcible disbandment is desirably when the battle in the league is not directly affected, i.e. when no guild battle is performed. Though the number of guilds in the league is reduced as a result of the forcible disbandment (“6”→“3” in the example shown inFIG.8), the league competition continues without new guilds being added to the league. Note that whether or not to add new guilds and at which timing the inactive guilds are forcibly disbanded may be arbitrarily set or changed. 
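A minimal sketch of the disbandment step itself might look like the following, where flagged guilds are removed from the league only outside battle periods. The league representation and the way a guild identification ID is "initialized" are assumptions for illustration.

```python
def disband_inactive_guilds(league_guilds, guild_activity_info, battle_in_progress):
    """Remove guilds flagged as inactive from the league, deferring the
    operation while a guild battle is running. `guild_activity_info` maps
    guild_id -> True/False (active/inactive), mirroring the information
    passed from the status determination unit; all names are illustrative."""
    if battle_in_progress:
        return []                           # disband only when no battle is held
    disbanded = []
    for guild in list(league_guilds):
        if guild_activity_info.get(guild["guild_id"]) is False:
            league_guilds.remove(guild)     # drop the guild from the league
            guild["guild_id"] = None        # disable its function as a guild
            disbanded.append(guild)
    return disbanded                        # used later to notify former members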
The player specification unit1120specifies the players belonging to the inactive guild forcibly disbanded by the guild management unit1110(i.e. the players belonging to the inactive guild until the disbandment), by referring to the guild-player correspondence information. How to specify the players belonging to the inactive guild may be arbitrarily set or changed according to the game design and the like. In the case where there are a plurality of inactive guilds as shown inFIG.7, the player specification unit1120specifies the players for each inactive guild. The disbandment notification unit1130notifies the players specified by the player specification unit1120as belonging to the inactive guild, of the disbandment of the guild to which the players belong. In detail, the disbandment notification unit1130acquires destination information (e.g. mail address) of the terminal device2of each player to be notified of the disbandment, from the guild management unit1110. The disbandment notification unit1130transmits a disbandment message (for example, seeFIG.9) indicating the disbandment of the belonging guild to the terminal device2of each player, using the acquired destination information. As shown inFIG.9, the disbandment message is a message that reflects the world of the game, and is displayed when each user performs login to access his or her My Home screen after the guild disbandment. At this timing, a menu related to the guild battle displayed on the display screen of each player is grayed out. By reading the disbandment message, each player recognizes that the guild to which he or she belongs has been forcibly disbanded and the player has become an independent player not belonging to any guild. Though this embodiment describes an example of notifying each player belonging to the guild after the guild is disbanded, the timing of notification is not limited to this. For instance, each player belonging to a guild that is likely to be disbanded may be notified by a warning message indicating that the guild is likely to be forcibly disbanded, before the guild is actually disbanded. In the case where the criterion is set to forcibly disband the guild when “no login is performed by any player belonging to the guild for one week”, for example, each player belonging to the guild may be notified by the above-mentioned warning message when no login is detected from any player belonging to the guild for 5 days in a row. Each player belonging to the guild can thus learn that the guild is in danger of forcible disbandment. Here, the number of times the notification is made is not limited to one, and the warning message may be transmitted several times (e.g. after 3 days, after 5 days, and prior to the day of forcible disbandment) until the guild is actually forcibly disbanded. By forcibly disbanding the guild whose activity is not detected for the predetermined period in this way, it is possible to prevent the present problem, i.e. the problem in that the players cannot feel the fun and thrill of the guild battle and lose their motivation to participate in or continue the game because, even after the start time of the guild battle is reached, there is no opponent guild or the opponent guild makes no attack. The reason why a guild needs to be forcibly disbanded lies in that the guild is made up of players whose motivation for the guild battle is low. 
To solve this fundamental problem, the inventors propose to automatically form a guild by players whose motivation for the guild battle is expected to be high (i.e. players who are expected to be active to a certain extent as a guild). The following describes a specific method of achieving such guild automatic formation, with reference toFIG.10and the like. A guild automatic formation function shown inFIG.10is realized by various hardware resources such as the operation processing unit101in the server device100operating with a game-related program and the like stored in a storage medium (storage unit) such as the ROM102, the RAM103, or the external memory106. A-6. Guild Automatic Formation Function A player candidate determination unit2100has a role of detecting the activity status of an independent player not belonging to any group and determining whether or not to acknowledge the independent player as a candidate for a player belonging to a new guild. There are two types of independent players: (1) a player who has just started the game and so does not belong to any guild yet; and (2) a player whose belonging guild has been lost by forcible disbandment and so does not belong to any guild. In the following description, an independent player of the type (1) is referred to as “new player”, and an independent player of the type (2) is referred to as “returning player”, for convenience's sake. The player candidate determination unit2100comprises a first determination unit2110for determining whether or not to acknowledge the new player as a candidate for a player belonging to a new guild, and a second determination unit2120for determining whether or not to acknowledge the returning player as a candidate for a player belonging to a new guild. A storage unit2200stores a criterion (hereafter referred to as “first player determination information”) for acknowledging the new player as a candidate for a player belonging to a new guild and a criterion (hereafter referred to as “second player determination information”) for acknowledging the returning player as a candidate for a player belonging to a new guild. The first player determination information is information defined by an absolute position (i.e. “progress” of the game) from the start of the game, such as “when area3in the game is passed”. The second player determination information is information not defined by the “progress” of the game like the first player determination information but defined by “score” accumulated when playing the game, such as “when the score in the game exceeds 300 points”. The second player determination information is defined not by the “progress” of the game but by the “score” of the game, because the game progress varies among returning users unlike new users (e.g. one returning user has already advanced to area10while another returning user has only advanced to area2) and so it is difficult to define the determination information by the absolute position from the start position. Though different parameters (the “progress” and “score” of the game) are used for the first player determination information and the second player determination information in this embodiment, the same parameter (e.g. only the “score” or only the “progress”) may be used. Moreover, the determination information may be generated by using any other parameter (e.g. the number of items acquired) or combining a plurality of parameters (e.g. the “score” and the “progress”). 
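A small sketch of that candidate determination, with progress-based judgment for new players and score-based judgment for returning players, might be written as follows. The thresholds mirror the examples above (area 3, 300 points), but the data shapes and key names are assumptions.

```python
def is_new_guild_candidate(player, first_determination, second_determination):
    """Return True if an independent player should be acknowledged as a
    candidate for a new guild. `player` is a dict with `is_returning`,
    `score`, and `cleared_area` keys; the two determination dicts stand in
    for the first and second player determination information."""
    if player["is_returning"]:
        # returning player: judged by accumulated score, e.g. more than 300 points
        return player["score"] > second_determination["min_score"]
    # new player: judged by absolute progress from the start, e.g. area 3 passed
    return player["cleared_area"] >= first_determination["cleared_area"]
```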
FIG.11is a schematic diagram for describing the functions of the first determination unit2110and the second determination unit2120. The first determination unit2110successively monitors the game progress of the new player, and compares the game progress (activity status) of the new player with the first player determination information stored in the storage unit2200. The first determination unit2110detects whether or not the game progress clears the criterion indicated by the first player determination information, in detail, whether or not the new player advances to a predetermined stage (e.g. whether or not area3in the game is passed). Upon detecting that the game progress clears the criterion indicated by the first player determination information (see C1shown inFIG.11), the first determination unit2110acknowledges the new player as a candidate (“formation wait new player” shown inFIG.11) for a player belonging to a new guild, and notifies a new guild creation unit2300of the acknowledgement. The second determination unit2120successively monitors the game score of the returning player, and compares the game score (activity status) of the returning player with the second player determination information stored in the storage unit2200. The second determination unit2120detects whether or not the game score clears the criterion indicated by the second player determination information, in detail, whether or not the returning player exceeds a predetermined score (e.g. whether or not the score in the game exceeds 300 points). Upon detecting that the game score clears the criterion indicated by the second player determination information (see C2shown inFIG.11), the second determination unit2120acknowledges the returning player as a candidate (“formation wait returning player” shown inFIG.11) for a player belonging to a new guild, and notifies the new guild creation unit2300of the acknowledgement. The new guild creation unit2300comprises a first creation unit2310and a second creation unit2320, and has a role of automatically forming a new guild in the case where formation wait new players or formation wait returning players satisfy a setting condition. FIG.12is a schematic diagram for describing a new guild automatic formation function by the new guild creation unit2300. The first creation unit2310comprises a formation wait new player list L1for recognizing the status of formation wait new players (e.g. the number of players) (see D1shown inFIG.12). Upon being notified of a formation wait new player by the first determination unit2110, the first creation unit2310adds the formation wait new player to the new player list L1according to the notification. A condition (setting condition) for automatically forming a new guild, which is set by the game operator or the like, is registered in the first creation unit2310. An example of the setting condition is to automatically form the new guild in the case where the number of formation wait new players reaches a specified number (e.g. 6). The first creation unit2310checks the number of formation wait new players registered in the new player list L1at predetermined time intervals (or successively). Upon detecting that the number of formation wait new players registered in the new player list L1is greater than or equal to the specified number (6) (see D2shown inFIG.12), the first creation unit2310forms a new guild NG1(see D3shown inFIG.12). 
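The creation units can be pictured as each maintaining a wait list and cutting a new guild whenever the list reaches the specified number. The sketch below uses 6 members per guild as in the example above; the class and callback names are invented for illustration only.

```python
import itertools

class NewGuildCreator:
    """Collects formation-wait players and forms a new guild once the
    specified number of candidates is reached (sketch of units 2310/2320)."""
    _guild_counter = itertools.count(1)

    def __init__(self, guild_size=6, on_created=None):
        self.guild_size = guild_size
        self.wait_list = []                          # formation-wait player IDs
        self.on_created = on_created or (lambda guild: None)

    def add_candidate(self, player_id):
        self.wait_list.append(player_id)
        if len(self.wait_list) < self.guild_size:
            return None
        members = self.wait_list[:self.guild_size]
        self.wait_list = self.wait_list[self.guild_size:]
        new_guild = {"guild_id": f"NG{next(self._guild_counter)}",
                     "members": members}
        self.on_created(new_guild)                   # e.g. trigger the notification unit
        return new_guild
```

Keeping one instance for formation wait new players and a separate instance for formation wait returning players keeps the two wait lists apart, matching the first and second creation units described above.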
The first creation unit2310then notifies a new guild notification unit2400of the automatic formation of the new guild NG1by the formation wait new players. The second creation unit2320comprises a formation wait returning player list L2for recognizing the status of formation wait returning players (e.g. the number of players). A condition (setting condition) for automatically forming a new guild, which is set by the game operator or the like, is registered in the second creation unit2320. In the same manner as the first creation unit2310, upon detecting that the number of formation wait returning players registered in the returning player list L2is greater than or equal to the specified number (6) (see D2shown inFIG.12), the second creation unit2320forms a new guild NG2(see D3shown inFIG.12). The second creation unit2320then notifies the new guild notification unit2400of the automatic formation of the new guild NG2by the formation wait returning players. Though this embodiment assumes the case where the setting condition (“in the case where the number of formation wait new players is greater than or equal to the specified number”) in the first creation unit2310and the setting condition (“in the case where the number of formation wait returning players is greater than or equal to the specified number”) in the second creation unit2320are common, how these conditions are set may be appropriately changed according to the game design and the like. For example, not only the number of players but also another condition (such as a condition relating to players' experience values or possessed items) may be set in one or both of the creation units. Though this embodiment separates formation wait new players and formation wait returning players, they need not necessarily be separated. For instance, in the case where a total number of formation wait new players and formation wait returning players is greater than or equal to the specified number (e.g. the number of formation wait new players is 4 and the number of formation wait returning players is 2), a new guild including these two types of formation wait players may be automatically formed. In the case where there are a large number of formation wait players, a new guild may be automatically formed by randomly extracting formation wait players. Alternatively, a new guild may be automatically formed by player voting after a predetermined grace period. The new guild notification unit2400, upon being notified of the automatic formation of the new guild by the new guild creation unit2300, notifies each player belonging to the new guild. In detail, the new guild notification unit2400acquires destination information (e.g. mail address) of the terminal device2of each player to be notified, from the storage unit2200or the like. The new guild notification unit2400transmits, to the terminal device2of each player, a formation message (for example, seeFIG.13) indicating that the new guild has been formed and the player belongs to the new guild, using the acquired destination information. As shown inFIG.13, the formation message is a message that reflects the world of the game, and is displayed when each user performs login to access his or her My Home screen after the formation of the new guild. At this timing, a menu related to the guild battle displayed on the display screen of each player becomes active. 
By reading the formation message, each player recognizes that he or she has become a player belonging to the new guild, and can subsequently participate in the guild battle as a player belonging to the new guild. Thus, in this embodiment, a guild is automatically formed only by players whose motivation for the guild battle is expected to be high (i.e. players who are expected to be active to a certain extent as a guild). This can prevent the problem in that, even though a new guild is formed, players belonging to the new guild do not participate in the guild battle and as a result each player belonging to its opponent guild loses his or her motivation to participate in or continue the game. Moreover, even when a guild to which a player belongs is forcibly disbanded for some reason, by showing his or her motivation to participate in the game (e.g. the score exceeds a predetermined score), the player can participate in the guild battle again as a member of a newly formed guild. Hence, even a player whose guild is forcibly disbanded can feel the fun of the game without losing his or her motivation to participate in the game. The invention is not limited to the foregoing embodiment and modifications, and various other modifications are possible without departing from the scope of the invention, as noted above. For example, the structure of the server device100shown inFIG.1is also applicable to each of the client computer21and the mobile terminal22as the terminal device2, though they differ in throughput and the like. Conversely, the client computer21or the mobile terminal22may be used as the server device100. That is, any computer device connected via the network200can function as the server device. Here, instead of realizing all functions of the server device100shown inFIG.1by the terminal device2, such an application (hybrid application) that realizes part of the functions of the server device100by the terminal device2may be implemented as an example. In the server device100, a mass-storage device such as a hard disk or an SSD may be used to serve the same functions as a non-transitory recording medium such as the ROM102, the RAM103, the external memory106, the memory device loaded in the slot drive108, or the like. The storage device may or may not be subjected to redundancy by RAID or the like. Moreover, the storage device may not necessarily be connected to the operation processing unit101via the transmission path120, and may be connected to, for example, another external device via the network200in cloud computing. The network interface in each of the server device100and the terminal device2may be any of a wireless LAN device and a wired LAN device, which may be included inside or be an external device such as a LAN card. The terminal device2may be a game machine connectable to the network200. Alternatively, the terminal device2may be an online karaoke machine. The game settings in the guild battle are not limited to the specific example in the embodiment described above. For example, the guild battle is not limited to a battle between two guilds (e.g. the guilds G1and G2), and may be a battle between three or more guilds. As described above, a game control method, a server device, a game system, and a program according to the invention can significantly increase the fun, amusement, and thrill of a battle event to make the battle event and the whole game more active and enhance players' motivation to participate in or continue the game. 
Therefore, the invention can be widely and effectively used for all games (in particular, games having the element of social game) distributed, provided, and implemented especially in server-client network structures, all software- and hardware-related techniques for distribution, provision, and implementation of the games, and activities such as design, manufacture, and sales thereof.
DESCRIPTION OF REFERENCE NUMERALS
1: game system
2: terminal device
2a: screen
21: client computer (terminal device)
22: mobile terminal (terminal device)
100: server device
101: operation processing unit
102: ROM
103: RAM
104: external interface
105: input unit
106: external memory
107: image processing unit
108: slot drive
109: audio processing unit
110: network interface
111: display monitor
112: speaker device
120: transmission path
200: network (communication line)
1100: guild integrated control unit (group integrated control step, group integrated control unit)
1110: guild management unit
1120: player specification unit (specification step, specification unit)
1130: disbandment notification unit (notification step, notification unit)
1200: storage unit (storage unit)
1300: timer (determination step, determination unit)
1400: status determination unit (determination step, determination unit)
2100: player determination unit (determination step, determination unit)
2110: first determination unit
2120: second determination unit
2200: storage unit (storage unit)
2300: new guild creation unit (creation step, creation unit)
2310: first creation unit
2320: second creation unit
2400: new guild notification unit
43,267
11857872
DETAILED DESCRIPTION Systems and methods are disclosed related to data center distribution and forwarding for cloud computing applications. The systems and methods described herein may be implemented for increasing application performance for any application type, such as, without limitation, streaming, cloud gaming, cloud virtual reality (VR), remote workstation applications, and/or other application types. For example, applications may be sensitive to various network performance parameters such as latency, packet loss, bandwidth, and/or jitter and/or application performance parameters such as session yield and/or application quality of service (QoS) metrics. As such, the system and methods described herein may be implemented in any system to increase the network and application performance for any type of application executing over a network(s)—such as the Internet. In addition, although the border gateway protocol (BGP) is primarily described herein as the protocol to which routing updates or policies are directed to, this is not intended to be limiting, and any suitable protocol may be used—e.g., the routing information protocol (RIP), secure BGP (sBGP), secure origin BGP (soBGP), etc. Infrastructure for high performance applications—such as game streaming, cloud-based VR, etc.—is often distributed among a higher number of data centers such that at least one data center is closer to a user population. By distributing the data centers in this way, the amount of resources in any single data center may be limited and, to effectively accommodate all users, the system of the present disclosure optimizes the distribution and allocation of these resources. For example, network characteristics from the perspective of a user device towards data centers may be measured for the particular application. This measurement may include latency, jitter, packet loss, bandwidth, and/or other network measurements. The measurement may be performed as an application-specific or custom network test that simulates network traffic for the particular application between and among one or more data centers and the user device. As an example, in a cloud game streaming environment, the simulated traffic may be bursty (characterized by brief periods of intermittent, high volume data transmission interspersed among periods with little to no data exchanged), and a traditional bandwidth test may return inaccurate results as to the ability of the user's network to handle the bandwidth requirements of a game streaming application. As such, a customized network test may be performed to determine the ability of the user's network and/or device to handle the network traffic of a game stream for a particular game. In addition, with a heavily distributed infrastructure, it is likely that more than one data center will have acceptable network characteristics—e.g., latency—so the user device may obtain information (e.g., via an application programming interface (API)) about each of the available data centers. Once the available data centers are determined, a preliminary network test may be performed for determining a subset of the data centers—or regions of data centers—that have acceptable preliminary network characteristics (e.g., latency characteristics). A single region, or data center thereof, may then be selected and the more detailed and customized network test, described herein, may be performed using the selected data center. 
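A minimal sketch of that funnel, probing each region, keeping those under a latency budget, and handing the best one to the detailed test, might look like the following. The endpoints, the port, the budget, and the use of TCP connect time as a latency proxy are all assumptions; a production implementation would use an application-level probe against the exposed API.

```python
import socket
import time

def probe_latency_ms(host, port=443, timeout=2.0):
    """Rough round-trip estimate from TCP connect time (illustrative only)."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return float("inf")

def select_test_data_center(region_endpoints, latency_budget_ms=80.0):
    """Return (best_region, acceptable_regions) for the detailed test.

    `region_endpoints` maps a region name to a hostname or IP returned by
    the data-center API; regions over the budget are dropped entirely."""
    latencies = {region: probe_latency_ms(host)
                 for region, host in region_endpoints.items()}
    acceptable = sorted((lat, region) for region, lat in latencies.items()
                        if lat <= latency_budget_ms)
    if not acceptable:
        return None, []
    best_region = acceptable[0][1]
    return best_region, [region for _, region in acceptable]
```

The detailed, application-specific test described next would then run against the data center selected here.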
This test may be used to generate a streaming profile for the user device and the network of the user (e.g., a local network)—e.g., including bandwidth capabilities, image quality or resolution information, bit rate information, audio quality or resolution information, device type (e.g., smartphone, tablet, high-performance computer, etc.). The streaming profile may be generated and/or updated a first time an application is executed on the user device, when changes to network characteristics are identified (e.g., new data centers come on line, a device type changes, a new user network is detected, etc.), periodically, at an interval, etc. Once a streaming profile is generated for a user device, when an application session is started on the user device, a preliminary network test (e.g., a latency test) may be executed. As a result, one or more data centers may be returned that have suitable latency, and a request for hosting the application session may be sent to a suitable data center (e.g., the data center with the lowest latency). A scheduler of the data center may receive this request—including data representing the streaming profile, data representing other suitable data centers determined from the preliminary network test, etc.—and may determine if the data center is able to host the application session (e.g., a game instance of a cloud gaming application). For example, the streaming profile may indicate the quality of the application stream(s) that are supported and requested by the user device (e.g., 4K, 8K, etc.) and the network characteristics (e.g., latency, jitter, packet loss, bandwidth, or other characteristics). In addition, requirements related to the particular application (e.g., a particular game type) may be analyzed to determine if the data center can host the game at the quality of service (QoS) level desired. As such, with these criteria in mind, the scheduler may determine that the data center can or cannot host the game. Where the data center cannot host the game—e.g., due to an incorrect hardware configuration, due to network issues, due to congestion, due to expected congestion, to maximize resources, etc.—the scheduler may forward the request to one or more other suitable data centers (e.g., all at the same time, one at a time, etc.). The forwarded request may include information from the original request of the user device, in addition to supplemental information such as the hardware requirements, the performance requirements of the particular application (e.g., the particular game), etc. The requests may be forwarded until another data center indicates acceptance of the request. Once accepted, the scheduler of the original data center may send a response to the user device indicating (e.g., via an IP address) the data center that will host the application session. The application session may then begin with the assigned data center hosting the application session for the user device. For the selected data center, additional criteria may be taken into account, in embodiments. For example, the data center may be a multihomed data center with two or more ISPs providing Internet access. The data center may then analyze application-specific (and/or instance-specific) QoS metrics of the application session across two or more of the ISPs, and determine the ISP that provides the best QoS.
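One way to picture that per-ISP comparison is a scoring pass over the measured QoS metrics with a small switching margin so the data center does not flap between providers. The metric names, weights, and margin below are assumptions used only to make the idea concrete, not values from this disclosure.

```python
def qos_score(metrics):
    """Collapse per-ISP measurements into a single score (higher is better).
    `metrics` carries latency_ms, jitter_ms, and packet_loss_pct measured
    for the application session; the weighting is illustrative."""
    return -(metrics["latency_ms"]
             + 2.0 * metrics["jitter_ms"]
             + 50.0 * metrics["packet_loss_pct"])

def choose_egress_isp(current_isp, metrics_by_isp, min_gain=5.0):
    """Keep the current DC ISP unless another one scores better by at
    least `min_gain`, in which case the routing policy would be updated."""
    scores = {isp: qos_score(m) for isp, m in metrics_by_isp.items()}
    best = max(scores, key=scores.get)
    current_score = scores.get(current_isp, float("-inf"))
    if best != current_isp and scores[best] - current_score >= min_gain:
        return best
    return current_isp
```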
If the best performing ISP is different from the current ISP (or any other ISP has a higher QoS above a threshold difference), the data center may update import route maps and/or export route maps for a network device(s)(e.g., a core switch(es)) of the data center to control routing of the network traffic through the desired ISP. For example, some user ISPs may not work well with ISPs of the data center, and the ISP switching methods described herein may account for these issues to increase end-to-end network performance. By matching network characteristics for a user device with latency requirements for a particular application type (e.g., a particular game), application sessions may be forwarded or distributed to different data centers without degrading performance. In addition, due to forwarding without degradation, congestion control may be implemented to switch away from data centers forecasted to have more usage (e.g., based on historical data), or to switch away to data centers that may be further away—and have higher latency—but that still match latency requirements for a particular application. As a result, data centers may be reserved for users who are more closely located and/or executing more latency sensitive applications. The systems and methods described herein may provide effective and efficient use of distributed infrastructures, and avoid congestion and hot-spots, while providing an optimized application experience for end users. With reference toFIG.1,FIG.1is an example application session distribution and forwarding system100(alternatively referred to herein as “system100”), in accordance with some embodiments of the present disclosure. It should be understood that this and other arrangements described herein are set forth only as examples. Other arrangements and elements (e.g., machines, interfaces, functions, orders, groupings of functions, etc.) may be used in addition to or instead of those shown, and some elements may be omitted altogether. Further, many of the elements described herein are functional entities that may be implemented as discrete or distributed components or in conjunction with other components, and in any suitable combination and location. Various functions described herein as being performed by entities may be carried out by hardware, firmware, and/or software. For instance, various functions may be carried out by a processor executing instructions stored in memory. In some embodiments, components, features, and/or functionality of the system100may be similar to that of example game streaming system500and/or example computing device600. The system100may include one or more data centers102(e.g., data centers102A,102B, and102C) and/or one or more client devices104that communicate over the Internet106via one or more internet service providers (ISPs), such as data center (DC) ISPs110A and110B, and/or client ISP(s)108. In some embodiments, the system100may correspond to a cloud computing and/or a distributed computing environment. For example, where the host application112corresponds to a cloud gaming application, the example game streaming system500ofFIG.5may include one suitable architecture or platform for supporting the cloud gaming application. The DC ISPs110may provide access to the Internet106(and/or other WAN) for the data centers102and the client ISP(s)108may provide access to the internet for the client device(s)104. 
In some embodiments, one or more of the DC ISP110and/or the client ISP108may be a same ISP, while in other embodiments, one or more of the DC ISPs110and108may differ. In addition, where more than one data center102is implemented, different data centers102may use different DC ISPs110and/or where more than one client device(s)104is implemented, different client devices104may use different client ISPs108. Although referred to as the Internet106, this is not intended to be limiting, and the system100may be implemented for any network types, such as wide area networks (WANs), local area networks (LANs), cellular networks, other network types, or a combination thereof. Although the data centers102are illustrated as being multihomed—e.g., having two DC ISPs110A and110B—this is not intended to be limiting and, in some embodiments, one or more of the data centers102may not be multihomed. In addition, although illustrated as including only one client ISP108, this is not intended to be limiting, and the client device(s)104may include more than one client ISP108. In addition, although only a single link through each ISP is illustrated, this is not intended to be limiting, and in some embodiments an individual ISP—such as the DC ISP110A—may include a plurality of separate routes or edge router access points or nodes for the data centers102. For example, in some embodiments, when switching from one ISP to another, this may correspond to switching from a first route (e.g., via a first edge router of the ISP) through an ISP to a second route (e.g., via a second edge router of the ISP) through the same ISP. The data centers102may host a host application112—e.g., a high performance application, a cloud game streaming application, a virtual reality (VR) content streaming application, a content streaming application, a remote desktop application, etc.—using one or more application programming interface (APIs), for example. The data centers102may include any number of sub-devices such as servers, network attached storage (NAS), APIs, other backend devices, and/or another type of sub-device. For example, the data centers102may include a plurality of computing devices (e.g., servers, storage, etc.) that may include or correspond to some or all of the components of the example computing device600ofFIG.6, described herein. In some embodiments, the host application112may execute using one or more graphics processing units (GPUs) and/or virtual GPUs to support a client application122executing on the client device(s)104. In some embodiments, at least some of the processing of the data centers102may be executed in parallel using one or more parallel processing units, such as GPUs, cores thereof (e.g., CUDA cores), application specific integrated circuits (ASICs), vector processors, massively parallel processors, symmetric multiprocessors, etc. In embodiments where rendering is executed using the data centers102, the data centers102may implement one or more ray-tracing and/or path-tracing techniques to increase the quality of images and/or video in a stream (e.g., where the client device104is capable of displaying high-definition—e.g., 4K, 8K, etc.—graphics, and/or the network characteristics currently support streaming of the same). 
The data centers102may include one or more network devices120—e.g., switches, routers, gateways, hubs, bridges, access points, etc.—that may be configured to direct traffic internal to a network of the data centers102, direct incoming or ingress traffic from the Internet, direct outgoing or egress traffic to the Internet, and/or control, at least in part, routing of the network traffic through various autonomous systems of the Internet (e.g., via edge routers of the autonomous systems using the BGP protocol). For example, to direct ingress traffic from the Internet and/or egress traffic to the Internet, one or more core switches may be implemented to serve as a gateway to the Internet (and/or another WAN). The core switches may include import route maps (e.g., for egress network traffic) and/or export route maps (e.g., for ingress network traffic) that may be configured to aid in routing the network traffic coming to the data centers102and/or leaving from the data centers102. In addition, the routing policies of the core switches—or other network devices120—may include local preference values for particular egress ports and/or ingress ports that may be used by the system100to route traffic along a particular path (e.g., via a preferred DC ISP110). In addition, although the network devices120primarily described herein are core switches, this is not intended to be limiting, and the techniques described herein for the core switches may be additional or alternatively implemented for other types of network devices120without departing from the scope of the present disclosure—such as distribution switches, edge switches, routers, access points, core layer devices, distribution layer devices, access layer devices, etc. In some embodiments, network configurator(s)118may be executed or deployed on the core switches directly—e.g., where the core switches or other network device(s)120support containerized applications. As described herein, in some embodiments, once an application session has been initiated between the client device104and the data center102—e.g., via the client application122and the host application112—a quality of service (QoS) monitor116may monitor the quality of service of the application session over two or more DC ISPs110, and may use the network configurator(s)118and the network device(s)120to direct routing across a selected DC ISP110(e.g., the DC ISP110with the best quality of service). In some examples, this may include switching from a current DC ISP110to a different or alternate DC ISP110to increase the QoS of the application session. In order to accomplish this, in some embodiments, internal policies of the network device(s)120may be updated to favor a particular DC ISP110—e.g., by updating import route maps for BGP. In some examples, in addition to or alternatively from updating import route maps, export route maps may be updated—e.g., by prepending autonomous system prefixes to the BGP headers of packets—to influence ingress traffic from the client device(s)104to also be transmitted via a desired DC ISP110. In some embodiments network protocol attributes may be changed in host application112to prefer one of the DC ISPs110. For example, an attribute of the IP network protocol (e.g., a differentiated services code point (DSCP) field) may be changed which may cause traffic to be routed over a specific egress ISP. 
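For the DSCP-based variant just mentioned, the host application can mark its own sockets so that policy-based routing on a core switch can steer the flow. The helper below uses the standard socket option for the IPv4 TOS byte; the choice of DSCP value and its mapping to a particular egress ISP are deployment-specific assumptions.

```python
import socket

def mark_socket_dscp(sock: socket.socket, dscp: int) -> None:
    """Write a DSCP value into the upper six bits of the IP TOS byte for
    traffic sent on `sock` (IPv4). PBR rules on the network device can then
    match this value and override the default BGP-selected egress."""
    tos = (dscp & 0x3F) << 2
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)

# Usage sketch: mark the latency-sensitive stream of an application session.
# stream_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# mark_socket_dscp(stream_sock, dscp=46)   # 46 (EF) chosen only as an example
```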
In such examples, network routing may be configured through policy based routing (PBR) that routes based on the DSCP values overriding the normal BGP routing (e.g., default or a BGP route that's been specified by the network configurators). The data centers102may include a scheduler114to aid in performing network tests in tandem with a network tester124of the client device(s)104and/or to determine distribution and forwarding of application session host requests from a client device(s)104between and among other data centers102. For example, where a scheduler114of a data center102determines that the data center102is not able to host the application session—e.g., due to capacity limits, congestion, resource deficiencies, load balancing, etc.—the scheduler114may route the request to other data centers102to find a suitable data center to host the application session. Once a suitable data center102is determined, connection information (e.g., an IP address) of the selected data center102may be sent to the client device(s)104such that the client device(s)104executes the application session using the selected data center102. The client device(s)104may include one or more end-user device types, such as a smartphone, a laptop computer, a tablet computer, a desktop computer, a wearable device, a game console, a smart-home device that may include an AI agent or assistant, a virtual or augmented reality device or system, and/or another type of device. In some examples, the client device(s)104may include a combination of devices (e.g., a smartphone and a communicatively coupled smart watch or other wearable device), and the client applications122associated therewith, including interactions with the host application112, may be executed using one or more of the devices (e.g., smartphone application pushes notification to smartwatch application, user provides input to smartwatch, data representative of input is passed to another device of the system100via the smartphone). The client device(s)104may include one or more input/output devices, such as a keyboard, a mouse, a controller(s), a touch screen, a display(s), a speaker(s), a microphone, headphones, a headset (e.g., AR, VR, etc. that may provide inputs based on user movement), and/or other input/output device types. As such, in some embodiments, the application session may include a streams of data from the client device(s)104to the data center(s)102and from the data center(s)102to the client device(s)104. The streams of data may include, without limitation, an audio stream from the client device(s)104, an audio stream from the data center(s)102, a video stream from the data center(s)102, an input stream from the client device(s)104, and/or other stream types. The client device(s)104may include the client application122that may execute—along with the host application112—an application session. As an example, where the client application122and the host application112support cloud gaming, the client application122may access an API of the host application112to execute an instance of a particular game (e.g., an application session). As such, in embodiments, the application specific tests executed by the network tester124may correspond to the particular type of game being played using the cloud gaming system. Similarly, the application session for cloud VR may correspond to a cloud VR instance, the application session for remote desktop may correspond to a remote desktop instance, etc. 
In any embodiments, the applications may be executed using real-time streaming of data—e.g., to satisfy the high-performance, low latency nature of the applications. As such, the network tests may correspond to real-time streaming. For example, the application specific tests may be executed such that a suitable data center is capable of executing real-time streaming of the application session data. The network tester124may execute, or cause the execution of (e.g., by the data center(s)102), one or more network tests. For example, the network tests may include a latency test, a jitter test, a packet loss test, a bandwidth test, another test type, or a combination thereof. In some examples, the network tests may be used to generate an application profile using the profile generator126. For example, for a given client application122, application session, sub-application or program within the client application122, etc., the network tester124may execute application specific tests to determine the capability of the client device(s)104and the associated local network (e.g., including Wi-Fi, Ethernet, the client ISP108, or a combination thereof) for executing the client application122—or an application session thereof. In some embodiments, the profile generator126may generate a profile at the application level—e.g., for cloud gaming, the application profile may be used for any use of the client application122, such as for first person shooter type games, sports games, strategy games, and any other game types. In other embodiments, the application profile may be sub-application type specific—e.g., for cloud gaming, a first application profile may be generated for first person shooter type games, a second application profile for sports games, and so on. In further embodiments, the application profile may correspond to each separate program the client application122executes—e.g., for cloud gaming, an application profile may be generated for each individual game, and for deep learning, an application profile may be generated for each neural network model or deep learning framework, etc. In another embodiment, the application profile may correspond to inter-program types—e.g., for a first person shooter game in a cloud gaming environment, a first application profile may be generated for a capture the flag game type within the first person shooter game and a second application profile may be generated for a team death match game type. As such, the application profile may be generated by the profile generator126—and based on the network tests performed by the network tester124—for different levels of granularity within the client application122. The application profile may correspond to a quality of the stream that the client device(s)104can effectively support during gameplay. For example, the application profile may include information such as an image resolution (e.g., 720p, 1080p, 4K, 8K, etc.), a bit rate, audio quality (e.g., to and from the client device(s)104), and/or other information—as described herein—that may be informative to the system100when selecting a suitable data center102for the application session, and informative to the selected data center102to determine a quality of the stream to the client device(s)104—e.g., to send a highest quality stream supported to not degrade performance, but also to not send too high quality of a stream that the client device(s)104and/or the associated local network could not support. 
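Since the disclosure leaves the profile granularity open, one convenient representation is a hierarchical key with a most-specific-first lookup. Everything below (field names, fallback order) is an illustrative assumption rather than a schema from this disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ProfileKey:
    application: str                  # e.g. "cloud-gaming"
    sub_type: Optional[str] = None    # e.g. "first-person-shooter"
    program: Optional[str] = None     # e.g. a specific game title
    mode: Optional[str] = None        # e.g. "capture-the-flag"

def lookup_application_profile(profiles: dict, key: ProfileKey):
    """Return the most specific stored profile, falling back toward the
    application-level profile when finer-grained entries do not exist."""
    candidates = (
        key,
        ProfileKey(key.application, key.sub_type, key.program),
        ProfileKey(key.application, key.sub_type),
        ProfileKey(key.application),
    )
    for candidate in candidates:
        if candidate in profiles:
            return profiles[candidate]
    return None
```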
As such, the application profile may be used to satisfy quality expectations and requirements of a particular user for a particular application session, and also to load balance (e.g., pass application sessions to less congested data centers102) and conserve resources (e.g., don't allot an entire GPU for the application session when a partial, or virtual GPU (vGPU), is acceptable) at the data centers102. As such, the application profile may be stored—in embodiments—on the client device(s)104such that when submitting a request for a host of the application session to one or more data centers102, the application profile (or information corresponding thereto) may be included in the request. In some embodiments, the application profile may additionally or alternatively be stored on one or more data centers102(or an alternative device not illustrated, such as another computing device of a web support platform that may maintain application profiles for users). In any example, the application profile may be associated with an identity management system (IDS) of the system100. In some embodiments, the application profile may correspond to the results of the network tests. For example, the application profile may include a latency value, a packet loss value, a jitter value, a bandwidth value, and/or other values, and this information may be used by the data center102hosting the application session to determine the quality of the stream (e.g., video stream, audio stream to the client device(s)104, audio stream from the client device(s)104, input stream from the client device(s)104, etc.). In other embodiments, the application profile may correspond to the known (or tested for) supported quality of the stream, and may include data for image quality (e.g., standard dynamic range (SDR) vs. high dynamic range (HDR)), audio quality, encoding, etc. For example, the application profile may include information corresponding to (or may be used to determine information corresponding to) encoder settings, an encoding protocol (e.g., real time messaging protocol (RTMP), common media application format (CMAF), etc.), a video codec type, a frame rate, a keyframe sequence, an audio codec, a bit rate encoding type, a bit rate, a pixel aspect ratio (e.g., square, 4:3, 16:9, 24:11, etc.), a frame type (e.g., progressive scan, two B-frames, one reference frame), an entropy coding type, an audio sample rate, an audio bit rate, and/or other settings. In order to conduct the network test(s), and with reference toFIG.2A, the network tester124of the client device(s)104may execute a preliminary network test—e.g., for latency—between the client device104and a plurality of data centers102. For example, the client device104may query—e.g., via request signals204A,204B, and204C—an exposed API to determine the data centers102available to the client device104, and the client device104may execute (or cause each of the data centers102execute) a preliminary network test. In some embodiments, as illustrated, the API may return DNS information for regions (e.g., regions202A,202B, and202C, which may correspond to regions of a state, country, continent, etc.), and the request signals204may be directed to a region-based IP address. Once the request signal204is received, the request signal204may be forwarded to a specific data center within the region. 
The selection of a data center102within the region may be based on an alternating selection—e.g., for region202A, a first request signal204from a first client device104may be passed to data center102A, a second request signal204from a client device104may be passed to data center102B, a third request signal204may be passed to the data center102A, and so on. In other embodiments, the selection of the data center102for the region202may be based on congestion, capacity, compute resources, hardware types, etc., at each of the data centers102in the region202. Using regions instead of directing requests to each data center may reduce the run-time of the tests as less tests need to be performed (e.g., data centers102of the same region may be assumed to have the same performance characteristics). In some embodiments, however, the exposed API may provide specific data center addresses to the client device104, and the request signals204may not be region-based. Where the preliminary test is for latency, the preliminary network test may return latency values for each of the data centers102—e.g., one or more packets may be transmitted to the data centers102from the client device104, one or more packets may be transmitted from the data centers102to the client device104, or a combination thereof, and the time of transmission may be computed to determine latency scores. Once the latency values are known, the client device104may select—or a scheduler of one or more data centers102may select—an initial host data center102for the client device104to perform network testing. For example, the data center102with the lowest latency may be selected, any of the data centers102under a latency threshold may be selected, and/or another selection criteria may be used. Once a data center102is selected, the client device104may transmit another request signal204to the selected data center102to execute one or more additional network tests. For example, where the data center102A is selected, the client device104may transmit a request for a jitter test, a packet loss test, a bandwidth test, a latency test, and/or another type of test to the data center102A. The scheduler114of the data center102and/or the network tester124of the client device104may then execute one or more network tests. In some embodiments, the network tests may be executed using application specific traffic, as described herein. For example, when executing a bandwidth test, a standard, non-application specific bandwidth test may not return accurate results for a low latency, high performance game within a cloud gaming environment. This may be a result of a standard bandwidth test not resembling the bursty network traffic associated with the game. For example, with a high latency connection for a low latency game, by the time an input to the client device104is transmitted to the hosting data center102, the video stream is updated, and received and displayed by the client device104, the visual may be too late—resulting in a poor experience or causing the user to perform poorly in the game. To account for this, when requesting the bandwidth test, the request may indicate the game that the network test corresponds to, and the data center102A may generate simulated network traffic corresponding to the game. As such, the test results of the bandwidth of the local network of the client device104may more accurately reflect the bandwidth of the network for the particular game. 
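The custom bandwidth test can be pictured as pacing data in frame-sized bursts instead of a continuous flood, so that the measured rate reflects game-stream-like load. The frame size, rate, duration, and the `send` callable below are illustrative assumptions; a real test would also incorporate feedback from the receiving data center.

```python
import time

def bursty_bandwidth_test(send, frame_bytes=120_000, fps=60, duration_s=5.0):
    """Send one 'frame' of payload per tick and idle in between, then
    report the achieved rate in Mbit/s. `send(payload)` is assumed to
    transmit bytes to the test data center over an established connection."""
    interval = 1.0 / fps
    payload = b"\x00" * frame_bytes
    sent_bytes = 0
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        tick = time.monotonic()
        send(payload)                       # burst: one encoded frame's worth
        sent_bytes += frame_bytes
        remaining = interval - (time.monotonic() - tick)
        if remaining > 0:
            time.sleep(remaining)           # idle gap between bursts
    elapsed = time.monotonic() - start
    return (sent_bytes * 8) / (elapsed * 1e6)
```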
This bandwidth information may then be associated with the application profile for the client device104. Similarly, other tests may be executed using simulated application specific traffic in order to generate an application profile that corresponds directly to the ability of the client device104and the associated local network to support application sessions of the application (e.g., game instances of the game in a cloud gaming environment). The application profile may be updated periodically. For example, the application profile may be updated at a recurring interval—e.g., every week, every month, etc.—and/or may be updated based on network characteristic changes. For example, when a new data center102becomes available to the client device104, the application profile may be updated in view of the new data center102. As another example, where a local network of the client device104changes—e.g., due to the user moving the client device104to a new location, updating their home network, etc.—the application profile may be updated. In some embodiments, user-facing information may be generated based on the network tests. For example, where the network connection is not strong, the bandwidth is low, etc., a recommendation may be generated for the user indicating—where the user device104is connected over Wi-Fi—to move closer to the router, use 5 GHz instead of 2.4 GHz, or vice versa, etc. As another example, a recommendation may be to have the user adjust settings of their device, to upgrade their Internet speed with the client ISP108, etc. As such, once the user makes updates, the network tests may be run again in an effort to more accurately determine the capabilities of the client device104and the associated network, so as to provide optimal quality settings for the user for the application. Now referring toFIG.2B, once an application profile has been generated, the client device104may transmit a host request signal206to a data center102requesting that the data center102, or another data center102, host an application session. For example, when a user provides an input to the client device104indicating that the user wants to launch an application session (e.g., wants to participate in an instance of a game in a cloud gaming environment), the client device104may generate and transmit the host request signal206. In some embodiments, similar to the description with respect toFIG.2A, a preliminary network test—e.g., a latency test—may be conducted to determine and select a data center102A to transmit the host request signal206. Once a selected data center102A is determined, the host request signal206may be transmitted to the data center102A. The preliminary network test may return more than one suitable data center102(e.g., more than one data center102having acceptable latency). In such examples, the host request signal206may include data corresponding to addresses (e.g., IP addresses) of the other suitable data centers102, which may be used by the data center102A to determine where to send forward request signals208. In some embodiments, the host request signal206may further include data corresponding to the application profile of the client device104such that the data center102A may determine the quality of the streams, whether the data center102A can host the application session, and/or can include the application profile information in the forward request signals208.
The scheduler114of the data center102A may receive the host request signal206, and determine whether to host or forward the application session. In some embodiments—and based on data in the host request signal206—the scheduler114may determine the application type (or program type, such as a particular game or a game type) and associated application performance requirements. For example, for a particular application to execute properly or effectively, the application may require a latency below a threshold value, a bit rate above a threshold value, etc., and this information may be used—in conjunction with the performance requirements from the application profile—to determine whether the data center102A can host the application session. Where the data center102A cannot host the application session, the application performance requirements, application profile information, and/or other information may be included in the forward request signals208to other data centers102. The scheduler114—e.g., based on a determination that the data center102A cannot host the application session—may transmit one or more forward request signals208to other suitable data centers102. The suitable data centers102may be determined, in embodiments, based on the data from the host request signal206from the client device104that included data corresponding to other data centers102that satisfied one or more values (e.g., latency values) from the preliminary network test(s). In some embodiments, in addition to or alternatively from the result of the preliminary network tests, the suitable data centers102may be selected based on application performance requirements (e.g., it may be known that certain data centers102cannot support particular applications), network performance requirements (e.g., it may be known that certain data centers102cannot support the quality of streams that a client device104and the associated network can handle, or that DC ISPs110of certain data centers102do not interact well with the client ISP108), hardware limitations (e.g., it may be known that certain data centers102do not have hardware for generating streams of required quality, which may include GPUs, CPUs, memory, a particular model or capability of a GPU, CPU, or memory, etc.). In the example ofFIG.2B, the suitable data centers102may include the data centers102B,102C, and102D, and the data center102E may have been determined not to be suitable (e.g., the latency may be too high, due to the data center102E being in the region202C that may be a far distance—e.g., 500+ miles—from a location of the client device104, the data center102E may not have the requisite hardware to generate a high-quality stream, such as a video stream generated using ray-tracing or path-tracing techniques, etc.). The same reasons another data center102—such as the data center102E—may not be suitable, may also be analyzed by the selected data center102A to determine that the data center102A cannot host the application session. The forward request signals208may be transmitted all at once to each suitable data center102, to a subset of the suitable data centers, etc., or may be transmitted to one suitable data center102at a time, and to additional data centers102based on denials of the forward request signals208. 
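One plausible shape of the host-or-forward decision follows as a Python sketch; the capability fields, the SessionRequirements structure, and the ordering of peers are assumptions for illustration, not definitions from the disclosure. The peer list stands in for the suitable data centers that would receive forward request signals208, and a forwarded request would still be confirmed or denied by the receiving scheduler.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class DataCenterCapabilities:
    # Assumed capability summary a scheduler might keep about itself and peers.
    name: str
    supports_ray_tracing: bool
    max_stream_bitrate_mbps: float
    estimated_latency_ms: float
    congested: bool

@dataclass
class SessionRequirements:
    needs_ray_tracing: bool
    min_bitrate_mbps: float
    max_latency_ms: float

def can_host(dc: DataCenterCapabilities, req: SessionRequirements) -> bool:
    """Check whether a data center satisfies the application performance requirements."""
    return ((not req.needs_ray_tracing or dc.supports_ray_tracing)
            and dc.max_stream_bitrate_mbps >= req.min_bitrate_mbps
            and dc.estimated_latency_ms <= req.max_latency_ms)

def host_or_forward(local: DataCenterCapabilities,
                    peers: List[DataCenterCapabilities],
                    req: SessionRequirements) -> Tuple[str, Optional[str]]:
    """Host locally if possible; otherwise forward to the first suitable peer."""
    if can_host(local, req) and not local.congested:
        return ("host", local.name)
    for peer in peers:                      # peers assumed ordered by preference (e.g., latency)
        if can_host(peer, req) and not peer.congested:
            return ("forward", peer.name)   # the peer's own scheduler still confirms or denies
    return ("deny", None)

if __name__ == "__main__":
    req = SessionRequirements(needs_ray_tracing=True, min_bitrate_mbps=30, max_latency_ms=40)
    local = DataCenterCapabilities("102A", False, 60, 18, congested=False)
    peers = [DataCenterCapabilities("102B", True, 60, 25, congested=True),
             DataCenterCapabilities("102D", True, 80, 30, congested=False)]
    print(host_or_forward(local, peers, req))   # ('forward', '102D')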
Where the forward request signals208are transmitted individually, the order in which the forward request signals208are transmitted may be determined based on the preliminary network test data (e.g., lowest latency first, highest latency last), distance from the data centers (e.g., closest data centers102to the data center102A first, furthest data centers102from the data center102A last), and/or some other criteria. For example, a data center102may always forward to a data center in the same region202first, and then may consider other regions202when none of the data centers102in the region202accept the forward request from the forward request signal208. The forward request signals208may be analyzed by the schedulers114of the receiving data centers102, and the schedulers114may determine—in view of all of the information available—whether the data center can host the application session. If the data center102cannot host the application session, the data center102may return a denial signal (not illustrated) to the data center102A. In the example ofFIG.2B, the data centers102B and102C may deny the forward request, and the data center102D may accept the forward request signal. Once accepted, the scheduler114of the data center102A may receive this information, and transmit an acceptance signal (not illustrated) to the client device104. The acceptance signal may include data representative of an address of the data center102D such that the client device104may establish a communicative coupling to the data center102D, and the data center102D may host the application session between the client application122and the host application112executing on the data center102D. With reference toFIG.2C, in some embodiments, the data center102A may be suitable for hosting the application session—e.g., the data center102A may have the requisite hardware, network characteristics, etc. to host the game without degradation—but may still forward the request to another data center102. For example, the data center102A may also monitor for congestion and, when the data center102A—based on historical data or currently available data (e.g., a large queue of users or a large wave of users who have recently requested initialization of the application)—anticipates a spike in traffic at a current time, or a future time during which the application session may be hosted, the data center102A may attempt to forward the request (e.g., via forward request signal208D) to another suitable data center102that does not have the same congestion issues but still satisfies the quality of service requirements of the application session. For example, the scheduler114of the data center102A may store historical data of other data centers, and may determine a data center having less congestion (e.g., data center102A may be on the West Coast of the United States and the host request signal206may be received at 9:00 PM PST, when heavy traffic is normal; however, the data center102E may be on the East Coast of the United States where it is 12:00 AM EST and the traffic is lighter). In some embodiments, in addition to or alternatively from congestion or anticipated traffic, the performance requirements of the application session may be taken into account when determining whether to forward the request.
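The FIG.2C behavior, in which a data center that could host a session still forwards it, can be summarized in a short sketch such as the one below; the utilization figures, the congestion threshold, and the LoadForecast fields are illustrative assumptions. It also reflects the point made directly above and exemplified next, that the session's latency budget factors into whether forwarding is acceptable.

from dataclasses import dataclass

@dataclass
class LoadForecast:
    # Assumed congestion summary derived from historical data and current queue data.
    name: str
    added_latency_ms: float          # latency penalty relative to the local data center
    expected_utilization: float      # 0.0 .. 1.0 over the session's expected time window

def should_forward(local: LoadForecast, peer: LoadForecast,
                   session_max_latency_ms: float, local_latency_ms: float,
                   congestion_threshold: float = 0.85) -> bool:
    """Forward a hostable session when local congestion is anticipated and the
    peer still satisfies the session's latency requirement."""
    local_congested = local.expected_utilization >= congestion_threshold
    peer_latency_ok = (local_latency_ms + peer.added_latency_ms) <= session_max_latency_ms
    peer_has_headroom = peer.expected_utilization < congestion_threshold
    return local_congested and peer_latency_ok and peer_has_headroom

if __name__ == "__main__":
    west = LoadForecast("102A", added_latency_ms=0.0, expected_utilization=0.95)   # local evening peak
    east = LoadForecast("102E", added_latency_ms=45.0, expected_utilization=0.40)  # off-peak peer
    # A latency-tolerant session (120 ms budget) can be forwarded; a 40 ms session cannot.
    print(should_forward(west, east, session_max_latency_ms=120, local_latency_ms=20))  # True
    print(should_forward(west, east, session_max_latency_ms=40, local_latency_ms=20))   # False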
For example, where a game does not require low latency, the data center102A may save itself for users playing latency sensitive games and pass the request to the data center102E to host the application session—e.g., because the data center102E may satisfy the latency requirements of the game even though further from the client device104. As such, in addition to determining whether or not the data centers102can support the performance requirements of the application session without degradation, the scheduler114may analyze additional factors to determine whether to host the application session locally or forward the application session to another data center102. With reference toFIG.2D, once an application session has been initiated, QoS metrics may be monitored by the QoS monitor116to determine network traffic routing settings for the application session. For example, different DC ISPs110may perform better for the data center102D than other DC ISPs110, and the performance of the different DC ISPs110may be monitored during the application session to determine whether any routing updates should be made. As such, the current DC ISP110A may be monitored for QoS (e.g., for application session yield or other application quality metrics) and the DC ISP110B may be monitored for the same or similar metrics. Internal network quality issues for the DC ISPs110may adversely affect performance of the application session, and the network configurator(s)118of the data center102D may receive an indication from the QoS monitor116that network traffic should be switched away from the DC ISP110A and to the DC ISP110B. As such, if the application session performance falls below a threshold, and the performance over another DC ISP110is better, the QoS monitor116may submit an alert to switch network traffic to a network path that includes the better performing DC ISP110B. In some embodiments, the application performance metrics may be queried from application streaming metrics by the QoS monitor. The QoS monitor116may monitor network and/or application performance using network performance metrics (e.g., latency, packet loss, jitter, cost associated with different DC ISPs110, capacity associated with different DC ISPs110, etc.) and/or application performance metrics (e.g., streaming session yield, application QoS metrics, etc.) as inputs. These inputs may be determined by transmitting test probes (e.g., pings) and/or simulating application specific network traffic between and among the client device104and the data center102, and analyzing the resulting communications to determine the network and/or application performance metrics. For example, a REST interface (e.g., an API) may be exposed to enable the QoS monitor116to publish network path information such as an actual path information (e.g., which autonomous systems are configured for communication with other autonomous systems), network performance metrics (and/or data that may be analyzed to determine the same), and/or application performance metrics (or data that may be analyzed to determine the same). The QoS monitors116may be distributed within the system100depending on the type of network traffic information and/or the devices that the network traffic is to be monitored between. 
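A hedged sketch of the FIG.2D comparison is given below, with assumed metric names (session_yield, packet_loss_pct) and an assumed yield floor; it raises a switch alert when the currently used DC ISP110underperforms and a measured alternative looks better, leaving the actual routing change to the network configurator(s)118. The measurements themselves would be gathered by monitor portions running on the data center side and on the client side, as discussed next.

from dataclasses import dataclass
from typing import Optional

@dataclass
class IspMetrics:
    # Assumed per-ISP measurements gathered by data-center-side and client-side probes.
    isp: str
    session_yield: float    # fraction of session time meeting quality targets
    latency_ms: float
    packet_loss_pct: float

def pick_better_isp(current: IspMetrics, alternate: IspMetrics,
                    yield_floor: float = 0.95) -> Optional[str]:
    """Return the ISP to switch traffic to, or None to keep the current network path."""
    current_degraded = (current.session_yield < yield_floor
                        or current.packet_loss_pct > 1.0)
    alternate_better = (alternate.session_yield > current.session_yield
                        and alternate.packet_loss_pct <= current.packet_loss_pct)
    if current_degraded and alternate_better:
        return alternate.isp
    return None

if __name__ == "__main__":
    isp_a = IspMetrics("DC-ISP-110A", session_yield=0.91, latency_ms=34, packet_loss_pct=1.8)
    isp_b = IspMetrics("DC-ISP-110B", session_yield=0.98, latency_ms=29, packet_loss_pct=0.3)
    switch_to = pick_better_isp(isp_a, isp_b)
    if switch_to:
        # In the system described above, an alert of this kind would be delivered
        # to the network configurator(s)118, which apply the actual routing change.
        print(f"alert: switch application session traffic to {switch_to}")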
As such, the QoS monitor116may include a monitor executing on the data center102(e.g., for monitoring egress and/or ingress traffic between the data center102and the client device(s)104, and communicating information back to the QoS monitor116of the client device104) and/or a QoS monitor116executing on the client device(s)104(e.g., for testing traffic between the client device(s)104and the data center102, and communicating information back to the QoS monitor116of the data center102). In some embodiments, a single QoS monitor116may be split among two or more of the data centers102and/or the client device(s)104. For example, a first portion of a QoS monitor116may execute on the data center102and a second portion may execute on the client device104, and communications may be exchanged between the two for monitoring various network paths and testing end-to-end network and/or application performance metrics. Once an updated routing path is determined, the changes in network routing may be published or posted as messages to the network configurator(s)118. The network configurator(s)118may implement the routing updates on target network endpoints (e.g., network device(s)120), such as by updating import route maps and/or export route maps of core switches (e.g., by updating local preference values for a particular egress port and/or prepending autonomous system prefixes to export route maps for controlling ingress traffic using a route injector(s)). Now referring toFIGS.3-4, each block of methods300and400, described herein, comprises a computing process that may be performed using any combination of hardware, firmware, and/or software. For instance, various functions may be carried out by a processor executing instructions stored in memory. The methods300and400may also be embodied as computer-usable instructions stored on computer storage media. The methods300and400may be provided by a standalone application, a service or hosted service (standalone or in combination with another hosted service), or a plug-in to another product, to name a few. In addition, methods300and400are described, by way of example, with respect to the system100ofFIG.1. However, these methods may additionally or alternatively be executed by any one system, or any combination of systems, including, but not limited to, those described herein. Now referring toFIG.3,FIG.3is a flow diagram showing a method300for application session distribution and forwarding, in accordance with some embodiments of the present disclosure. The method300, at block B302, includes receiving data representative of a first request for a host of an application session. For example, the data center102A may receive the host request signal206from the client device104, which may include data indicating the application type (e.g., the specific game) and the application profile for the client device104. The method300, at block B304, includes determining application performance requirements for an application associated with the application session. For example, the data center102A may determine the application type—or the program type, such as a specific game in a game streaming environment—and the associated performance requirements for the application (e.g., latency, hardware, etc.). The method300, at block B306, includes determining, based on an analysis of a streaming profile of a user device and the application performance requirements, not to host the application session at a first data center. 
For example, the data center102A may determine not to host the application session based on determining that the network quality, hardware resources, congestion issues, and/or other criteria are not satisfied that would allow the application performance requirements and the application profile requirements to be met. The method300, at block B308, includes sending data representative of a second request to host the application session to a second data center. For example, the data center102A may transmit a forward request signal208to the data center102B including data corresponding to the application performance requirements and the application profile performance requirements. The method300, at block B310, includes receiving data representative of an acceptance to host the application session from the second data center. For example, the data center102A may receive an acceptance signal from the data center102B in response to the forward request signal208. The method300, at block B312, includes causing, based on the acceptance, network traffic corresponding to the application session to be routed to the second data center. For example, the data center102A may transmit data to the client device104indicating that the application session will be hosted by the data center102B, and the application session may be executed using the data center102B—e.g., network traffic corresponding to the application session may be transmitted to the data center102B. With reference now toFIG.4,FIG.4is a flow diagram showing a method400for application profile generation, in accordance with some embodiments of the present disclosure. The method400, at block B402, includes determining a plurality of data centers having an associated latency less than a threshold latency. For example, the client device104may execute a preliminary network test with a plurality of data centers102to determine suitable data centers for an application session. The method400, at block B404, includes transmitting a request to execute a network performance test(s) customized to an application. For example, once a data center102is selected, the client device104may transmit a request for executing one or more network tests using simulated traffic for the application that will be the subject of an application session. The method400, at block B406, includes exchanging network traffic and associated performance metrics to the network performance test(s). For example, data representing application network traffic may be transmitted between the client device104and the data center102, and data gleaned from the tests may be shared between and among the devices. The method400, at block B408, includes generating an application profile corresponding to the application based on the associated performance metrics. For example, the profile generator126may generate an application profile corresponding to the application type (or sub-program thereof) based on the associated performance metrics from the network test(s) (e.g., latency, packet loss, jitter, bandwidth, etc.). For example, video stream quality, audio stream quality, input stream quality, and/or other quality determination may be made. The method400, at block B410, includes transmitting data representative of a request for a host of an application session, the request including information corresponding to the application profile. For example, the application profile information may be included in a host request signal206to a data center102when finding a suitable host for an application session. 
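To tie the blocks of method400together, the following sketch strings them into a single client-side routine. The helper callables (preliminary_latency_test, run_custom_tests, build_profile, send_host_request) and the 50 ms threshold are placeholders standing in for the exchanges described above, not interfaces defined by the disclosure.

def generate_profile_and_request_host(data_center_addrs, app_id,
                                      preliminary_latency_test,
                                      run_custom_tests,
                                      build_profile,
                                      send_host_request,
                                      latency_threshold_ms=50.0):
    """Client-side flow roughly corresponding to blocks B402-B410 of method400."""
    # B402: keep only data centers whose measured latency is under the threshold.
    latencies = {addr: preliminary_latency_test(addr) for addr in data_center_addrs}
    suitable = [addr for addr, ms in latencies.items() if ms < latency_threshold_ms]
    if not suitable:
        raise RuntimeError("no data center satisfies the latency threshold")

    # B404/B406: ask the best candidate to run application-specific tests and
    # exchange the simulated traffic; collect the resulting metrics.
    primary = min(suitable, key=latencies.get)
    metrics = run_custom_tests(primary, app_id)

    # B408: fold the metrics into an application profile.
    profile = build_profile(app_id, metrics)

    # B410: include the profile (and fallback candidates) in the host request.
    return send_host_request(primary, profile, fallback_addrs=suitable)

if __name__ == "__main__":
    demo = generate_profile_and_request_host(
        ["dc-102a", "dc-102b", "dc-102e"], "example-game",
        preliminary_latency_test=lambda addr: {"dc-102a": 18.0, "dc-102b": 31.0, "dc-102e": 92.0}[addr],
        run_custom_tests=lambda addr, app: {"latency_ms": 18.0, "bandwidth_mbps": 47.0},
        build_profile=lambda app, metrics: {"app_id": app, **metrics},
        send_host_request=lambda addr, profile, fallback_addrs: {"to": addr, "profile": profile,
                                                                 "fallbacks": fallback_addrs},
    )
    print(demo)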
Example Game Streaming System Now referring toFIG.5,FIG.5is an example system diagram for a game streaming system500, in accordance with some embodiments of the present disclosure.FIG.5includes game server(s)502(which may include similar components, features, and/or functionality to the example computing device600ofFIG.6), client device(s)504(which may include similar components, features, and/or functionality to the example computing device600ofFIG.6), and network(s)506(which may be similar to the network(s) described herein). In some embodiments of the present disclosure, the system500may be implemented. In the system500, for a game session, the client device(s)504may only receive input data in response to inputs to the input device(s), transmit the input data to the game server(s)502, receive encoded display data from the game server(s)502, and display the display data on the display524. As such, the more computationally intense computing and processing is offloaded to the game server(s)502(e.g., rendering—in particular ray or path tracing—for graphical output of the game session is executed by the GPU(s) of the game server(s)502). In other words, the game session is streamed to the client device(s)504from the game server(s)502, thereby reducing the requirements of the client device(s)504for graphics processing and rendering. For example, with respect to an instantiation of a game session, a client device504may be displaying a frame of the game session on the display524based on receiving the display data from the game server(s)502. The client device504may receive an input to one of the input device(s) and generate input data in response. The client device504may transmit the input data to the game server(s)502via the communication interface520and over the network(s)506(e.g., the Internet), and the game server(s)502may receive the input data via the communication interface518. The CPU(s) may receive the input data, process the input data, and transmit data to the GPU(s) that causes the GPU(s) to generate a rendering of the game session. For example, the input data may be representative of a movement of a character of the user in a game, firing a weapon, reloading, passing a ball, turning a vehicle, etc. The rendering component512may render the game session (e.g., representative of the result of the input data) and the render capture component514may capture the rendering of the game session as display data (e.g., as image data capturing the rendered frame of the game session). The rendering of the game session may include ray or path-traced lighting and/or shadow effects, computed using one or more parallel processing units—such as GPUs, which may further employ the use of one or more dedicated hardware accelerators or processing cores to perform ray or path-tracing techniques—of the game server(s)502. The encoder516may then encode the display data to generate encoded display data and the encoded display data may be transmitted to the client device504over the network(s)506via the communication interface518. The client device504may receive the encoded display data via the communication interface520and the decoder522may decode the encoded display data to generate the display data. The client device504may then display the display data via the display524. Example Computing Device FIG.6is a block diagram of an example computing device(s)600suitable for use in implementing some embodiments of the present disclosure. 
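Before turning to the computing device ofFIG.6, the round trip ofFIG.5(input capture on the client, server-side simulation, rendering and encoding, client-side decoding and display) can be condensed into the short sketch below; the class and method names are illustrative stand-ins, and zlib compression is used only as a placeholder for the encoder516and decoder522pair.

import zlib

class GameServer:
    """Stand-in for the game server(s)502: processes input, renders, and encodes."""
    def __init__(self):
        self.world_state = 0

    def process_input(self, input_data: dict) -> None:
        # Placeholder for the CPU-side simulation step driven by the received input data.
        self.world_state += input_data.get("move", 0)

    def render_and_encode(self) -> bytes:
        # Placeholder for GPU rendering, render capture, and the encoder516.
        frame = f"frame:{self.world_state}".encode()
        return zlib.compress(frame)

class ClientDevice:
    """Stand-in for the client device(s)504: sends input, decodes, and displays."""
    def decode_and_display(self, encoded: bytes) -> str:
        frame = zlib.decompress(encoded).decode()
        return f"displaying {frame}"

def stream_one_step(server: GameServer, client: ClientDevice, input_data: dict) -> str:
    # Input travels to the server over the network(s)506; encoded display data returns.
    server.process_input(input_data)
    encoded = server.render_and_encode()
    return client.decode_and_display(encoded)

if __name__ == "__main__":
    server, client = GameServer(), ClientDevice()
    print(stream_one_step(server, client, {"move": 1}))
    print(stream_one_step(server, client, {"move": 2}))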
Computing device600may include an interconnect system602that directly or indirectly couples the following devices: memory604, one or more central processing units (CPUs)606, one or more graphics processing units (GPUs)608, a communication interface610, input/output (I/O) ports612, input/output components614, a power supply616, one or more presentation components618(e.g., display(s)), and one or more logic units620. Although the various blocks ofFIG.6are shown as connected via the interconnect system602with lines, this is not intended to be limiting and is for clarity only. For example, in some embodiments, a presentation component618, such as a display device, may be considered an I/O component614(e.g., if the display is a touch screen). As another example, the CPUs606and/or GPUs608may include memory (e.g., the memory604may be representative of a storage device in addition to the memory of the GPUs608, the CPUs606, and/or other components). In other words, the computing device ofFIG.6is merely illustrative. Distinction is not made between such categories as “workstation,” “server,” “laptop,” “desktop,” “tablet,” “client device,” “mobile device,” “hand-held device,” “game console,” “electronic control unit (ECU),” “virtual reality system,” and/or other device or system types, as all are contemplated within the scope of the computing device ofFIG.6. The interconnect system602may represent one or more links or busses, such as an address bus, a data bus, a control bus, or a combination thereof. The interconnect system602may include one or more bus or link types, such as an industry standard architecture (ISA) bus, an extended industry standard architecture (EISA) bus, a video electronics standards association (VESA) bus, a peripheral component interconnect (PCI) bus, a peripheral component interconnect express (PCIe) bus, and/or another type of bus or link. In some embodiments, there are direct connections between components. As an example, the CPU606may be directly connected to the memory604. Further, the CPU606may be directly connected to the GPU608. Where there is direct, or point-to-point connection between components, the interconnect system602may include a PCIe link to carry out the connection. In these examples, a PCI bus need not be included in the computing device600. The memory604may include any of a variety of computer-readable media. The computer-readable media may be any available media that may be accessed by the computing device600. The computer-readable media may include both volatile and nonvolatile media, and removable and non-removable media. By way of example, and not limitation, the computer-readable media may comprise computer-storage media and communication media. The computer-storage media may include both volatile and nonvolatile media and/or removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, and/or other data types. For example, the memory604may store computer-readable instructions (e.g., that represent a program(s) and/or a program element(s), such as an operating system. 
Computer-storage media may include, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information and which may be accessed by computing device600. As used herein, computer storage media does not comprise signals per se. The communication media may embody computer-readable instructions, data structures, program modules, and/or other data types in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” may refer to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, the communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media. The CPU(s)606may be configured to execute at least some of the computer-readable instructions to control one or more components of the computing device600to perform one or more of the methods and/or processes described herein. The CPU(s)606may each include one or more cores (e.g., one, two, four, eight, twenty-eight, seventy-two, etc.) that are capable of handling a multitude of software threads simultaneously. The CPU(s)606may include any type of processor, and may include different types of processors depending on the type of computing device600implemented (e.g., processors with fewer cores for mobile devices and processors with more cores for servers). For example, depending on the type of computing device600, the processor may be an Advanced RISC Machines (ARM) processor implemented using Reduced Instruction Set Computing (RISC) or an x86 processor implemented using Complex Instruction Set Computing (CISC). The computing device600may include one or more CPUs606in addition to one or more microprocessors or supplementary co-processors, such as math co-processors. In addition to or alternatively from the CPU(s)606, the GPU(s)608may be configured to execute at least some of the computer-readable instructions to control one or more components of the computing device600to perform one or more of the methods and/or processes described herein. One or more of the GPU(s)608may be an integrated GPU (e.g., with one or more of the CPU(s)606) and/or one or more of the GPU(s)608may be a discrete GPU. In embodiments, one or more of the GPU(s)608may be a coprocessor of one or more of the CPU(s)606. The GPU(s)608may be used by the computing device600to render graphics (e.g., 3D graphics) or perform general purpose computations. For example, the GPU(s)608may be used for General-Purpose computing on GPUs (GPGPU). The GPU(s)608may include hundreds or thousands of cores that are capable of handling hundreds or thousands of software threads simultaneously. The GPU(s)608may generate pixel data for output images in response to rendering commands (e.g., rendering commands from the CPU(s)606received via a host interface). The GPU(s)608may include graphics memory, such as display memory, for storing pixel data or any other suitable data, such as GPGPU data. The display memory may be included as part of the memory604.
The GPU(s)608may include two or more GPUs operating in parallel (e.g., via a link). The link may directly connect the GPUs (e.g., using NVLINK) or may connect the GPUs through a switch (e.g., using NVSwitch). When combined together, each GPU608may generate pixel data or GPGPU data for different portions of an output or for different outputs (e.g., a first GPU for a first image and a second GPU for a second image). Each GPU may include its own memory, or may share memory with other GPUs. In addition to or alternatively from the CPU(s)606and/or the GPU(s)608, the logic unit(s)620may be configured to execute at least some of the computer-readable instructions to control one or more components of the computing device600to perform one or more of the methods and/or processes described herein. In embodiments, the CPU(s)606, the GPU(s)608, and/or the logic unit(s)620may discretely or jointly perform any combination of the methods, processes and/or portions thereof. One or more of the logic units620may be part of and/or integrated in one or more of the CPU(s)606and/or the GPU(s)608and/or one or more of the logic units620may be discrete components or otherwise external to the CPU(s)606and/or the GPU(s)608. In embodiments, one or more of the logic units620may be a coprocessor of one or more of the CPU(s)606and/or one or more of the GPU(s)608. Examples of the logic unit(s)620include one or more processing cores and/or components thereof, such as Tensor Cores (TCs), Tensor Processing Units (TPUs), Pixel Visual Cores (PVCs), Vision Processing Units (VPUs), Graphics Processing Clusters (GPCs), Texture Processing Clusters (TPCs), Streaming Multiprocessors (SMs), Tree Traversal Units (TTUs), Artificial Intelligence Accelerators (AIAs), Deep Learning Accelerators (DLAs), Arithmetic-Logic Units (ALUs), Application-Specific Integrated Circuits (ASICs), Floating Point Units (FPUs), input/output (I/O) elements, peripheral component interconnect (PCI) or peripheral component interconnect express (PCIe) elements, and/or the like. The communication interface610may include one or more receivers, transmitters, and/or transceivers that enable the computing device600to communicate with other computing devices via an electronic communication network, including wired and/or wireless communications. The communication interface610may include components and functionality to enable communication over any of a number of different networks, such as wireless networks (e.g., Wi-Fi, Z-Wave, Bluetooth, Bluetooth LE, ZigBee, etc.), wired networks (e.g., communicating over Ethernet or InfiniBand), low-power wide-area networks (e.g., LoRaWAN, SigFox, etc.), and/or the Internet. The I/O ports612may enable the computing device600to be logically coupled to other devices including the I/O components614, the presentation component(s)618, and/or other components, some of which may be built into (e.g., integrated in) the computing device600. Illustrative I/O components614include a microphone, mouse, keyboard, joystick, game pad, game controller, satellite dish, scanner, printer, wireless device, etc. The I/O components614may provide a natural user interface (NUI) that processes air gestures, voice, or other physiological inputs generated by a user. In some instances, inputs may be transmitted to an appropriate network element for further processing.
An NUI may implement any combination of speech recognition, stylus recognition, facial recognition, biometric recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, and touch recognition (as described in more detail below) associated with a display of the computing device600. The computing device600may include depth cameras, such as stereoscopic camera systems, infrared camera systems, RGB camera systems, touchscreen technology, and combinations of these, for gesture detection and recognition. Additionally, the computing device600may include accelerometers or gyroscopes (e.g., as part of an inertial measurement unit (IMU)) that enable detection of motion. In some examples, the output of the accelerometers or gyroscopes may be used by the computing device600to render immersive augmented reality or virtual reality. The power supply616may include a hard-wired power supply, a battery power supply, or a combination thereof. The power supply616may provide power to the computing device600to enable the components of the computing device600to operate. The presentation component(s)618may include a display (e.g., a monitor, a touch screen, a television screen, a heads-up-display (HUD), other display types, or a combination thereof), speakers, and/or other presentation components. The presentation component(s)618may receive data from other components (e.g., the GPU(s)608, the CPU(s)606, etc.), and output the data (e.g., as an image, video, sound, etc.). Example Network Environments Network environments suitable for use in implementing embodiments of the disclosure may include one or more client devices, servers, network attached storage (NAS), other backend devices, and/or other device types. The client devices, servers, and/or other device types (e.g., each device) may be implemented on one or more instances of the computing device(s)600ofFIG.6—e.g., each device may include similar components, features, and/or functionality of the computing device(s)600. Components of a network environment may communicate with each other via a network(s), which may be wired, wireless, or both. The network may include multiple networks, or a network of networks. By way of example, the network may include one or more Wide Area Networks (WANs), one or more Local Area Networks (LANs), one or more public networks such as the Internet and/or a public switched telephone network (PSTN), and/or one or more private networks. Where the network includes a wireless telecommunications network, components such as a base station, a communications tower, or even access points (as well as other components) may provide wireless connectivity. Compatible network environments may include one or more peer-to-peer network environments—in which case a server may not be included in a network environment—and one or more client-server network environments—in which case one or more servers may be included in a network environment. In peer-to-peer network environments, functionality described herein with respect to a server(s) may be implemented on any number of client devices. In at least one embodiment, a network environment may include one or more cloud-based network environments, a distributed computing environment, a combination thereof, etc. A cloud-based network environment may include a framework layer, a job scheduler, a resource manager, and a distributed file system implemented on one or more servers, which may include one or more core network servers and/or edge servers.
A framework layer may include a framework to support software of a software layer and/or one or more application(s) of an application layer. The software or application(s) may respectively include web-based service software or applications. In embodiments, one or more of the client devices may use the web-based service software or applications (e.g., by accessing the service software and/or applications via one or more application programming interfaces (APIs)). The framework layer may be, but is not limited to, a type of free and open-source software web application framework, such as one that may use a distributed file system for large-scale data processing (e.g., “big data”). A cloud-based network environment may provide cloud computing and/or cloud storage that carries out any combination of computing and/or data storage functions described herein (or one or more portions thereof). Any of these various functions may be distributed over multiple locations from central or core servers (e.g., of one or more data centers that may be distributed across a state, a region, a country, the globe, etc.). If a connection to a user (e.g., a client device) is relatively close to an edge server(s), a core server(s) may designate at least a portion of the functionality to the edge server(s). A cloud-based network environment may be private (e.g., limited to a single organization), may be public (e.g., available to many organizations), and/or a combination thereof (e.g., a hybrid cloud environment). The client device(s) may include at least some of the components, features, and functionality of the example computing device(s)600described herein with respect toFIG.6. By way of example and not limitation, a client device may be embodied as a Personal Computer (PC), a laptop computer, a mobile device, a smartphone, a tablet computer, a smart watch, a wearable computer, a Personal Digital Assistant (PDA), an MP3 player, a virtual reality headset, a Global Positioning System (GPS) or device, a video player, a video camera, a surveillance device or system, a vehicle, a boat, a flying vessel, a virtual machine, a drone, a robot, a handheld communications device, a hospital device, a gaming device or system, an entertainment system, a vehicle computer system, an embedded system controller, a remote control, an appliance, a consumer electronic device, a workstation, an edge device, any combination of these delineated devices, or any other suitable device. The disclosure may be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program modules, being executed by a computer or other machine, such as a personal data assistant or other handheld device. Generally, program modules, including routines, programs, objects, components, data structures, etc., refer to code that performs particular tasks or implements particular abstract data types. The disclosure may be practiced in a variety of system configurations, including hand-held devices, consumer electronics, general-purpose computers, more specialty computing devices, etc. The disclosure may also be practiced in distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network. As used herein, a recitation of “and/or” with respect to two or more elements should be interpreted to mean only one element, or a combination of elements.
For example, “element A, element B, and/or element C” may include only element A, only element B, only element C, element A and element B, element A and element C, element B and element C, or elements A, B, and C. In addition, “at least one of element A or element B” may include at least one of element A, at least one of element B, or at least one of element A and at least one of element B. Further, “at least one of element A and element B” may include at least one of element A, at least one of element B, or at least one of element A and at least one of element B. The subject matter of the present disclosure is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this disclosure. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms “step” and/or “block” may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described.
73,040
11857873
DETAILED DESCRIPTION Firstly, a configuration in which a computer serves as a mobile terminal, and a game program is implemented as a so-called native application (native game) and may be executed by the mobile terminal may be described in exemplary embodiments with reference toFIGS.1to4. Subsequently, a configuration in which the computer serves as a server device, the game program is implemented as a so-called web application (web game), part or all of the game program may be executed by the server device, and the result of processes executed by the server device may be returned to the mobile terminal may be described in exemplary embodiments with reference toFIG.5. Exemplary embodiments may be described with reference toFIGS.1to4. A puzzle game provided in the disclosure may be a game with the following exemplary features. If it is determined that puzzle attribute information and/or puzzle arrangement information satisfies a predetermined condition, an object displayed on or around a puzzle element having the puzzle attribute information and/or the puzzle arrangement information that satisfies the predetermined condition may be moved (for example, by falling). The object may be moved (for example, by falling) to produce a predetermined effect in the puzzle game (for example, an effect of causing the object to collide with a game content (for example, an enemy character) to damage the game content) to earn a score. ExemplaryFIG.1is a block diagram illustrating an example configuration of main components of a mobile terminal100. The mobile terminal (computer)100is an information processing device capable of executing a game program according to this embodiment. The information processing device may be any device capable of executing processes included in the game program, and may be implemented using the mobile terminal100or any other electronic device such as a smartphone, a tablet terminal, a mobile phone (or feature phone), a home video game console, a personal computer, or the like. As illustrated by way of example in exemplaryFIG.1, the mobile terminal100may include a control unit10, an input unit40, a display unit50, and a storage unit30. The control unit10may collectively control various functions of the mobile terminal100. The control unit10may include an input obtaining unit11, a puzzle display information output unit12, an object display information output unit13, a puzzle information update unit14, a time restriction unit15, a puzzle element condition determination unit16, a puzzle element deletion unit17, an object moving unit18, an effect producing unit19, and a display processing unit20. The puzzle display information output unit (puzzle display information output function)12may output puzzle display information5dfor causing a number of puzzle elements forming part of a puzzle to be displayed in a game field so that at least one puzzle element among the number of puzzle elements is selectable in accordance with data input by a player. The game field may have three-dimensional information concerning width, height, and length. The puzzle display information output unit12can also output puzzle display information5dfor causing the number of puzzle elements to be displayed in the game field so that at least one puzzle element among the number of puzzle elements is selectable along a path drawn through an operation on a predetermined input screen. The puzzle display information5doutput from the puzzle display information output unit12may be input to the display processing unit20. 
The display processing unit20may generate screen information6aconcerning a puzzle game screen in accordance with the puzzle display information5d, and may output the screen information6ato the display unit50. The object display information output unit (object display information output function)13may output object display information5efor causing an object to be displayed on a screen on which the number of puzzle elements are displayed. The object display information5eoutput from the object display information output unit13may be input to the display processing unit20. The display processing unit20may generate screen information6aconcerning a puzzle game screen in accordance with the object display information5e, and may output the screen information6ato the display unit50. The puzzle information update unit (puzzle information update function)14may update puzzle attribute information5f1associated with a selected puzzle element and/or puzzle arrangement information5f2associated with the selected puzzle element and indicate the arrangement of the selected puzzle element. The time restriction unit (time restriction function)15may be configured to restrict the time during which the player can select at least one puzzle element. The puzzle attribute information5f1and/or the puzzle arrangement information5f2output from the puzzle display information output unit12may be input to the puzzle element condition determination unit16. Alternatively, the puzzle attribute information5f1and/or the puzzle arrangement information5f2may be input to the puzzle information update unit14, and the puzzle information update unit14may update the puzzle attribute information5f1and/or the puzzle arrangement information5f2to produce puzzle update information5g, and output the puzzle update information5g. The puzzle update information5gmay be input to the puzzle element condition determination unit16. Time restriction information5houtput from the time restriction unit15may be input to the puzzle element condition determination unit16. After the puzzle attribute information5f1and/or the puzzle arrangement information5f2has been updated by the puzzle information update unit14, the puzzle element condition determination unit (puzzle element condition determination function)16may determine whether or not the updated puzzle attribute information5f1and/or puzzle arrangement information5f2satisfies a predetermined condition. If the puzzle element condition determination unit (puzzle element condition determination function)16determines that the puzzle attribute information5f1and/or the puzzle arrangement information5f2satisfies the predetermined condition, the puzzle element deletion unit (puzzle element deletion function)17can delete from the game field a puzzle element associated with the puzzle attribute information5f1and/or the puzzle arrangement information5f2that satisfies the predetermined condition. The puzzle attribute information5f1and/or the puzzle arrangement information5f2output from the puzzle display information output unit12, the puzzle update information5goutput from the puzzle information update unit14, or the time restriction information5houtput from the time restriction unit15may be input to the puzzle element condition determination unit (puzzle element condition determination function)16. 
Puzzle element condition positive-determination information5iindicating that the puzzle element condition determination unit16determines that the puzzle attribute information5f1and/or puzzle arrangement information5f2satisfies the predetermined condition may be input to the puzzle element deletion unit17or the object moving unit18. Puzzle element deletion information5jindicating that the puzzle element deletion unit17has deleted a puzzle element may be input to the object moving unit18. On the other hand, puzzle element condition negative-determination information5kindicating that the puzzle element condition determination unit16determines that the puzzle attribute information5f1and/or puzzle arrangement information5f2does not satisfy the predetermined condition may be input to the display processing unit20, and screen information6aconcerning a puzzle game screen may be generated. The screen information6amay be output to the display unit50. If the puzzle element condition determination unit (puzzle element condition determination function)16determines that the puzzle attribute information5f1and/or the puzzle arrangement information5f2satisfies the predetermined condition, the object moving unit (object moving function)18may move an object displayed on or around a puzzle element associated with the puzzle attribute information5f1and/or the puzzle arrangement information5f2to produce a predetermined effect in the game. The object moving unit (object moving function)18can move an object displayed on or around a puzzle element deleted by the puzzle element deletion unit (puzzle element deletion function)17. The object moving unit (object moving function)18can move an object so that the object, for example, falls off the game field in a direction away from the player's viewpoint in response to the deletion of a puzzle element displayed so as to support the object. Alternatively, the object may be moved in other ways, as desired. After the object has moved, the effect producing unit (effect producing function)19can produce, as a predetermined effect, an effect of changing status information possessed by a game content at a place to which the object has been moved. In some exemplary embodiments, the object may be assigned specific characteristics. Object display movement information5loutput from the object moving unit18may be input to the display processing unit20or the effect producing unit19, and effect producing information5moutput from the effect producing unit19may be input to the display processing unit20. The display processing unit20may generate screen information6aconcerning a puzzle game screen which can present a result of the series of processes to the player, in accordance with the puzzle display information5dinput from the puzzle display information output unit12, the object display information5einput from the object display information output unit13, the puzzle element condition negative-determination information5kinput from the puzzle element condition determination unit16, the object display movement information5linput from the object moving unit18, and the effect producing information5minput from the effect producing unit19, and may output the screen information6ato the display unit50. Thus, the game screen may be displayed on the display unit50. The input unit40may accept an operation performed by the player. In an exemplary embodiment, the input unit40may be a touch panel. The input unit40may include an input screen41and an input control unit42. 
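Before turning to the input path in more detail, the chain of units described above (update, condition determination, deletion, object movement, effect production, display processing) is summarized in the Python sketch below. Each callable is a hypothetical stand-in for the corresponding numbered unit, and the signal numbers (5f1, 5g, 5j, 5l, 5m, 6a) appear only in comments; none of the function names come from the disclosure.

def handle_selection(puzzle_state, selected_cells, objects,
                     update_fn, condition_fn, delete_fn, move_fn, effect_fn, render_fn):
    """One pass through the update / determination / deletion / movement / effect
    chain described above; each callable stands in for a numbered unit."""
    # Puzzle information update unit14: update attribute/arrangement info (5f1, 5f2 -> 5g).
    updated = update_fn(puzzle_state, selected_cells)

    # Puzzle element condition determination unit16: check the predetermined condition.
    matched = condition_fn(updated)
    if not matched:
        # A negative determination (5k) goes straight to display processing.
        return render_fn(updated, objects, effects=[])

    # Puzzle element deletion unit17 (5j) and object moving unit18 (5l).
    remaining = delete_fn(updated, matched)
    moved_objects = move_fn(objects, matched)

    # Effect producing unit19 (5m), e.g., damage to a game content hit by a falling object.
    effects = effect_fn(moved_objects)

    # Display processing unit20 assembles the screen information (6a).
    return render_fn(remaining, moved_objects, effects)

if __name__ == "__main__":
    demo = handle_selection(
        puzzle_state={"grid": [["black", "white"], ["white", "black"]]},
        selected_cells=[(0, 0)],
        objects=[{"name": "diamond", "pos": (0, 0)}],
        update_fn=lambda state, cells: state,
        condition_fn=lambda state: [(0, 0), (0, 1)],
        delete_fn=lambda state, cells: state,
        move_fn=lambda objs, cells: objs,
        effect_fn=lambda objs: ["damage enemy at (0, 0)"],
        render_fn=lambda state, objs, effects: {"state": state, "objects": objs, "effects": effects},
    )
    print(demo)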
Inputs may not necessarily be provided to the mobile terminal100by a touch operation using the touch panel. For example, in alternative embodiments, inputs may be provided by pressing a predetermined input key, or by other methods, as desired. As would be understood by a person of ordinary skill in the art, the input screen41may be any device capable of sensing a position specified by an operation performed by the player (for example, a touch screen included in the touch panel). The input screen41may output a touch signal5acorresponding to the specified position to the input control unit42. The input control unit42may generate coordinate information5bbased on the touch signal5ainput from the input screen41. The coordinate information5bmay include, for example, information concerning the coordinates of the specified position on the input screen41. The input control unit42may output the coordinate information5bto the input obtaining unit11. The input obtaining unit (input obtaining function)11may obtain data input by the player, and generate instruction information (input data)5cin accordance with the coordinate information5binput from the input control unit42. The input obtaining unit (input obtaining function)11may output the instruction information5cto the puzzle display information output unit12and the object display information output unit13. As would be understood by a person of ordinary skill in the art, the display unit50may be any device on which a game screen is displayed. For example, in an exemplary embodiment, the display unit50may be a liquid crystal display. In exemplaryFIG.1, the input unit40and the display unit50are illustrated as separate units in order to clearly distinguish between the functions of the input unit40and the functions of the display unit50. For example, if the input unit40is a touch panel and the display unit50is a liquid crystal display, the input unit40and the display unit50may be formed into a single unit. The storage unit30may be a storage device implemented as a recording medium, such as a hard disk, a solid state drive (SSD), a semiconductor memory, a digital versatile disc (DVD), or the like, and is configured to store data and a game program capable of controlling the mobile terminal100. ExemplaryFIGS.2A to2Fare schematic diagrams illustrating transitions of a game screen for a puzzle game implemented by the game program. ExemplaryFIG.2Ais a schematic diagram illustrating six objects (which look like diamonds) displayed on a screen on which a number of puzzle elements are displayed. In some exemplary embodiments, an object may be configured to produce a predetermined effect in a puzzle game. The term “predetermined effect”, as used herein, refers to, for example, but is not limited to, an effect of moving an object displayed on or around a puzzle element so that the object comes into contact (or collides) with a game content (an enemy character such as a monster) to damage the game content, where the object has been assigned specific characteristics such as weapons with which the game content can be defeated. The object can also damage the game content without coming into contact (or colliding) with the game content. Examples of such an effect include an effect of damaging a game content surrounding the object by the object exploding, or the like. In the manner described above, examples of the predetermined effect may include, in addition to an effect of damaging a game content, an effect of indirectly affecting a game content. 
Examples of such an effect include an effect of reducing the defensive power of the game content, and an effect of restoring the game content. Other examples may include an effect of changing a destination to which a game content having a feature of being able to move is expected to move to limit the path of movement of the game content, and an effect of changing the movement speed of the game content. In the manner described above, a predetermined effect is typically produced in a manner in which an object comes into contact (or may not come into contact) with a game content to damage the game content, or in which an object indirectly affects a game content, for example. Such various manners in which a predetermined effect is produced may improve gameplay and provide even a high-level player with a highly entertaining game. In exemplary embodiments, examples of the object include a character and an item. Here, the term “character” may refer to an entity that looks like something in the real world (in exemplaryFIGS.2A to2F, an entity that looks like a diamond), or may be used to include virtual entities in a game, such as humans, animals, creatures, monsters, weapons, and the like. In exemplary embodiments, examples of the manner in which specific characteristics are assigned to an object in order to produce a “predetermined effect” include, but are not limited to, manners in which (1) specific characteristics are assigned to an object, (2) specific characteristics selected by a player are assigned to an object, (3) specific characteristics corresponding to a character selected by a player are assigned to an object, and various combinations of (1) to (3) described above. In an example of the manner (1) described above, for example, specific characteristics (size, shape, weight, etc.) may be assigned to an object in advance by the game program. In an example of the manner (2) described above, for example, specific characteristics (size, shape, weight, etc.) selected by a player may be assigned to an object. In an example of the manner (3) described above, for example, as illustrated in exemplaryFIGS.3A and3B, specific characteristics corresponding to a character drawn on a card selected by a player may be assigned to an object. The manner (3) described above will be described in detail below with reference to exemplaryFIGS.3A and3B. In exemplary embodiments, a number of puzzle elements forming a puzzle may have puzzle attribute information and/or puzzle arrangement information. The puzzle attribute information may be information associated with each of the number of puzzle elements. The puzzle arrangement information may be information associated with each of the number of puzzle elements and indicate the arrangement of the corresponding one of the puzzle elements. In exemplary embodiments, the puzzle attribute information can include, for example, but is not limited to, attributes of each of a number of puzzle elements on the game program, such as color, shape, size, and character or item type, and a difference in attribute is distinguishable by a player. In exemplaryFIG.2A, puzzle attribute information included in each puzzle element can be distinguished by the player by a difference in color between white and black. In exemplary embodiments, the puzzle arrangement information can be, for example, but not limited to, on the game program, information on the arrangement of a number of puzzle elements under a specific rule. 
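A minimal sketch of one possible representation of puzzle attribute information and puzzle arrangement information follows; the two-color attribute and the row-and-column grid mirror the FIG.2A example, while the class and field names are assumptions made for illustration.

from dataclasses import dataclass
from typing import Dict, Tuple

# Attribute values mirroring the white/black distinction of FIG.2A.
WHITE, BLACK = "white", "black"

@dataclass
class PuzzleElement:
    attribute: str                    # puzzle attribute information (e.g., color)
    position: Tuple[int, int]         # puzzle arrangement information (row, column)

def make_board(rows: int, cols: int) -> Dict[Tuple[int, int], PuzzleElement]:
    """Arrange puzzle elements in order in a rows-by-columns array (cf. FIG.2A)."""
    board = {}
    for r in range(rows):
        for c in range(cols):
            attribute = WHITE if (r + c) % 2 == 0 else BLACK
            board[(r, c)] = PuzzleElement(attribute=attribute, position=(r, c))
    return board

if __name__ == "__main__":
    board = make_board(6, 5)          # six rows and five columns, as in FIG.2A
    print(board[(0, 0)].attribute, board[(0, 1)].attribute)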
In exemplaryFIG.2A, puzzle elements are arranged in order in an array of six rows and five columns. Alternatively, puzzle elements may be arranged in order in an array of any other size, or may be arranged randomly. In the puzzle game according to the exemplary embodiments, elements of other games such as bingo and slot machines may be combined to create puzzle arrangement information. In exemplary embodiments, furthermore, the arrangement of an object (i.e., a way in which an object is placed) is not limited to any specific arrangement so long as the object is displayed on a screen on which a number of puzzle elements are displayed, but in one example the object may be arranged at a position where the object is ready to exert a predetermined effect. For example, an object may be ready to exert a predetermined effect when arranged (or placed) at a position that spans the squares of several puzzle elements. Alternatively, as illustrated in exemplaryFIG.2A, in a case where game media (for example, enemy characters) X and Y are transparent to enable the player to anticipate in advance the enemy positions, the player may strategically select the arrangement of an object (i.e., a way in which an object is placed) to make it easier for the object to collide with the game media X and Y. Further, if a game content has a feature of being able to move, the player may anticipate the path of movement of the game content and may select the arrangement of an object (i.e., a way in which an object is placed) to make it easier for the object to exert a predetermined effect. Alternatively, the player may effectively combine objects in accordance with various specific characteristics assigned to each of the objects, and may select the arrangement of the objects (i.e., a way in which the objects are placed) to make it easier for the objects to exert a predetermined effect. In the manner described above, a player may be able to make their own selection of the arrangement of an object (i.e., a way in which an object is placed) from among various arrangements. Such various arrangements may improve gameplay and provide even a high-level player with a highly entertaining game. In addition, in the arrangement of an object, the viewpoint of the player may be changed. Changing the viewpoint of the player may be advantageous because it may facilitate viewing of the positional relationship with respect to a game content and facilitate viewing of the path of movement of a game content. In exemplary embodiments, furthermore, an object that produces a predetermined effect may be assigned a different set of characteristics, and the predetermined effect produced by the object may differ depending on the characteristics assigned to the object. For example, an object may be assigned a “weight” element, resulting in an increase in the falling speed when the object moves, so as to increase the effect of damaging a game content. Additionally, an object may be assigned a “shape” element or a “size” element, resulting in an increase in the range over which a game content is damaged. In addition, taking a weak point, or the like, of a game content into account, the player may be able to strategically select an optimum object from among objects having different sets of characteristics. Furthermore, a limitation may be imposed on the position at which an object can be arranged, or on the number of times an object can be arranged, in accordance with various specific characteristics assigned to each object. 
For example, in a case where the effect of damaging a game content by using specific characteristics assigned to an object is large, a configuration may be used in which the number of times the object can be arranged is limited and the object can be arranged only around the center of a puzzle element. On the other hand, in a case where the effect of damaging a game content by using specific characteristics assigned to an object is small, a configuration may be used in which the number of times the object can be arranged is unlimited and the object can be arranged at any position in a puzzle element. Other examples of characteristics which may be assigned to an object include characteristics capable of launching an intellectual attack, characteristics capable of attracting an enemy using meat or the like as a decoy, characteristics capable of launching an attack while rolling after falling, and characteristics capable of improving the visibility of the enemy position even in a dark scene. Such a variety of sets of characteristics may be used, as desired. It may be desirable that the player strategically take into account the characteristics of an object from among a variety of sets of characteristics assigned to the object as described above, by taking into account the attribute and the like of a game content (enemy character), arrange the object, and cause the object to fall in order to inflict effective damage on the game content. In the manner described above, there are a variety of sets of specific characteristics which may be assigned to an object, and a player is able to make their own selection of a desired object from among the sets of specific characteristics. This may improve gameplay and provide even a high-level player with a highly entertaining game. In exemplary embodiments, a condition may be added in which a player is unable to damage a game content (enemy character) unless the player selects an object in accordance with the attribute, and the like, of the enemy character. For example, a condition may be added in which an enemy character having the “iron” attribute can be defeated only by an object having the characteristics of lightning. In addition, each enemy character may have a weak point, and a condition may be added in which an enemy character can be defeated only when the attribute overcomes the weak point. A condition may further be added in which the player is able to attack an enemy character on a stage including a dark scene only after the player has dropped an object for illuminating the surroundings. If a mirror exists in the game field, a condition may further be added in which the player is able to attack an enemy character only after the player has broken the mirror because the mirror prevents the player from distinguishing the true enemy character from false ones. In the manner described above, there are a variety of manners in which various conditions are added unless an object is selected in accordance with the attribute and the like of a game content, and the player is able to make their own selection of a desired object from among them. This may improve gameplay and provide even a high-level player with a highly entertaining game. ExemplaryFIG.2Bis a schematic diagram illustrating puzzle elements selected by a player. In exemplaryFIG.2B, the arrow on a screen on which puzzle elements are displayed may indicate the path along which the player has selected puzzle elements. 
In exemplary embodiments, a puzzle element may be selected by using any method by which at least one puzzle element can be selected by a player. For example, a puzzle element may be selected by using a tap operation, or a puzzle element may be selected by using a drag operation (by selecting puzzle elements while tracing a path through the puzzle elements). A puzzle element may be selected by using any other method known to those of ordinary skill in the art. For ease of operation, it may be preferable to select a puzzle element by using a drag operation. In exemplaryFIG.2B, the player traces a path through the squares of the third, fourth, and fifth puzzle elements horizontally from the left in the first row, the squares of the second, third, and fourth puzzle elements vertically from the top in the first column from the right, and the squares of the fourth, fifth, and sixth puzzle elements vertically from the top in the second column from the right, thereby selecting the puzzle elements in the path. A limitation may be imposed on the range over which the player can select puzzle elements by tracing a path through the squares of the puzzle elements. In exemplary embodiments, furthermore, a time limit may be imposed on the player's selection of a puzzle element. Any method may be used to impose a time limit on the selection of a puzzle element. For example, an indicator of the time remaining in the time limit may be displayed at a certain location within the game field in a count-down mode, and/or a sound effect may be added to produce an effect of giving some tension to the player. ExemplaryFIG.2Cis a schematic diagram illustrating a change in puzzle attribute information of a puzzle element selected by a player. In exemplary embodiments, puzzle attribute information associated with each of a number of puzzle elements may be changed by a player through selection. Puzzle attribute information may be changed in any way, and any content may be changed. For example, in exemplaryFIG.2B, when selected by a player (the path along which selection has been made is indicated by the arrow), the square of a black puzzle element is changed (or turned) to white, and the square of a white puzzle element is changed (or turned) to black. ExemplaryFIG.2Dis an exemplary schematic diagram of a number of puzzle elements in a case where it is determined that puzzle attribute information and puzzle arrangement information associated with each of the number of puzzle elements satisfy a predetermined condition. In exemplary embodiments, if it is determined that puzzle attribute information and/or puzzle arrangement information satisfies a predetermined condition, an object displayed on or around a puzzle element for which the predetermined condition is satisfied may be moved. In some embodiments, in terms of the enhanced entertainment of the puzzle game, both puzzle attribute information and puzzle arrangement information may satisfy a predetermined condition. The predetermined condition can be designed, as desired, on the game program. In one example, it may be determined that the predetermined condition is satisfied if puzzle elements are arranged in a predetermined layout, and the puzzle elements have the same attribute, for example, if all the puzzle elements in any one of the vertical columns or horizontal rows are regarded as having the same attribute. 
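For illustration only, the selection-and-toggle behavior described above for exemplary FIGS. 2B and 2C can be sketched as follows; the two-color attribute, the 6-by-5 grid size, and the function names are assumptions chosen for the sketch rather than features of any particular embodiment.

```python
# Minimal sketch of a two-color puzzle grid in which puzzle elements selected
# along a drag path have their attribute toggled (white <-> black), loosely
# following exemplary FIGS. 2B and 2C. Grid size and names are assumptions.

WHITE, BLACK = "white", "black"

def make_grid(rows=6, cols=5, default=WHITE):
    """Create puzzle attribute information for a rows x cols arrangement."""
    return [[default for _ in range(cols)] for _ in range(rows)]

def toggle(attribute):
    """Change a black square to white and a white square to black."""
    return BLACK if attribute == WHITE else WHITE

def apply_drag_path(grid, path):
    """Toggle the attribute of every puzzle element traced by the player.

    `path` is an ordered list of (row, col) squares the drag passed through.
    """
    for row, col in path:
        grid[row][col] = toggle(grid[row][col])
    return grid

if __name__ == "__main__":
    grid = make_grid()
    # A hypothetical drag path across three squares in the top row.
    apply_drag_path(grid, [(0, 2), (0, 3), (0, 4)])
    for row in grid:
        print(row)
```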
The predetermined condition may also be designed in a different manner in accordance with the characteristics of a puzzle element included in the puzzle, or in accordance with the characteristics of an object displayed on the puzzle element. For instance, the predetermined condition may be a condition in which, of three puzzle elements arranged in any one row, all three puzzle elements are regarded as having the same attribute. Even in this case, for example, if an object assigned a “weight” element is placed over a number of puzzle elements including a puzzle element with vulnerable characteristics, it may be determined that the predetermined condition is satisfied when some (for example, two) of the number of puzzle elements are regarded as having the same attribute. Accordingly, a condition different from an initial predetermined condition may be designed in accordance with the characteristics of a puzzle element or the characteristics of an object. In the manner described above, the predetermined condition may be designed in various ways on the game program as would be understood by a person of ordinary skill in the art. This may improve gameplay and provide even a high-level player with a highly entertaining game. In exemplaryFIG.2D, the squares of all the puzzle elements arranged in the second, third, and fourth vertical columns from the left and the second and third horizontal rows from the top are regarded as having the same color (white) attribute, and may therefore satisfy a predetermined condition. In exemplary embodiments, a determination result indicating that the predetermined condition is satisfied may be displayed in any way that enables the player to identify the determination result. For example, as illustrated in exemplaryFIG.2D, a thick black solid line may be drawn in a vertical column or horizontal row for which it is determined that the predetermined condition is satisfied, or an audio representation may be used alternatively. ExemplaryFIG.2Eis a schematic diagram illustrating an object that falls in a direction away from the player's viewpoint in response to the deletion of a puzzle element for which a predetermined condition is satisfied. In exemplary embodiments, if it is determined that puzzle attribute information and/or puzzle arrangement information satisfies a predetermined condition, a puzzle element for which the predetermined condition is satisfied may be deleted from the game field, and an object displayed on or around the deleted puzzle element can be moved. In the exemplary embodiments, a puzzle element for which a predetermined condition is satisfied may be deleted in any way. As illustrated in exemplaryFIG.2E, the square of such a puzzle element may be broken step by step such that the square of such a puzzle element is cracked, or the puzzle element may be deleted at once. In the manner described above, a puzzle element may be deleted in a manner having incremental impact. This may achieve visual excitement and provide even a high-level player with a highly entertaining game. In exemplary embodiments, furthermore, the game field may have three-dimensional information concerning width, height, and length, in terms of the enhanced entertainment of the game. In exemplary embodiments, the game field may be a puzzle field which may include a number of puzzle elements and an object displayed on or around the number of puzzle elements. The game field may also include a game content and the like. The game field may have any configuration. 
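As one concrete reading of the row/column example above, the following sketch checks whether every puzzle element in a vertical column or horizontal row shares one attribute; the grid representation and names are illustrative assumptions, and other predetermined conditions could be substituted as the description allows.

```python
# Sketch of one possible "predetermined condition" check: a vertical column or
# horizontal row in which every puzzle element has the same attribute, as in
# the example discussed for exemplary FIG. 2D. Real conditions may differ.

def rows_and_columns(grid):
    """Yield ('row', index, cells) and ('col', index, cells) for the grid."""
    for r, row in enumerate(grid):
        yield "row", r, row
    for c in range(len(grid[0])):
        yield "col", c, [row[c] for row in grid]

def satisfied_lines(grid):
    """Return the rows/columns whose puzzle elements all share one attribute."""
    hits = []
    for kind, index, cells in rows_and_columns(grid):
        if len(set(cells)) == 1:            # every element has the same attribute
            hits.append((kind, index))
    return hits

if __name__ == "__main__":
    grid = [
        ["white", "white", "white"],
        ["black", "white", "black"],
        ["black", "white", "white"],
    ]
    print(satisfied_lines(grid))   # [('row', 0), ('col', 1)]
```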
For example, the game field may have (1) a multilayer configuration having one or more layers of puzzle fields between the puzzle field and the game content, or (2) a configuration in which the game field includes a geographic element. In one example of the configuration (1) described above, taking into account the characteristics of weapons, the player may develop a strategy to first use a hammer to break puzzle elements within a wide range in the puzzle field in the first layer to form a large opening, since a heavy-hitting weapon such as a hammer enables puzzle elements within a wide range to be broken, and then to drop a weapon, or the like, that is sharp enough to pierce puzzle elements to the puzzle field in the second layer. In one example of the configuration (2) described above, in a case where the puzzle field has irregularities or the game field is partially or entirely inclined, an object existing in the game field and having the characteristics of a weapon, or the like, may roll and fall in a predetermined direction while being shifted. In the manner described above, the game field may have a complex configuration. This may improve gameplay and provide even a high-level player with a highly entertaining game. In exemplary embodiments, in terms of the achievement of visual excitement, an object may move so that the object falls off the game field in a direction away from the player's viewpoint in response to the deletion of a puzzle element for which the predetermined condition is satisfied, and which is displayed so as to support the object. As illustrated in exemplaryFIG.2E, in response to the deletion of a puzzle element for which the predetermined condition is satisfied, an object supported by the puzzle element may also fall. ExemplaryFIG.2Fis a schematic diagram illustrating the production of a predetermined effect in game media X and Y when objects that are moved so as to fall come into contact with the game media X and Y. In exemplary embodiments, when an object is moved, for example, when an object is moved so that the object falls, the object may produce, as a predetermined effect, an effect of changing status information possessed by a game content at a place to which the object has been moved. For example, as illustrated in exemplaryFIG.2F, objects that are moved so as to look to the player as if the objects are falling come into contact with game media X and Y, producing an effect of damaging the game media X and Y. When an object is moved so that the object falls, the viewpoint of the player may be changed. Specific examples of changing the viewpoint of the player include making a bird's-eye view angle shallow, moving the viewpoint in a direction away from the player, and changing the viewpoint so as to keep track of the falling object in a direction away from the player. This can facilitate visual recognition of the contact between the object and a game content. Accordingly, an effect of dynamic observation of the production of a predetermined effect may be achieved. Further, a puzzle element may be transparent while an object is moved so that the object falls. This also can facilitate visual recognition of the contact between the object and a game content. Accordingly, an effect of dynamic observation of the production of a predetermined effect may be achieved. 
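A minimal sketch of the delete-then-fall sequence around exemplary FIGS. 2E and 2F appears below; the data shapes, the fixed damage value, and the rule that an object falls straight onto whatever lies beneath its support are assumptions made only to keep the sketch short.

```python
# Sketch of the deletion-then-fall behavior around exemplary FIGS. 2E and 2F:
# puzzle elements satisfying the condition are deleted, and any object they
# supported falls and damages the game content it lands on. Data shapes,
# names, and the damage rule are illustrative assumptions.

def resolve_deletion(deleted_squares, objects, game_contents):
    """Delete squares, drop the objects they supported, apply the effect.

    `objects` maps object id -> set of squares the object rests on.
    `game_contents` maps square -> {"name": ..., "hp": ...} for enemies below.
    """
    effects = []
    for obj_id, support in objects.items():
        if support & deleted_squares:               # its support was deleted
            for square in support:
                target = game_contents.get(square)
                if target:                          # object lands on an enemy
                    target["hp"] -= 10              # assumed damage value
                    effects.append((obj_id, target["name"], target["hp"]))
    return effects

if __name__ == "__main__":
    deleted = {(1, 1), (1, 2)}
    objects = {"spear": {(1, 1)}, "hammer": {(4, 4)}}
    contents = {(1, 1): {"name": "X", "hp": 30}}
    print(resolve_deletion(deleted, objects, contents))  # [('spear', 'X', 20)]
```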
In exemplary embodiments, elements of a card game may be incorporated into the configuration of the puzzle game described above in detail with reference to exemplaryFIGS.2A to2F, achieving the provision of a player with a highly entertaining game. ExemplaryFIGS.3A and3Bare schematic diagrams illustrating examples of a game screen obtained by incorporating elements of a card game into the puzzle game implemented by the game program. ExemplaryFIG.3Ais a schematic diagram illustrating six cards placed below the puzzle configuration screen illustrated in exemplaryFIG.2A. Each of the six cards shows a character (ally character) having characteristics capable of producing, as a predetermined effect, an effect of changing status information possessed by a game content (for example, an effect of attacking a game content to damage the game content). In exemplary embodiments, a player may be able to select each of the cards A to F illustrated in exemplaryFIG.3Ato assign specific characteristics corresponding to the character drawn on the selected one of the cards A to F to one of the six objects illustrated in exemplaryFIG.2A. The cards A to F may be selected in any way, and a selection method may be used in which a selected card is associated with each object arranged on a screen on which a number of puzzle elements are displayed. Alternatively, each object may be assigned, in advance, specific characteristics corresponding to a character drawn on a card. Further, a card may be associated with attribute information of a deleted puzzle element. For example, a deleted puzzle element that is in red may be associated with the characteristics of a card having the “fire” attribute. In the manner described above, specific characteristics associated with a card may be assigned to an object, or may also be assigned to a puzzle element. This may improve gameplay and provide even a high-level player with a highly entertaining game. Each of the six cards A to F illustrated in exemplaryFIG.3Ashows a character having characteristics. The card A shows a character holding a shotgun, the card B a character holding a ball of light, the card C a character holding a spear, the card D a character that casts lightning, the card E a character that plays a bell, and the card F a character holding a cannonball. The player may select one or more of the cards A to F to assign the characteristics of the character in the selected card to an object. In exemplary embodiments, any character having characteristics capable of producing an effect of damaging a game content (predetermined effect) may be used, and cards which show characters having a variety of sets of characteristics, other than the cards A to F, may be incorporated into the puzzle game. Examples of the characteristics of a character having the ability to inflict direct damage on a game content may include characteristics capable of continuous attacks (for example, the weapon drawn on the card A), such as a shotgun, characteristics capable of pinpoint attacks (for example, the weapon drawn on the card F), such as a cannonball, and characteristics capable of wide-range attacks (for example, the weapon drawn on the card C), such as a spear and a bow. 
Examples of the characteristics of a character having the ability to inflict indirect damage on a game content may include characteristics capable of attacks with sound and the like (for example, sound of the bell drawn on the card E), characteristics of illuminating the surroundings (for example, the ball of light drawn on the card B), and characteristics of causing natural disasters such as lightning and avalanches (for example, the lightning drawn on the card D). Other examples of the characteristics of a character may include characteristics capable of launching an intellectual attack, characteristics capable of attracting an enemy using meat, or the like, as a decoy, characteristics capable of launching an attack while rolling after falling, and characteristics capable of improving the visibility of the enemy position even in a dark scene. Such a variety of sets of characteristics may be used, and any of such sets of characteristics of a character may be assigned to each object. Taking into account the attribute and the like of a game content (enemy character), the player may be able to strategically select the characteristics of an optimum character from cards as described above which show characters having a variety of sets of characteristics, so as to inflict effective damage on the game content, to assign the selected characteristics to an object. In exemplary embodiments, a condition may be added in which a player is unable to damage an enemy character unless the player selects the characteristics of a character in accordance with an attribute of the enemy character. For example, a condition may be added in which an enemy character having the “iron” attribute can be defeated only by a character having the characteristics of shooting lightning. In addition, an enemy character may have a weak point, and a condition may be added in which the enemy character can be defeated only when the attribute overcomes the weak point. A condition may further be added in which the player is able to attack an enemy character on a stage including a dark scene only after the player has selected the characteristics of illuminating the surroundings. A condition may further be added in which the player is unable to delete a puzzle element if the player selects the characteristics of a nasty character. If a mirror exists in the game field, a condition may further be added in which the player is able to attack an enemy character only after the player has broken the mirror because the mirror prevents the player from distinguishing the true enemy character from false ones. In addition to the selection of a character in accordance with the attributes of the enemy character described above, the sets of characteristics of a number of characters may be strategically selected in combination to change the effect of damaging a game content and the results concerning the effect, such as the power of the effect and the range over which the effect is exerted. Specifically, a player may strategically combine characters by taking into account the job level or skill possessed by the characters and the like to change the effect of damaging a game content and the results concerning the effect, such as the power of the effect and the range over which the effect is exerted. In exemplary embodiments, furthermore, a time limit may be imposed on the player's selection of the characteristics of a character (or the selection of a card). 
Any method may be used to impose a time limit on the selection of the characteristics of a character. Similarly to the way described above in which a time limit is imposed on the selection of a puzzle element, an indicator of the time remaining in the time limit may be displayed at a certain location within the game field in a count-down mode, and/or a sound effect may be added to produce an effect of giving some tension to the player. ExemplaryFIG.3Bis a schematic diagram illustrating the state of a player being attacked by game media X and Y. The puzzle game, according to some exemplary embodiments, can provide (1) a mode of attack of a player on game media X and Y, and also (2) a mode of attack of game media X and Y on a player as well as one of a variety of modes of the puzzle game. The mode of attack (1) has been described with reference to exemplaryFIGS.2A to2F. The mode of attack (2) will now be described in detail with reference to exemplaryFIG.3B. In exemplaryFIG.2F, objects that have been moved so that the objects fall, come into contact with the game media X and Y, and produce an effect of damaging the game media X and Y (predetermined effect). In exemplaryFIG.3B, the mode of attack is illustrated in which, due to insufficient contact with the game media X and Y, the player is unable to sufficiently exert the effect of damaging the game media X and Y and, conversely, the game media X and Y appear closer to the viewpoint of the player with time. When the game media X and Y appear closest to the viewpoint of the player, the effect of damaging the player can be realized. ExemplaryFIG.4is an exemplary flowchart illustrating an example of a process executed by the mobile terminal100. In the following description, steps in parentheses may represent steps included in a computer control method. The input obtaining unit11may obtain instruction information5cinput by a player (step1; hereinafter step is abbreviated as S: input obtaining step). The puzzle display information output unit12may display a number of puzzle elements in a game field (S2: puzzle display information output step), and the object display information output unit13may display an object on a screen on which the number of puzzle elements are displayed (S3: object display information output step). The time restriction unit15may restrict the time during which the player can select at least one puzzle element (S4: time restriction step). If the player has successfully selected a puzzle element within the restricted time (YES in S4), the puzzle information update unit14may update puzzle attribute information associated with the selected puzzle element and/or puzzle arrangement information associated with the selected puzzle element and may indicate the arrangement of the selected puzzle element (S5: puzzle information update step). If the player has failed to select a puzzle element within the time restricted by the time restriction unit15(NO in S4), the game may end. The puzzle element condition determination unit16may determine whether or not the puzzle attribute information and/or the puzzle arrangement information satisfies a predetermined condition (S6: puzzle element condition determination step). 
If it is determined that the puzzle attribute information and/or the puzzle arrangement information satisfies the predetermined condition (YES in S6), the puzzle element deletion unit17may delete a puzzle element associated with the puzzle attribute information and/or the puzzle arrangement information that satisfies the predetermined condition from the game field (S7). Then, the object moving unit18may move an object displayed on or around the puzzle element deleted in the puzzle element deletion step (S8: object moving step). If the object has been moved, the effect producing unit19may produce, as a predetermined effect, an effect of changing status information possessed by a game content at a place to which the object has been moved (S9). The control method described above may include the processes described above with reference to exemplaryFIG.4, and include any process executable by the components included in the control unit10as well. Further exemplary embodiments may be described with reference to exemplaryFIG.5. In exemplary embodiments, a description will be given of only components different from those in the embodiments previously described above. All the components described above may also be included in the further embodiments (or vice versa). The same definitions of the terms in the previously described embodiments are applicable to the following embodiments. ExemplaryFIG.5is a schematic diagram illustrating a configuration of a game system300including the mobile terminal100and a server device200. As illustrated by way of example in exemplaryFIG.5, in the following description according to exemplary embodiments, the computer may serve as the server device200connected so as to be capable of communicating with the mobile terminal100via a predetermined network, and the game program may be executed by the server device200. The server device (computer)200may be an information processing device including the control unit10, which is included in the mobile terminal100in the description of the exemplary embodiments, and capable of executing a game program including some or all of the processes described. The server device200may receive instruction information (input data)5cinput by a player via the predetermined network. The server device200may output puzzle display information for causing a number of puzzle elements forming part of a puzzle to be displayed in a game field so that at least one puzzle element among the number of puzzle elements is selectable in accordance with the instruction information5c. The server device200may further output object display information for causing an object to be displayed on a screen on which the number of puzzle elements are displayed. Then, the server device200may update the puzzle attribute information associated with the selected puzzle element and/or the puzzle arrangement information associated with the selected puzzle element and indicate the arrangement of the selected puzzle element. After the puzzle attribute information and/or the puzzle arrangement information has been updated, the server device200may determine whether or not the puzzle attribute information and/or the puzzle arrangement information satisfies a predetermined condition. If it is determined that the predetermined condition is satisfied, the server device200may move an object displayed on or around the puzzle element associated with the puzzle attribute information and/or the puzzle arrangement information to produce a predetermined effect in the game. 
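The steps S1 through S9 of exemplary FIG. 4 described above can be condensed into a single control-flow sketch; the callables passed in stand in for the corresponding units of the control unit, and their bodies in the usage example are placeholders rather than the disclosed implementations.

```python
# Condensed control-flow sketch mirroring steps S1-S9 of exemplary FIG. 4.
# The callables in `units` stand in for the units of the control unit; their
# bodies here are placeholders, not the disclosed implementations.

def run_round(units):
    """One round of the process in exemplary FIG. 4; `units` maps step names to callables."""
    instruction = units["obtain_input"]()                    # S1: input obtaining step
    units["display_puzzle"]()                                # S2: puzzle display information output
    units["display_objects"]()                               # S3: object display information output
    selection = units["select_within_time"](instruction)     # S4: time restriction
    if selection is None:                                    # NO in S4: time ran out
        return "game over"
    units["update_puzzle"](selection)                        # S5: puzzle information update
    if not units["condition_satisfied"]():                   # S6: condition determination
        return "condition not satisfied"
    deleted = units["delete_elements"]()                     # S7: puzzle element deletion
    moved = units["move_objects"](deleted)                   # S8: object moving
    units["produce_effect"](moved)                           # S9: effect producing
    return "effect produced"

if __name__ == "__main__":
    outcome = run_round({
        "obtain_input": lambda: {"drag_path": [(0, 0), (0, 1)]},
        "display_puzzle": lambda: None,
        "display_objects": lambda: None,
        "select_within_time": lambda instruction: instruction["drag_path"],
        "update_puzzle": lambda selection: None,
        "condition_satisfied": lambda: True,
        "delete_elements": lambda: [(0, 0), (0, 1)],
        "move_objects": lambda deleted: ["object-1"],
        "produce_effect": lambda moved: None,
    })
    print(outcome)   # effect produced
```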
The display processing unit20in the control unit10included in the server device200may generate screen information6aconcerning a puzzle game screen which can present a result of the series of processes to the player at the desired timing, and transmits the screen information6ato the mobile terminal100. The mobile terminal100may receive a result of playing the game (for example, the screen information6a, etc.) from the server device200, and display the result on the display unit50. The result of playing the game may be displayed via a web browser. In this case, the mobile terminal100can accumulate information received from the server device200in, for example, a predetermined storage area (web storage) incorporated in the web browser. In the manner described above, the server device200may include some or all of the components (in particular, the control unit10) included in the mobile terminal100in the description of the previous exemplary embodiments, and the server device200may be configured to transmit an output result of the game to the mobile terminal100in response to the input given to the mobile terminal100. Accordingly, the server device200may achieve substantially the same advantages as those achievable by the mobile terminal100when the mobile terminal100provides the functionality. The game may be a hybrid game for which some of the processes are handled by each of the server device200and the mobile terminal100such that web display is provided for a progress screen for the game so that the progress screen is displayed on the mobile terminal100in accordance with data generated by the server device200, whereas native display may be provided for other screens such as a menu screen so that such screens are displayed using a native application installed in the mobile terminal100. The game program according to exemplary embodiments may be implemented as a native application executable by the mobile terminal100. Even in this case, the mobile terminal100may be able to access the server device200, as necessary, and to download and use information related to the progress of the game (for example, information concerning a player, information concerning another player who is a friend of the player, information concerning the cumulative points earned by the player and the items and character assigned to the player, information on the ranking of the player, etc.). In addition, the mobile terminal100and another mobile terminal may be connected so as to be capable of communicating with each other (via peer-to-peer communication such as near field wireless communication based on Bluetooth (registered trademark), or the like), and may be synchronized with each other so that the game can be played in multiplayer. As described in the above examples, the game program, the mobile terminal100(computer), and the server device200(computer) according to the exemplary embodiments may enable movement of an object displayed on or around a puzzle element whose puzzle attribute information and/or puzzle arrangement information is determined to satisfy a predetermined condition, achieving the advantage of improving the entertainment of a puzzle game. A control block (for example, the control unit10) of the mobile terminal100and the server device200may be implemented by a logic circuit (hardware) formed in an integrated circuit (integrated circuit (IC) chip) or the like, or by software using a central processing unit (CPU). 
In the latter case, the mobile terminal100and the server device200may each include a CPU that executes instructions of the game program which is software implementing the individual functions, a read-only memory (ROM) or storage device (referred to as a “recording medium”) on which the game program and various kinds of data are recorded in a computer (or CPU) readable manner, a random access memory (RAM) to which the game program is loaded, and so forth. The computer (or the CPU) reads the game program from the recording medium and executes the game program, thereby achieving the object of the exemplary embodiments described herein. As the recording medium, a “non-transitory tangible medium”, or “non-transitory computer readable medium”, for example, a tape, a disc, a card, a semiconductor memory, or a programmable logic circuit can be used. In addition, the game program may be provided to the computer via any given transmission media capable of transmitting the game program (such as a communication network or a broadcast wave). Exemplary embodiments can be implemented as a data signal on a carrier wave, in which the game program is embodied by electronic transmission. For example, a game program according to exemplary embodiments may cause a computer (the mobile terminal100and the server device200) to implement a puzzle display information output function, an object display information output function, a puzzle information update function, a puzzle element condition determination function, an object moving function, a puzzle element deletion function, and a time restriction function. The puzzle display information output function, the object display information output function, the puzzle information update function, the puzzle element condition determination function, the object moving function, the puzzle element deletion function, and the time restriction function can be implemented by the puzzle display information output unit12, the object display information output unit13, the puzzle information update unit14, the puzzle element condition determination unit16, the object moving unit18and the effect producing unit19, the puzzle element deletion unit17, and the time restriction unit15, described above, respectively. The details are as described above. The game program can be written in, for example, a script language such as, but not limited to, ActionScript or JavaScript (registered trademark), an object-oriented programming language such as Objective-C or Java (registered trademark), a markup language such as HyperText Markup Language 5 (HTML5), or the like. The game system300including an information processing terminal (e.g., the mobile terminal100) which includes units that implement some of functions implemented by the game program and a server (e.g., the server device200) which includes units that implement the rest of the functions different from the some functions is also within the scope of the exemplary embodiments described herein. The present invention is not limited to the embodiments described above and can be variously altered within the scope defined by the appended claims, and embodiments obtained by suitably combining technical means disclosed in different embodiments are also within the technical scope of the present invention. Further, a new technical feature can be formed by combining technical means disclosed in different embodiments. 
The present invention is widely applicable to any given computers such as smartphones, tablet terminals, mobile phones, home video game consoles, personal computers, server devices, workstations, or mainframes.
55,386
11857874
DETAILED DESCRIPTION A system and method for capturing and sharing console gaming data is described. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the exemplary embodiments. It is apparent to one skilled in the art, however, that embodiments can be practiced without these specific details or with an equivalent arrangement. In some instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the embodiments. Referring now to the drawings, wherein like reference numerals designate identical or corresponding parts throughout the several views,FIG.1is a flowchart illustrating a method for storing gameplay according to one embodiment. At processing block110, gameplay is executed. Gameplay can be executed by the operating system of a game console in response to a user request, which can come in the form of a standard file operation with respect to a set of data associated with the desired gameplay. The request can be transmitted from an application associated with a game. The gameplay can comprise, for example, video content, audio content and/or static visual content, including wall papers, themes, “add-on” content, or any other type of content associated with a game. It is contemplated that such content can be user- or developer-generated, free or paid, full or trial, and/or for sale or for rent. At processing block120, a first portion of the gameplay is buffered, i.e., stored temporarily. For example, the previous 15 seconds, the previously completed level, or the previous action within the gameplay can be stored temporarily, as described further herein. The term “portion” used herein can correspond to any part of the gameplay that is divisible into any related or arbitrary groups of single or multiple bits or bytes of data. For example, “portions” of gameplay may correspond to levels, chapters, scenes, acts, characters, backgrounds, textures, courses, actions, songs, themes, durations, sizes, files, parts thereof, and combinations thereof. Further, portions of gameplay can comprise screenshots or prescribed durations of video capture. At processing block130, a request to capture a second portion of the gameplay is received. The request to capture the second portion of the gameplay can be a user request, which can come in the form of a standard file operation with respect to a set of data associated with the gameplay to be captured. A user can request to capture a second portion of the gameplay by, for example, selecting a button on a game controller, as described further herein. The second portion of the gameplay reflects gameplay subsequent to the first portion of the gameplay. In other words, the first portion of the gameplay reflects gameplay that occurred prior to receipt of the user request to capture the second portion of the gameplay. The second portion of the gameplay reflects gameplay that occurred after receipt of the user request to capture the second portion of the gameplay. Thus, the first portion of the gameplay is a past portion of the gameplay that has already been played, while the second portion of the gameplay begins with a current portion of the gameplay that is being executed. At processing block140, the second portion of the gameplay is captured. In one embodiment, the second portion of the gameplay is captured according to the user's request. 
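One way to realize the running buffer of processing block 120 described above is a fixed-length ring buffer of recent frames, sketched below; the frame rate and the class and function names are assumptions, while the 15-second window follows the example given in the text.

```python
# Minimal sketch of the running buffer that temporarily stores the first
# portion of gameplay (e.g., the previous 15 seconds). Frame rate, the frame
# type, and the class name are illustrative assumptions.

from collections import deque

FRAMES_PER_SECOND = 30          # assumed capture rate
BUFFER_SECONDS = 15             # "previous 15 seconds" example from the text

class GameplayBuffer:
    """Keeps only the most recent BUFFER_SECONDS of rendered frames."""

    def __init__(self):
        self.frames = deque(maxlen=FRAMES_PER_SECOND * BUFFER_SECONDS)

    def push(self, frame):
        """Called once per rendered frame; old frames fall off automatically."""
        self.frames.append(frame)

    def snapshot(self):
        """Return the buffered first portion at the moment capture is requested."""
        return list(self.frames)

if __name__ == "__main__":
    buf = GameplayBuffer()
    for n in range(1000):        # simulate roughly 33 seconds of gameplay frames
        buf.push(f"frame-{n}")
    first_portion = buf.snapshot()
    print(len(first_portion), first_portion[0])   # 450 frame-550
```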
For example, if the user taps a capture button on the game controller, a screenshot or still picture can be taken. If the user holds down a capture button on a game controller, a video can be taken for the length of time the button is being held down. In other words, if the button is held down for 5 seconds, 5 seconds of the gameplay can be captured as the second portion of gameplay; if the button is held down for 10 seconds, 10 seconds of the gameplay can be captured; and so on. In another example, a screenshot or still picture can be taken if the user holds down a capture button, and a video can be taken if the user taps a capture button twice consecutively: once to start the capture, and again to end the capture. At processing block150, the first and second portions of the gameplay are stored. In an embodiment in which the first and second portions of the gameplay are videos, the first portion of the gameplay can be attached to the second portion of the gameplay, such that a single video without interruption is created. In one embodiment, the first and second portions of the gameplay can be stored locally on the game console in either temporary or permanent storage. Alternatively or additionally, the first and second portions of the gameplay can be transmitted over a network and stored remotely. For example, the first and second portions of the gameplay can be transmitted over a wireless or wired network to another computing device, to another game console, or to a remote server. Such remote servers may include social media servers. Optionally, portions of the gameplay not retrieved from the buffer or portions of the gameplay outside a particular gaming interval (e.g., a particular duration, level, chapter, course, etc.) can be removed from the buffer. This removal process can be completed using standard file operations on the operating system. At optional processing block160, the first and second portions of the gameplay are displayed. The first and second portions of the gameplay can be displayed on any of a number of display devices having access to the stored gameplay. For example, the stored gameplay can be displayed on a television set connected to the game console from which the gameplay was captured. In another example, the stored gameplay can be displayed on a computer to which the stored gameplay was transmitted. The stored gameplay can be displayed alone or in conjunction with other information, such as on a social media website. In one embodiment, the first and second portions of the gameplay are displayed by another game console associated with a user other than the user that buffered or captured the gameplay. According to this embodiment, the first and second portions of the gameplay may show a ball being thrown from a first user to a second user, from the point of view of the first user. The first and second portions of gameplay can then be transmitted to the game console of the second user. Thus, the second user can then view the gameplay from the point of view of the first user. The second user can also have third and fourth portions of gameplay stored showing the ball being thrown by the first user and caught by the second user, from the point of view of the second user. In this embodiment, the second user can review the gameplay from both the point of view of the first user and the point of view of the second user. 
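The tap-versus-hold interpretation of the capture button described above might be expressed as follows; the 0.3-second tap threshold and the function name are assumptions, since the description leaves the exact boundary between a tap and a hold open.

```python
# Sketch of one way to interpret the capture button: a short press yields a
# screenshot, while holding the button records video for as long as it is
# held. The 0.3 second tap threshold and the function name are assumptions.

TAP_THRESHOLD_SECONDS = 0.3   # assumed boundary between a "tap" and a "hold"

def interpret_capture_press(pressed_at, released_at):
    """Map a capture-button press to the second portion of gameplay.

    Returns ("screenshot", None) for a tap, or ("video", duration) when the
    button was held, where `duration` is how long the video capture lasts.
    """
    held_for = released_at - pressed_at
    if held_for <= TAP_THRESHOLD_SECONDS:
        return "screenshot", None
    return "video", held_for

if __name__ == "__main__":
    print(interpret_capture_press(10.0, 10.1))   # ('screenshot', None)
    print(interpret_capture_press(10.0, 15.0))   # ('video', 5.0)
```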
Still further, the third and fourth portions of the gameplay can be transmitted to the game console of the first user, so that the first user may review the gameplay from two points of view. This embodiment can apply to any number of users having any number of points of view, so that gameplay can be reviewed from any number of different perspectives. With respect to storage, transmission and/or display of the first and second portions of the gameplay as described herein, it is contemplated that the first and second portions of the gameplay can be stored, transmitted and displayed as image or video data. In another embodiment, however, the first and second portions of the gameplay can be stored and transmitted as telemetry or metadata representative of the image or video data, and can be recreated as images or video by a game console or other device prior to display. In some embodiments, the first portion of the gameplay has a predetermined relationship with the executed gameplay. For example, the first portion of the gameplay can correspond to a certain amount of gameplay prior to the currently executing gameplay, such as the previous 10 seconds of gameplay. In another embodiment, the first portion of the gameplay has a predetermined relationship with the second portion of the gameplay. For example, the first portion of the gameplay can correspond to a certain amount of gameplay prior to receipt of a request to capture the second portion of gameplay, such as the 10 seconds of gameplay prior to selection of the capture button. In each of these embodiments, the amount of gameplay buffered prior to the current gameplay or the requested gameplay can be configured and adjusted by the user according to his or her particular preferences. In other embodiments, the buffer is “smart” or “elastic”, such that it captures gameplay according to variables without regard to time. In one such embodiment, the first portion of the gameplay has a predetermined relationship with an event related to the gameplay. For example, the first portion of the gameplay may be buffered to include a statistical anomaly, such as a high score being reached, the gathering of a large number of points in a short amount of time, the multiple selections of buttons on a controller, and other rare events. Such statistical anomalies can be determined by comparing gameplay metrics to average metrics for a particular game or scene, or for all games generally. Such average metrics can be stored locally or remotely for comparison. For example, a game console can track local high scores for a particular game, and buffer gameplay in which a user approaches and surpasses that high score. In another example, a remote server can track global high scores for a particular game, and can communicate that information to the game console, which buffers gameplay in which the user approaches and surpasses that high score. In another example, the first portion of the gameplay can be buffered to include an achievement, such as a trophy being attained or other landmark being reached. Such trophies or landmarks memorialize any goal or gaming achievement, such as a certain number of points being attained, a certain level being reached, and the like. For example, gameplay can be buffered to include the awarding of a trophy for reaching level 10, for reaching 100,000 points, etc. 
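A sketch of one possible "elastic" retention trigger appears below: the buffered first portion is flagged for retention when the player approaches or surpasses a tracked high score, or when a trophy is awarded. The 5% "approaching" margin and the argument names are assumptions made for the sketch.

```python
# Sketch of an "elastic" buffering trigger: gameplay is flagged for retention
# when the player approaches or surpasses a tracked (local or global) high
# score, or when an achievement such as a trophy is awarded. The margin and
# the argument names are illustrative assumptions.

APPROACH_MARGIN = 0.95   # assumed: within 5% of the high score counts as "approaching"

def should_retain_buffer(current_score, high_score, trophies_awarded=()):
    """Decide whether the buffered first portion should be kept."""
    if trophies_awarded:                       # landmark reached, e.g. level 10
        return True
    if high_score <= 0:                        # nothing to compare against yet
        return False
    return current_score >= APPROACH_MARGIN * high_score

if __name__ == "__main__":
    print(should_retain_buffer(96_000, 100_000))                 # True (approaching)
    print(should_retain_buffer(40_000, 100_000))                 # False
    print(should_retain_buffer(40_000, 100_000, ("level 10",)))  # True (trophy)
```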
Similarly, progress toward reaching an event, in addition to the actual attainment of the trophy or statistical anomaly, can be buffered to be included in the first portion of the gameplay. For example, a screenshot can be taken at each of levels 1 through 10, creating a photo album to memorialize the receipt of a trophy for reaching level 10. In another example, a video can be taken of a user winning a race for the first through fifth times, where a trophy is awarded for 5 wins. Thus, according to the embodiment illustrated inFIG.1, at least a portion of executed gameplay can always be kept in a running buffer. In other words, when a request to capture a portion of the gameplay is received, a portion of the prior gameplay can already be captured to include previous footage. For example, if a request to capture gameplay is received after a user crosses the finish line in a racing game, the buffered gameplay can include footage of the user crossing the finish line. In other words, a user will be able to capture moments occurring before a request is made to capture the gameplay. FIG.2is a flowchart illustrating a method for embedding information such as links into stored gameplay in accordance with one embodiment. At processing block210, stored gameplay and its associated gameplay metadata is retrieved. The stored gameplay may be gameplay or portions thereof stored on any medium. In one embodiment, the stored gameplay comprises the first and second portions of gameplay discussed above with respect toFIG.1. Gameplay metadata may include, for example, the game title, game publisher, game developer, game distributor, game platform, game release date, game rating, game characters, game genre, game expansions, gameplay level or scene, length of stored gameplay, gameplay storage date, accessories used during gameplay, number of players, user ID of the user that captured the stored gameplay, user IDs of other users identified in the stored gameplay, and the like. At processing block220, relevant links are identified based on the gameplay metadata. Relevant links may be hyperlinks, for example. In one embodiment, relevant links are automatically created and generated based on the gameplay metadata. This embodiment can be implemented where websites are named according to a particular naming convention. For example, if a game's website address is assigned according to http://us.playstation.com/games-and-media/games/TITLE-OF-GAME-PLATFORM.html, where TITLE-OF-GAME is replaced with the game's title and PLATFORM is replaced with the game's platform, the method according to this embodiment could pull the title of the game and the game platform from the gameplay metadata, and insert the data into the website address to generate a link. For example, for a game entitled “Sample Game” available on the PS3, the following link could be automatically generated: http://us.playstation.com/games-and-media/games/sample-game-ps3.html. In another embodiment, relevant links are identified from a plurality of links provided by or available from the game console, the game itself, the gaming network, or a third party server. In this embodiment, relevant links can be selected based on their commonalities with the gameplay metadata. 
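Using the naming convention quoted above, automatic link generation might look like the following sketch; the slug rule (lower case, whitespace replaced with hyphens) is an assumption about how TITLE-OF-GAME and PLATFORM are filled in from the gameplay metadata.

```python
# Sketch of automatic link generation from gameplay metadata using the naming
# convention quoted in the text. The slug rule is an assumption about how
# TITLE-OF-GAME and PLATFORM are substituted into the template.

URL_TEMPLATE = ("http://us.playstation.com/games-and-media/games/"
                "{title}-{platform}.html")

def slugify(text):
    """Lower-case the text and replace runs of whitespace with hyphens."""
    return "-".join(text.lower().split())

def generate_game_link(gameplay_metadata):
    """Build the relevant link from the game title and platform metadata."""
    return URL_TEMPLATE.format(
        title=slugify(gameplay_metadata["game_title"]),
        platform=slugify(gameplay_metadata["game_platform"]),
    )

if __name__ == "__main__":
    metadata = {"game_title": "Sample Game", "game_platform": "PS3"}
    print(generate_game_link(metadata))
    # http://us.playstation.com/games-and-media/games/sample-game-ps3.html
```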
For example, relevant links could include links to the game title's store or purchase page, to the user profiles of other users identified in the stored gameplay, to an informational website about the game title, to a community website dedicated to the game title, to the user's trophy information, to downloadable content or game expansions used in the stored gameplay, to other videos of the same game title and/or game level, to other gameplay captured by the same user, to trailers of upcoming games in the same genre, to clan data, to contests, to advertisements, and the like. At processing block230, one or more of the relevant links are embedded into the stored gameplay. In one embodiment, the relevant links are graphically or textually embedded into or overlaid on the screenshot or video itself. In another embodiment, the relevant links are embedded as text accompanying the screenshot or video. At processing block240, the link-embedded gameplay is stored. In one embodiment, the link-embedded gameplay is stored locally on a game console in either temporary or permanent storage. Alternatively or additionally, the link-embedded gameplay can be transmitted over a network and stored remotely. For example, the link-embedded gameplay can be transmitted over a wireless or wired network to another computing device, to another game console, or to a remote server. Such remote servers may include social media servers. At optional processing block240, the link-embedded gameplay is displayed. The link-embedded gameplay can be displayed on any of a number of display devices having access to and capability to display the link-embedded gameplay. For example, the link-embedded gameplay can be displayed on a television set connected to the game console from which the gameplay was captured. In another example, the link-embedded gameplay can be displayed on a computer to which the stored gameplay was transmitted. The link-embedded gameplay can be displayed alone or in conjunction with other information, such as on a social media website. In one embodiment, the “sharing” of link-embedded gameplay by users can be encouraged by providing an incentive program. For example, the number of clicks of the relevant links can be tracked. In another example, where the link-embedded gameplay contains a link to a purchase website for the game, the number of game purchases can be tracked. These numbers can then be used to reward users for sharing and distributing link-embedded gameplay. In still another example where the link-embedded gameplay contains a link to a purchase website for the game, a discount on the game can be provided to those users clicking through link-embedded gameplay to encourage purchase of the game and distribution of the link-embedded gameplay. FIG.3is a flowchart illustrating a method for embedding information such as user IDs into stored gameplay in accordance with one embodiment. At processing block310, stored gameplay and gameplay metadata is retrieved. The stored gameplay may be gameplay or portions thereof stored on any medium. In one embodiment, the stored gameplay comprises the first and second portions of gameplay discussed above with respect toFIG.1. In another embodiment, the stored gameplay is the gameplay embedded with relevant links discussed above with respect toFIG.2. Gameplay metadata according to this embodiment includes at least one of the user ID of the user that captured the stored gameplay, and the user ID(s) of other user(s) present in the captured gameplay. 
The other user(s) present in the captured gameplay can be local users, such as a second user in a two player game connected to the same game console as the first user, or can be remote users, such as networked users connected to a different game console than the first user participating in a partially- or fully-online implemented game. At processing block320, user IDs are identified from the gameplay metadata. At processing block330, the user IDs are embedded into the stored gameplay. In one embodiment, the user IDs are graphically or textually embedded into or overlaid on the screenshot or video itself. In this embodiment, the user IDs can be embedded into or overlaid on their associated graphical representations. For example, if User_1 is represented by a red car in the stored gameplay, and User_2 is represented by a blue car in the stored gameplay, the tag “User_1” can be overlaid on or otherwise associated with the red car, and the tag “User_2” can be overlaid on or otherwise associated with the blue car. In another embodiment, the user IDs are embedded as text accompanying the screenshot or video. In the latter embodiment, the accompanying text can be text intended to be displayed, such as a description or title, or can be text intended to be invisible upon display, such as embedded gameplay metadata. It is contemplated that the accompanying text can be searchable. At processing block340, the ID-embedded gameplay is stored. In one embodiment, the ID-embedded gameplay is stored locally on a game console in either temporary or permanent storage. Alternatively or additionally, the ID-embedded gameplay can be transmitted over a network and stored remotely. For example, the ID-embedded gameplay can be transmitted over a wireless or wired network to another computing device, to another game console, or to a remote server. Such remote servers may include social media servers. At processing block350, the ID-embedded gameplay is displayed. The ID-embedded gameplay can be displayed on any of a number of display devices having access and capability to display the ID-embedded gameplay. For example, the ID-embedded gameplay can be displayed on a television set connected to the game console from which the gameplay was captured. In another example, the ID-embedded gameplay can be displayed on a computer to which the stored gameplay was transmitted. The ID-embedded gameplay can be displayed alone or in conjunction with other information, such as on a social media website. When displayed on a social media website, it is contemplated that the user tags can be compatible with the websites, such that the tags carry over to the social media website. Thus, according to the embodiment described with respect toFIG.3, the need to manually tag gameplay media with user ID's is eliminated by making the process automatic. FIG.4illustrates a system for effecting the acts of one or more of the methodologies described herein. Server410is connected over network440to a user device450. Server410includes processor420and memory430, which are in communication with one another. Server410is typically a computer system, and may be an HTTP (Hypertext Transfer Protocol) server, such as an Apache server. It is contemplated, however, that server410can be a single or multiple modules or devices hosting downloadable content or portions thereof. Further, server410can be a dedicated server, a shared server, or combinations thereof. 
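The automatic user-ID tagging described above might be sketched as follows; the record layout, the bounding-box anchors, and the function name are assumptions, since the description covers both graphical overlays on the associated representations and searchable accompanying text.

```python
# Sketch of automatic user-ID tagging: user IDs found in the gameplay metadata
# are attached to the stored gameplay, either as overlay tags anchored to the
# on-screen representation of each user (e.g. User_1 -> the red car) or as
# searchable accompanying text. Layout and anchor format are assumptions.

def embed_user_ids(stored_gameplay, gameplay_metadata, anchors=None):
    """Return a copy of the stored gameplay with user-ID tags embedded.

    `anchors` optionally maps a user ID to the screen region of its
    graphical representation (x, y, width, height).
    """
    user_ids = [gameplay_metadata.get("capturing_user")]
    user_ids += gameplay_metadata.get("other_users", [])
    user_ids = [uid for uid in user_ids if uid]

    tagged = dict(stored_gameplay)
    tagged["overlay_tags"] = [
        {"user_id": uid, "region": (anchors or {}).get(uid)} for uid in user_ids
    ]
    tagged["searchable_text"] = " ".join(user_ids)   # text accompanying the media
    return tagged

if __name__ == "__main__":
    gameplay = {"media": "race_clip.mp4"}
    metadata = {"capturing_user": "User_1", "other_users": ["User_2"]}
    anchors = {"User_1": (120, 80, 60, 40), "User_2": (300, 90, 60, 40)}
    print(embed_user_ids(gameplay, metadata, anchors)["overlay_tags"])
```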
For example, server410can be a server associated with the developer, publisher or distributor of the application460, or a third-party server, such as a peer device in a peer-to-peer (P2P) network. In addition, server410can comprise a virtual market or online shopping-based service offering the application460. In this embodiment, server410(alone or in combination with other devices) can process and perform various commercial transactions, such as billing, in addition to those acts described herein. User device450includes application460, input device465, operating system470, processor480, and memory490, which are in communication with one another. In one embodiment, user device450is a game console. In that embodiment, application460may be a game, and input device465may be a controller. Server410and user device450are characterized in that they are capable of being connected to network440. Network440can be wired or wireless, and can include a local area network (LAN), wide area network (WAN), a telephone network (such as the Public Switched Telephone Network (PSTN)), a radio network, a cellular or mobile phone network (such as GSM, GPRS, CDMA, EV-DO, EDGE, 3GSM, DECT, IS-136/TDMA, iDEN, and the like), intranet, the Internet, or combinations thereof. Memory430and memory490may be any type of storage media that may be volatile or non-volatile memory that includes, for example, read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, zip drives, and combinations thereof. Memory430and memory490can be capable of permanent or temporary storage, or both; and can be internal, external, or both. In use, application460makes calls to operating system470to load and access data stored in memory490, using standard file operations. Application460can be any software and/or hardware that provides an interface between a user of user device450(via input device465) and operating system470. The standard file operations include, for example, “open” (i.e., specifying which file is to be accessed), “seek” (i.e., specifying what position to go to in the file to read data), “read” (i.e., requesting that data be read from the file and copied to application460), and “close” (i.e., requesting that the file be closed for now). 
Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. According to some embodiments, computer system500comprises processor550(e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both), main memory560(e.g., read only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.) and/or static memory570(e.g., flash memory, static random access memory (SRAM), etc.), which communicate with each other via bus595. According to some embodiments, computer system500may further comprise video display unit510(e.g., a liquid crystal display (LCD), a light-emitting diode display (LED), an electroluminescent display (ELD), plasma display panels (PDP), an organic light-emitting diode display (OLED), a surface-conduction electron-emitted display (SED), a nanocrystal display, a 3D display, or a cathode ray tube (CRT)). According to some embodiments, computer system500also may comprise alphanumeric input device515(e.g., a keyboard), cursor control device520(e.g., a controller or mouse), disk drive unit530, signal generation device540(e.g., a speaker), and/or network interface device580. Disk drive unit530includes computer-readable medium534on which is stored one or more sets of instructions (e.g., software536) embodying any one or more of the methodologies or functions described herein. Software536may also reside, completely or at least partially, within main memory560and/or within processor550during execution thereof by computer system500, main memory560and processor550. Processor550and main memory560can also constitute computer-readable media having instructions554and564, respectively. Software536may further be transmitted or received over network590via network interface device580. While computer-readable medium534is shown in an exemplary embodiment to be a single medium, the term “computer-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the disclosed embodiments. The term “computer-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media. It should be understood that processes and techniques described herein are not inherently related to any particular apparatus and may be implemented by any suitable combination of components. Further, various types of general purpose devices may be used in accordance with the teachings described herein. It may also prove advantageous to construct a specialized apparatus to perform the methods described herein. Those skilled in the art will appreciate that many different combinations of hardware, software, and firmware will be suitable for practicing the disclosed embodiments. Embodiments have been described in relation to particular examples, which are intended in all respects to be illustrative rather than restrictive. 
Further, while embodiments have been described in connection with a number of examples and implementations, it is understood that various modifications and equivalent arrangements can be made to the examples while remaining within the scope of the inventive embodiments. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. Various aspects and/or components of the described embodiments may be used singly or in any combination. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
28,036
11857875
DETAILED DESCRIPTION A system and method for capturing and sharing console gaming data is described. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the exemplary embodiments. It is apparent to one skilled in the art, however, that embodiments can be practiced without these specific details or with an equivalent arrangement. In some instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the embodiments. Referring now to the drawings, wherein like reference numerals designate identical or corresponding parts throughout the several views,FIG.1is a flowchart illustrating a method for storing gameplay according to one embodiment. At processing block110, gameplay is executed. Gameplay can be executed by the operating system of a game console in response to a user request, which can come in the form of a standard file operation with respect to a set of data associated with the desired gameplay. The request can be transmitted from an application associated with a game. The gameplay can comprise, for example, video content, audio content and/or static visual content, including wallpapers, themes, “add-on” content, or any other type of content associated with a game. It is contemplated that such content can be user- or developer-generated, free or paid, full or trial, and/or for sale or for rent. At processing block120, a first portion of the gameplay is buffered, i.e., stored temporarily. For example, the previous 15 seconds, the previously completed level, or the previous action within the gameplay can be stored temporarily, as described further herein. The term “portion” used herein can correspond to any part of the gameplay that is divisible into any related or arbitrary groups of single or multiple bits or bytes of data. For example, “portions” of gameplay may correspond to levels, chapters, scenes, acts, characters, backgrounds, textures, courses, actions, songs, themes, durations, sizes, files, parts thereof, and combinations thereof. Further, portions of gameplay can comprise screenshots or prescribed durations of video capture. At processing block130, a request to capture a second portion of the gameplay is received. The request to capture the second portion of the gameplay can be a user request, which can come in the form of a standard file operation with respect to a set of data associated with the gameplay to be captured. A user can request to capture a second portion of the gameplay by, for example, selecting a button on a game controller, as described further herein. The second portion of the gameplay reflects gameplay subsequent to the first portion of the gameplay. In other words, the first portion of the gameplay reflects gameplay that occurred prior to receipt of the user request to capture the second portion of the gameplay. The second portion of the gameplay reflects gameplay that occurred after receipt of the user request to capture the second portion of the gameplay. Thus, the first portion of the gameplay is a past portion of the gameplay that has already been played, while the second portion of the gameplay begins with a current portion of the gameplay that is being executed. At processing block140, the second portion of the gameplay is captured. In one embodiment, the second portion of the gameplay is captured according to the user's request. A simplified sketch of this buffer-and-capture flow is provided below. 
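In the following Python sketch, a fixed-length rolling buffer continuously receives frames of the executing gameplay, and a capture request joins the buffered first portion with a newly recorded second portion so that a single uninterrupted clip can be stored. The names (GameplayBuffer, capture) and the frame-based timing are illustrative assumptions only and do not describe any particular console implementation.

from collections import deque

class GameplayBuffer:
    """Keeps a rolling window of recent gameplay frames (the 'first portion')."""

    def __init__(self, seconds=15, fps=30):
        # Oldest frames fall off automatically once the window is full.
        self.frames = deque(maxlen=seconds * fps)

    def push(self, frame):
        self.frames.append(frame)

    def snapshot(self):
        # Copy of everything currently buffered, oldest frame first.
        return list(self.frames)

def capture(buffer, record_second_portion):
    """Join the buffered first portion with a newly recorded second portion."""
    first_portion = buffer.snapshot()
    second_portion = record_second_portion()  # e.g. frames recorded while a button is held
    return first_portion + second_portion

if __name__ == "__main__":
    buf = GameplayBuffer(seconds=2, fps=10)            # keep the last 2 seconds at 10 fps
    for frame_number in range(100):                    # simulate executing gameplay
        buf.push(f"frame-{frame_number}")
    clip = capture(buf, lambda: [f"frame-{n}" for n in range(100, 130)])
    print(len(clip), clip[0], clip[-1])                # 50 frames: 20 buffered + 30 captured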
For example, if the user taps a capture button on the game controller, a screenshot or still picture can be taken. If the user holds down a capture button on a game controller, a video can be taken for the length of time the button is being held down. In other words, if the button is held down for 5 seconds, 5 seconds of the gameplay can be captured as the second portion of gameplay; if the button is held down for 10 seconds, 10 seconds of the gameplay can be captured; and so on. In another example, a screenshot or still picture can be taken if the user holds down a capture button, and a video can be taken if the user taps a capture button twice consecutively: once to start the capture, and again to end the capture. At processing block150, the first and second portions of the gameplay are stored. In an embodiment in which the first and second portions of the gameplay are videos, the first portion of the gameplay can be attached to the second portion of the gameplay, such that a single video without interruption is created. In one embodiment, the first and second portions of the gameplay can be stored locally on the game console in either temporary or permanent storage. Alternatively or additionally, the first and second portions of the gameplay can be transmitted over a network and stored remotely. For example, the first and second portions of the gameplay can be transmitted over a wireless or wired network to another computing device, to another game console, or to a remote server. Such remote servers may include social media servers. Optionally, portions of the gameplay not retrieved from the buffer or portions of the gameplay outside a particular gaming interval (e.g., a particular duration, level, chapter, course, etc.) can be removed from the buffer. This removal process can be completed using standard file operations on the operating system. At optional processing block160, the first and second portions of the gameplay are displayed. The first and second portions of the gameplay can be displayed on any of a number of display devices having access to the stored gameplay. For example, the stored gameplay can be displayed on a television set connected to the game console from which the gameplay was captured. In another example, the stored gameplay can be displayed on a computer to which the stored gameplay was transmitted. The stored gameplay can be displayed alone or in conjunction with other information, such as on a social media website. In one embodiment, the first and second portions of the gameplay are displayed by another game console associated with a user other than the user that buffered or captured the gameplay. According to this embodiment, the first and second portions of the gameplay may show a ball being thrown from a first user to a second user, from the point of view of the first user. The first and second portions of gameplay can then be transmitted to the game console of the second user. Thus, the second user can then view the gameplay from the point of view of the first user. The second user can also have third and fourth portions of gameplay stored showing the ball being thrown by the first user and caught by the second user, from the point of view of the second user. In this embodiment, the second user can review the gameplay from both the point of view of the first user and the point of view of the second user. 
Still further, the third and fourth portions of the gameplay can be transmitted to the game console of the first user, so that the first user may review the gameplay from two points of view. This embodiment can apply to any number of users having any number of points of view, so that gameplay can be reviewed from any number of different perspectives. With respect to storage, transmission and/or display of the first and second portions of the gameplay as described herein, it is contemplated that the first and second portions of the gameplay can be stored, transmitted and displayed as image or video data. In another embodiment, however, the first and second portions of the gameplay can be stored and transmitted as telemetry or metadata representative of the image or video data, and can be recreated as images or video by a game console or other device prior to display. In some embodiments, the first portion of the gameplay has a predetermined relationship with the executed gameplay. For example, the first portion of the gameplay can correspond to a certain amount of gameplay prior to the currently executing gameplay, such as the previous 10 seconds of gameplay. In another embodiment, the first portion of the gameplay has a predetermined relationship with the second portion of the gameplay. For example, the first portion of the gameplay can correspond to a certain amount of gameplay prior to receipt of a request to capture the second portion of gameplay, such as the 10 seconds of gameplay prior to selection of the capture button. In each of these embodiments, the amount of gameplay buffered prior to the current gameplay or the requested gameplay can be configured and adjusted by the user according to his or her particular preferences. In other embodiments, the buffer is “smart” or “elastic”, such that it captures gameplay according to variables without regard to time. In one such embodiment, the first portion of the gameplay has a predetermined relationship with an event related to the gameplay. For example, the first portion of the gameplay may be buffered to include a statistical anomaly, such as a high score being reached, the gathering of a large number of points in a short amount of time, the multiple selections of buttons on a controller, and other rare events. Such statistical anomalies can be determined by comparing gameplay metrics to average metrics for a particular game or scene, or for all games generally. Such average metrics can be stored locally or remotely for comparison. For example, a game console can track local high scores for a particular game, and buffer gameplay in which a user approaches and surpasses that high score. In another example, a remote server can track global high scores for a particular game, and can communicate that information to the game console, which buffers gameplay in which the user approaches and surpasses that high score. In another example, the first portion of the gameplay can be buffered to include an achievement, such as a trophy being attained or other landmark being reached. Such trophies or landmarks memorialize any goal or gaming achievement, such as a certain number of points being attained, a certain level being reached, and the like. For example, gameplay can be buffered to include the awarding of a trophy for reaching level 10, for reaching 100,000 points, etc. A simplified sketch of such an anomaly check is given below. 
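The following hypothetical Python function flags buffered gameplay for retention when the current score approaches or surpasses a tracked high score; the 90% approach margin and the function name are assumptions chosen only for illustration, not part of any particular console's buffering logic.

def should_retain_buffer(current_score, tracked_high_score, approach_margin=0.9):
    """Flag buffered gameplay when the player approaches or surpasses a tracked
    high score, one possible form of statistical anomaly."""
    return current_score >= approach_margin * tracked_high_score

# Example with a tracked high score of 50,000 points:
print(should_retain_buffer(44_000, 50_000))  # False: not yet within 90% of the record
print(should_retain_buffer(47_500, 50_000))  # True: approaching the high score
print(should_retain_buffer(51_000, 50_000))  # True: the record has been surpassed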
Similarly, progress toward reaching an event, in addition to the actual attainment of the trophy or statistical anomaly, can be buffered to be included in the first portion of the gameplay. For example, a screenshot can be taken at each of levels 1 through 10, creating a photo album to memorialize the receipt of a trophy for reaching level 10. In another example, a video can be taken of a user winning a race for the first through fifth times, where a trophy is awarded for 5 wins. Thus, according to the embodiment illustrated inFIG.1, at least a portion of executed gameplay can always be kept in a running buffer. In other words, when a request to capture a portion of the gameplay is received, a portion of the prior gameplay can already be captured to include previous footage. For example, if a request to capture gameplay is received after a user crosses the finish line in a racing game, the buffered gameplay can include footage of the user crossing the finish line. In other words, a user will be able to capture moments occurring before a request is made to capture the gameplay. FIG.2is a flowchart illustrating a method for embedding information such as links into stored gameplay in accordance with one embodiment. At processing block210, stored gameplay and its associated gameplay metadata is retrieved. The stored gameplay may be gameplay or portions thereof stored on any medium. In one embodiment, the stored gameplay comprises the first and second portions of gameplay discussed above with respect toFIG.1. Gameplay metadata may include, for example, the game title, game publisher, game developer, game distributor, game platform, game release date, game rating, game characters, game genre, game expansions, gameplay level or scene, length of stored gameplay, gameplay storage date, accessories used during gameplay, number of players, user ID of the user that captured the stored gameplay, user IDs of other users identified in the stored gameplay, and the like. At processing block220, relevant links are identified based on the gameplay metadata. Relevant links may be hyperlinks, for example. In one embodiment, relevant links are automatically created and generated based on the gameplay metadata. This embodiment can be implemented where websites are named according to a particular naming convention. For example, if a game's website address is assigned according to http://us.playstation.com/games-and-media/games/TITLE-OF-GAME-PLATFORM.html, where TITLE-OF-GAME is replaced with the game's title and PLATFORM is replaced with the game's platform, the method according to this embodiment could pull the title of the game and the game platform from the gameplay metadata, and insert the data into the website address to generate a link. For example, for a game entitled “Sample Game” available on the PS3, the following link could be automatically generated: http://us.playstation.com/games-and-media/games/sample-game-ps3.html. In another embodiment, relevant links are identified from a plurality of links provided by or available from the game console, the game itself, the gaming network, or a third party server. In this embodiment, relevant links can be selected based on their commonalities with the gameplay metadata. 
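The naming-convention approach and the commonality-based selection just described might be sketched as follows. The template string reuses the example address given above; the helper names and the simple term-matching rule are illustrative assumptions rather than an actual gaming-network API.

GAME_PAGE_TEMPLATE = "http://us.playstation.com/games-and-media/games/{title}-{platform}.html"

def generate_game_link(metadata):
    """Fill the naming-convention template with fields pulled from gameplay metadata."""
    title_slug = metadata["title"].lower().replace(" ", "-")
    return GAME_PAGE_TEMPLATE.format(title=title_slug, platform=metadata["platform"].lower())

def select_relevant_links(candidate_links, metadata):
    """Alternatively, keep only those provided links that share terms with the metadata."""
    terms = {metadata["title"].lower().replace(" ", "-"), metadata["platform"].lower()}
    return [link for link in candidate_links if any(term in link.lower() for term in terms)]

metadata = {"title": "Sample Game", "platform": "PS3"}
print(generate_game_link(metadata))
# http://us.playstation.com/games-and-media/games/sample-game-ps3.html
print(select_relevant_links(
    ["http://example.com/store/sample-game", "http://example.com/other-title"], metadata))
# ['http://example.com/store/sample-game']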
For example, relevant links could include links to the game title's store or purchase page, to the user profiles of other users identified in the stored gameplay, to an informational website about the game title, to a community website dedicated to the game title, to the user's trophy information, to downloadable content or game expansions used in the stored gameplay, to other videos of the same game title and/or game level, to other gameplay captured by the same user, to trailers of upcoming games in the same genre, to clan data, to contests, to advertisements, and the like. At processing block230, one or more of the relevant links are embedded into the stored gameplay. In one embodiment, the relevant links are graphically or textually embedded into or overlaid on the screenshot or video itself. In another embodiment, the relevant links are embedded as text accompanying the screenshot or video. At processing block240, the link-embedded gameplay is stored. In one embodiment, the link-embedded gameplay is stored locally on a game console in either temporary or permanent storage. Alternatively or additionally, the link-embedded gameplay can be transmitted over a network and stored remotely. For example, the link-embedded gameplay can be transmitted over a wireless or wired network to another computing device, to another game console, or to a remote server. Such remote servers may include social media servers. At optional processing block250, the link-embedded gameplay is displayed. The link-embedded gameplay can be displayed on any of a number of display devices having access to and capability to display the link-embedded gameplay. For example, the link-embedded gameplay can be displayed on a television set connected to the game console from which the gameplay was captured. In another example, the link-embedded gameplay can be displayed on a computer to which the stored gameplay was transmitted. The link-embedded gameplay can be displayed alone or in conjunction with other information, such as on a social media website. In one embodiment, the “sharing” of link-embedded gameplay by users can be encouraged by providing an incentive program. For example, the number of clicks of the relevant links can be tracked. In another example, where the link-embedded gameplay contains a link to a purchase website for the game, the number of game purchases can be tracked. These numbers can then be used to reward users for sharing and distributing link-embedded gameplay. In still another example where the link-embedded gameplay contains a link to a purchase website for the game, a discount on the game can be provided to those users clicking through link-embedded gameplay to encourage purchase of the game and distribution of the link-embedded gameplay. FIG.3is a flowchart illustrating a method for embedding information such as user IDs into stored gameplay in accordance with one embodiment. At processing block310, stored gameplay and gameplay metadata are retrieved. The stored gameplay may be gameplay or portions thereof stored on any medium. In one embodiment, the stored gameplay comprises the first and second portions of gameplay discussed above with respect toFIG.1. In another embodiment, the stored gameplay is the gameplay embedded with relevant links discussed above with respect toFIG.2. Gameplay metadata according to this embodiment includes at least one of the user ID of the user that captured the stored gameplay, and the user ID(s) of other user(s) present in the captured gameplay. 
The other user(s) present in the captured gameplay can be local users, such as a second user in a two player game connected to the same game console as the first user, or can be remote users, such as networked users connected to a different game console than the first user participating in a partially- or fully-online implemented game. At processing block320, user IDs are identified from the gameplay metadata. At processing block330, the user IDs are embedded into the stored gameplay. In one embodiment, the user IDs are graphically or textually embedded into or overlaid on the screenshot or video itself. In this embodiment, the user IDs can be embedded into or overlaid on their associated graphical representations. For example, if User_1 is represented by a red car in the stored gameplay, and User_2 is represented by a blue car in the stored gameplay, the tag “User_1” can be overlaid on or otherwise associated with the red car, and the tag “User_2” can be overlaid on or otherwise associated with the blue car. In another embodiment, the user IDs are embedded as text accompanying the screenshot or video. In the latter embodiment, the accompanying text can be text intended to be displayed, such as a description or title, or can be text intended to be invisible upon display, such as embedded gameplay metadata. It is contemplated that the accompanying text can be searchable. At processing block340, the ID-embedded gameplay is stored. In one embodiment, the ID-embedded gameplay is stored locally on a game console in either temporary or permanent storage. Alternatively or additionally, the ID-embedded gameplay can be transmitted over a network and stored remotely. For example, the ID-embedded gameplay can be transmitted over a wireless or wired network to another computing device, to another game console, or to a remote server. Such remote servers may include social media servers. At processing block350, the ID-embedded gameplay is displayed. The ID-embedded gameplay can be displayed on any of a number of display devices having access to and capability to display the ID-embedded gameplay. For example, the ID-embedded gameplay can be displayed on a television set connected to the game console from which the gameplay was captured. In another example, the ID-embedded gameplay can be displayed on a computer to which the stored gameplay was transmitted. The ID-embedded gameplay can be displayed alone or in conjunction with other information, such as on a social media website. When displayed on a social media website, it is contemplated that the user tags can be compatible with the websites, such that the tags carry over to the social media website. Thus, according to the embodiment described with respect toFIG.3, the need to manually tag gameplay media with user IDs is eliminated by making the process automatic. FIG.4illustrates a system for effecting the acts of one or more of the methodologies described herein. Server410is connected over network440to a user device450. Server410includes processor420and memory430, which are in communication with one another. Server410is typically a computer system, and may be an HTTP (Hypertext Transfer Protocol) server, such as an Apache server. It is contemplated, however, that server410can be a single or multiple modules or devices hosting downloadable content or portions thereof. Further, server410can be a dedicated server, a shared server, or combinations thereof. 
For example, server410can be a server associated with the developer, publisher or distributor of the application460, or a third-party server, such as a peer device in a peer-to-peer (P2P) network. In addition, server410can comprise a virtual market or online shopping-based service offering the application460. In this embodiment, server410(alone or in combination with other devices) can process and perform various commercial transactions, such as billing, in addition to those acts described herein. User device450includes application460, input device465, operating system470, processor480, and memory490, which are in communication with one another. In one embodiment, user device450is a game console. In that embodiment, application460may be a game, and input device465may be a controller. Server410and user device450are characterized in that they are capable of being connected to network440. Network440can be wired or wireless, and can include a local area network (LAN), wide area network (WAN), a telephone network (such as the Public Switched Telephone Network (PSTN)), a radio network, a cellular or mobile phone network (such as GSM, GPRS, CDMA, EV-DO, EDGE, 3GSM, DECT, IS-136/TDMA, iDEN, and the like), intranet, the Internet, or combinations thereof. Memory430and memory490may be any type of storage media that may be volatile or non-volatile memory that includes, for example, read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, zip drives, and combinations thereof. Memory430and memory490can be capable of permanent or temporary storage, or both; and can be internal, external, or both. In use, application460makes calls to operating system470to load and access data stored in memory490, using standard file operations. Application460can be any software and/or hardware that provides an interface between a user of user device450(via input device465) and operating system470. The standard file operations include, for example, “open” (i.e., specifying which file is to be accessed), “seek” (i.e., specifying what position to go to in the file to read data), “read” (i.e., requesting that data be read from the file and copied to application460), and “close” (i.e., requesting that the file be closed for now). FIG.5shows a diagrammatic representation of a machine in the exemplary form of computer system500within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in a server-client network environment, as a host machine, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a network router, switch or bridge, a game console, a television, a CD player, a DVD player, a BD player, an e-reader, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. 
Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. According to some embodiments, computer system500comprises processor550(e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both), main memory560(e.g., read only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.) and/or static memory570(e.g., flash memory, static random access memory (SRAM), etc.), which communicate with each other via bus595. According to some embodiments, computer system500may further comprise video display unit510(e.g., a liquid crystal display (LCD), a light-emitting diode display (LED), an electroluminescent display (ELD), plasma display panels (PDP), an organic light-emitting diode display (OLED), a surface-conduction electron-emitted display (SED), a nanocrystal display, a 3D display, or a cathode ray tube (CRT)). According to some embodiments, computer system500also may comprise alphanumeric input device515(e.g., a keyboard), cursor control device520(e.g., a controller or mouse), disk drive unit530, signal generation device540(e.g., a speaker), and/or network interface device580. Disk drive unit530includes computer-readable medium534on which is stored one or more sets of instructions (e.g., software536) embodying any one or more of the methodologies or functions described herein. Software536may also reside, completely or at least partially, within main memory560and/or within processor550during execution thereof by computer system500, main memory560and processor550. Processor550and main memory560can also constitute computer-readable media having instructions554and564, respectively. Software536may further be transmitted or received over network590via network interface device580. While computer-readable medium534is shown in an exemplary embodiment to be a single medium, the term “computer-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the disclosed embodiments. The term “computer-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media. It should be understood that processes and techniques described herein are not inherently related to any particular apparatus and may be implemented by any suitable combination of components. Further, various types of general purpose devices may be used in accordance with the teachings described herein. It may also prove advantageous to construct a specialized apparatus to perform the methods described herein. Those skilled in the art will appreciate that many different combinations of hardware, software, and firmware will be suitable for practicing the disclosed embodiments. Embodiments have been described in relation to particular examples, which are intended in all respects to be illustrative rather than restrictive. 
Further, while embodiments have been described in connection with a number of examples and implementations, it is understood that various modifications and equivalent arrangements can be made to the examples while remaining within the scope of the inventive embodiments. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. Various aspects and/or components of the described embodiments may be used singly or in any combination. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
28,030
11857876
DETAILED DESCRIPTION OF THE INVENTION The invention described herein relates to a system and method for providing dynamically variable maps in a video game. Exemplary System Architecture FIGS.1A and1Beach depict an exemplary architecture of a system100which may include one or more computer systems110, one or more servers150, one or more databases160, and/or other components, according to one implementation of the invention. FIG.1Aillustrates an implementation in which server(s)150function as a host computer that hosts gameplay between other devices, such as computer system(s)110. FIG.1Billustrates an implementation in which a given computer system110functions as a host computer that hosts gameplay between (or with) other devices, such as other computer system(s)110. Unless specifically stated otherwise, the description of various system components may refer to either or both ofFIGS.1A and1B. Computer System110 Computer system110may be configured as a gaming console, a handheld gaming device, a personal computer (e.g., a desktop computer, a laptop computer, etc.), a smartphone, a tablet computing device, and/or other device that can be used to interact with an instance of a video game. Referring toFIG.1B, computer system110may include one or more processors112(also interchangeably referred to herein as processors112, processor(s)112, or processor112for convenience), one or more storage devices114(which may store a map management application120), one or more peripherals140, and/or other components. Processors112may be programmed by one or more computer program instructions. For example, processors112may be programmed by map management application120and/or other instructions (such as gaming instructions used to instantiate the game). Depending on the system configuration, map management application120(or portions thereof) may be part of a game application, which creates a game instance to facilitate gameplay. Alternatively or additionally, map management application120may run on a device such as a server150. Map management application120may include instructions that program computer system110. The instructions may include, without limitation, a matchmaking engine122, a map selection engine124, a trigger detection engine128, a map management engine130, and/or other instructions132that program computer system110to perform various operations, each of which are described in greater detail herein. As used herein, for convenience, the various instructions will be described as performing an operation, when, in fact, the various instructions program the processors112(and therefore computer system110) to perform the operation. Peripherals140 Peripherals140may be used to obtain an input (e.g., direct input, measured input, etc.) from a player. Peripherals140may include, without limitation, a game controller, a gamepad, a keyboard, a mouse, an imaging device such as a camera, a motion sensing device, a light sensor, a biometric sensor, and/or other peripheral device that can obtain an input from a player. Peripherals140may be coupled to a corresponding computer system110via a wired and/or wireless connection. Server150 Server150may include one or more computing devices. Referring toFIG.1A, server150may include one or more physical processors152(also interchangeably referred to herein as processors152, processor(s)152, or processor152for convenience) programmed by computer program instructions, one or more storage devices154(which may store a map management application120), and/or other components. 
Processors152may be programmed by one or more computer program instructions. For example, processors152may be programmed by gaming instructions used to instantiate the game. Depending on the system configuration, map management application120(or portions thereof) may be part of a game application, which creates a game instance to facilitate gameplay. Alternatively or additionally, portions or all of map management application120may run on computer system110or server150. Map management application120may include instructions that program server150. The instructions may include, without limitation, a matchmaking engine122, a map selection engine124, a trigger detection engine128, a map management engine130, and/or other instructions132that program server150to perform various operations, each of which are described in greater detail herein. As used herein, for convenience, the various instructions will be described as performing an operation, when, in fact, the various instructions program the processors152(and therefore server150) to perform the operation. Although each is illustrated inFIGS.1A and1Bas a single component, computer system110and server150may each include a plurality of individual components (e.g., computer devices) each programmed with at least some of the functions described herein. In this manner, some components of computer system110and/or server150may perform some functions while other components may perform other functions, as would be appreciated. Thus, either or both server150and computer system100may function as a host computer programmed by map management application120. The one or more processors (112,152) may each include one or more physical processors that are programmed by computer program instructions. The various instructions described herein are exemplary only. Other configurations and numbers of instructions may be used, so long as the processor(s) (112,152) are programmed to perform the functions described herein. Furthermore, it should be appreciated that although the various instructions are illustrated inFIG.1as being co-located within a single processing unit, in implementations in which processor(s) (112,152) includes multiple processing units, one or more instructions may be executed remotely from the other instructions. The description of the functionality provided by the different instructions described herein is for illustrative purposes, and is not intended to be limiting, as any of instructions may provide more or less functionality than is described. For example, one or more of the instructions may be eliminated, and some or all of its functionality may be provided by other ones of the instructions. As another example, processor(s) (112,152) may be programmed by one or more additional instructions that may perform some or all of the functionality attributed herein to one of the instructions. Storage Devices114 The various instructions described herein may be stored in one or more storage devices, such as storage device (114,154), which may comprise random access memory (RAM), read only memory (ROM), and/or other memory. The storage device may store the computer program instructions (e.g., the aforementioned instructions) to be executed by processor (112,152) as well as data that may be manipulated by processor (112,152). The storage device may comprise floppy disks, hard disks, optical disks, tapes, or other storage media for storing computer-executable instructions and/or data. 
Network102 The various components illustrated inFIG.1may be coupled to at least one other component via a network102, which may include any one or more of, for instance, the Internet, an intranet, a PAN (Personal Area Network), a LAN (Local Area Network), a WAN (Wide Area Network), a SAN (Storage Area Network), a MAN (Metropolitan Area Network), a wireless network, a cellular communications network, a Public Switched Telephone Network, and/or other network. InFIG.1, as well as in other drawing Figures, different numbers of entities than those depicted may be used. Furthermore, according to various implementations, the components described herein may be implemented in hardware and/or software that configure hardware. Databases160 The various databases160described herein may be, include, or interface to, for example, an Oracle™ relational database sold commercially by Oracle Corporation. Other databases, such as Informix™, DB2 (Database 2) or other data storage, including file-based, or query formats, platforms, or resources such as OLAP (On Line Analytical Processing), SQL (Structured Query Language), a SAN (storage area network), Microsoft Access™ or others may also be used, incorporated, or accessed. The database may comprise one or more such databases that reside in one or more physical devices and in one or more physical locations. The database may store a plurality of types of data and/or files and associated data or file descriptions, administrative information, or any other data. The foregoing system architecture is exemplary only and should not be viewed as limiting. Other system configurations may be used as well, as would be appreciated by those having skill in the art. Exemplary Multiplayer System Configurations As noted above, a multiplayer video game is a video game in which two or more players play in a gameplay session in a cooperative or adversarial relationship. Multiplayer video games have exploded in popularity due, in part, to services such as Microsoft's Xbox LIVE® and Sony's PlayStation Network® which enable gamers all over the world to play with or against one another. Typically, when a player logs in to a game system or platform to play a multiplayer video game, the player may engage in a gameplay session in which he or she is matched with other players to play together (on the same team or as opponents). FIG.2Aillustrates an exemplary system configuration200A in which a server hosts a plurality of computer devices to facilitate a multiplayer game, according to an implementation of the invention. In one implementation, one or more servers150may host a number of computer systems110(illustrated as computer systems110A,110B, . . . ,110N) via network102. Each computer system110may include one or more peripherals (illustrated as peripherals140A,140B, . . . ,140N). In this manner, one or more servers150may facilitate the gameplay of different players using different computer systems110and/or otherwise provide one or more operations of map management application120(illustrated inFIG.1). In some instances, a given server150may be associated with a proprietary gameplay network system, such as, without limitation, Microsoft's Xbox LIVE® and Sony's PlayStation Network®, and/or another type of gameplay network system. In this implementation, a given computer system110may be associated with a particular type of gaming console. Other types of computer systems110using other types of gameplay networks may be used as well. 
FIG.2Billustrates an exemplary system configuration200B in which a plurality of computer systems110are networked together to facilitate a multiplayer game, according to an implementation of the invention. Any one or more of the computer devices110may serve as a host and/or otherwise provide one or more operations of map management application120(illustrated inFIG.1). FIG.2Cillustrates an exemplary system configuration200C in which a computer system110is used by a plurality of users to facilitate a multiplayer game, according to an implementation of the invention. In an implementation, computer system110may be considered to host the multiplayer game and/or otherwise provide one or more operations of map management application120(illustrated inFIG.1). Referring toFIGS.2A-2C, in an implementation, a host may facilitate the multiplayer game and/or perform other operations described herein. In an implementation, at least some of these operations may also or instead be performed by an individual computer system110. Furthermore, the illustrated system configurations are exemplary only and should not be viewed as limiting in any way. Other system configurations may be used as well, as would be appreciated by those having skill in the art. While aspects of the invention may be described with reference to multiplayer video games, it should be recognized that the features and functionality described herein are equally applicable to a single player video game. Generating Matches According to an aspect of the invention, matchmaking engine122may identify one or more players that are waiting to be matched, such as players whose in-game avatars are waiting in a virtual game lobby to join a gameplay session. The gameplay session may comprise any type of gameplay session including, without limitation, a real gameplay session and/or a practice gameplay session (e.g., associated with a “practice” or “training” mode of a game). In one implementation, a player may be added to a gameplay session immediately if there is an opening. In another implementation, one or more gameplay sessions may be dynamically combined to create a single gameplay session involving the aggregate of all players in each of the original gameplay sessions. A gameplay session may be dynamically split to create two or more gameplay sessions, where a matchmaking engine may determine which players from the original sessions are grouped and placed into the resulting two or more gameplay sessions. In one implementation, matchmaking engine122may generate one or more matches by grouping two or more of the identified players. The number of players placed in each match may depend on a number of players waiting to be matched, a number of players needed for a game session (e.g., a number of players needed to form a team or start a match), a number of players that can be accommodated by a game session, and/or other information. Different matches may include different combinations of different players, which may include different numbers of players. Matchmaking engine122may use known or hereafter-developed matchmaking techniques to generate a match (e.g., interchangeably referred to herein as “matchmaking”) by grouping players in an effort to produce the most satisfying player experiences. Game profiles, player profiles, match variables, and other factors may be considered when generating matches. 
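As a simplified, hypothetical sketch of grouping waiting players into matches, the following Python function fills fixed-size matches and leaves any remainder waiting; an actual matchmaking engine would additionally weigh game profiles, player profiles, match variables, and the other factors described above. The function name and the grouping rule are assumptions for illustration only.

def generate_matches(waiting_players, players_per_match):
    """Group waiting players into fixed-size matches; any remainder keeps waiting."""
    matches = []
    for start in range(0, len(waiting_players) - players_per_match + 1, players_per_match):
        matches.append(waiting_players[start:start + players_per_match])
    still_waiting = waiting_players[len(matches) * players_per_match:]
    return matches, still_waiting

matches, waiting = generate_matches(["p1", "p2", "p3", "p4", "p5"], players_per_match=2)
print(matches)   # [['p1', 'p2'], ['p3', 'p4']]
print(waiting)   # ['p5']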
Exemplary Map According to one implementation of the invention, map selection engine124may select, generate, or otherwise obtain a map for a match of a gameplay session.FIG.3depicts an exemplary illustration of a map300that may be utilized in a gameplay session. Map300may comprise one or more map features (or attributes) including, for example, a map boundary (or perimeter)304, one or more static map objects306, and one or more dynamic map objects308. Map boundary304may define (in whole or in part) an area of playable space302available to one or more game players during a gameplay session. As described in greater detail below, map boundary304may be scalable (e.g., may expand or contract) or be otherwise altered during a gameplay session to change the area of playable space302. Examples of static map objects306may include, without limitation, objects that are typically stationary such as a building, a wall, furniture, a tree, a large boulder, a body of water, a mountain, etc. The type of static map objects306presented on map300may of course differ depending on the type of video game. Dynamic map objects308are objects that may be movable from one position to another, or from one state to another. For instance, a vehicle (e.g., a race car, truck, spaceship, etc.) may comprise a dynamic map object308. A door or drawbridge that is capable of being moved from an open position (or state) to a closed position (or state), or a river whose water level changes to make it passable or impassable, etc. may also comprise a dynamic map object308. The type of dynamic map objects308presented on map300may differ depending on the type of video game. In some instances, a dynamic map object may be moved or manipulated to change the area of (or otherwise alter aspects of) playable space302(as described in greater detail below). In certain implementations, some static map objects306may be considered dynamic map objects if they are capable of being (or are) moved or manipulated during gameplay. For example, a large boulder may comprise a static map object306. However, the large boulder may also be considered a dynamic map object308if it is capable of being (or is) moved or manipulated by one or more characters or equipment during a gameplay session. According to an aspect of the invention, one or more of boundary304, static map object(s)306, and/or dynamic map object(s)308may collectively comprise a configuration of playable space302available to players during a gameplay session. The configuration of available playable space302may therefore be altered during a gameplay session by changes to boundary304, and/or the location, position, size, number, state, etc. of one or more of static map object(s)306, and/or dynamic map object(s)308. As one non-limiting example, map300may comprise one or more regions310(e.g., region A, region B, region C, etc.). Map300may comprise a floor plan of a building, regions A-C may comprise separate rooms, dynamic map objects308may comprise doors, and static map objects306may comprise pieces of furniture. The total area of playable space302may comprise rooms A, B, and C if all of doors308are open, or are unlocked and capable of being opened. By contrast, doors308may be locked between rooms A and B, or rooms B and C. Accordingly, floor plan300may be dynamically configured or modified such that total area of playable space302comprises room A, room B, room C, rooms A and B, rooms B and C, or rooms A, B, and C. 
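The floor-plan example above can be sketched as a simple data structure in which the available playable space is whatever is reachable through open doors (dynamic map objects). The class names, region areas, and traversal rule below are assumptions for illustration and are not tied to any particular game engine.

from dataclasses import dataclass

@dataclass
class Door:                # one kind of dynamic map object
    connects: tuple        # pair of region names, e.g. ("A", "B")
    open: bool = True

@dataclass
class FloorPlan:           # a simplified map: regions plus the doors between them
    regions: dict          # region name -> area of playable space
    doors: list

    def playable_area(self, start):
        """Total area of every region reachable from `start` through open doors."""
        reachable, frontier = {start}, [start]
        while frontier:
            region = frontier.pop()
            for door in self.doors:
                if door.open and region in door.connects:
                    a, b = door.connects
                    other = b if region == a else a
                    if other not in reachable:
                        reachable.add(other)
                        frontier.append(other)
        return sum(self.regions[name] for name in reachable)

plan = FloorPlan(regions={"A": 100, "B": 150, "C": 120},
                 doors=[Door(("A", "B"), open=True), Door(("B", "C"), open=False)])
print(plan.playable_area("A"))   # 250: rooms A and B; the closed door keeps room C off limits
plan.doors[1].open = True        # opening the second door expands the playable space
print(plan.playable_area("A"))   # 370: rooms A, B, and C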
As yet another example, one or more pieces of furniture (or static map objects)306may be moved into a position to block an open door308such that the same effect is achieved as if door308were closed or locked. As the foregoing clearly demonstrates, various configurations of playable space302may be achieved by dynamically modifying a single map300. In one implementation, as described in greater detail below, the configuration of the playable space302may be altered during a gameplay session by changes to boundary304, and/or the location, position, size, number, state, etc. of one or more of static map object(s)306, and/or dynamic map object(s)308based on trigger events that occur during gameplay. Map Selection or Generation—Initial Configuration As noted above, map selection engine124may select, generate, or otherwise obtain a map for a match of a gameplay session. For example, in some implementations, map selection engine124may select and retrieve one or more maps from among a collection of pre-generated maps stored, for instance, in database160. Alternatively, map selection engine124may generate one or more maps, or dynamically modify one or more existing maps, in real-time (“on the fly”) for a gameplay session to change the playable space by altering one or more of the map's boundary, static map object(s), and/or dynamic map object(s), as described above. In some implementations, a map (whether selected, generated, or modified) may have an initial (or first or beginning) configuration based on gameplay session information. Gameplay session information may describe various game characteristics of a gameplay session that may influence the quality of gameplay. For example, gameplay session information may include, without limitation, a number of players, a composition of teams (e.g., number and/or types of roles in each team), duration of gameplay (e.g., how long a given gameplay session is expected to last), types of matches (e.g., team death match, capture the flag, etc.), and/or other information related to a gameplay session. In another implementation, a map may be selected for a match randomly. In other implementations, one or more players may select the map to be played in a match of the gameplay session. For instance, before the start of a match, one or more players may vote on the map to be used during the gameplay session. Trigger Events & Trigger Event Detection During Gameplay According to an aspect of the invention, once a gameplay session has commenced, gameplay may be monitored in real-time for the detection of a trigger event (e.g., by trigger detection engine128, or other game logic) that may cause the map to be dynamically modified (from its initial configuration) in order to improve the gameplay experience. 
Examples of trigger events may include, but are not limited to, a change in a number of players in the gameplay session (e.g., the number of players exceeds or falls below a predetermined threshold), a change in a number of game players playing a particular player role (e.g., a number of a certain type of player roles in a match exceeds or falls below a predetermined threshold), the pace or frequency of gameplay actions/events exceeding or falling below a predetermined threshold, the commencement of a competition or newly available mission that takes place in a map, an inference that one or more players are unhappy with the current configuration of a map or otherwise would prefer variety (e.g., by monitoring unexpected attrition/rage quitting, or through explicit in-game voting or other feedback), or a change in other gameplay information, among other examples. As a non-limiting example, matchmaking engine122may match one or more players into a map being used in a current gameplay session. The addition of the one or more players may comprise the trigger event that results in the dynamic modification of the map. According to an aspect of the invention, trigger events may be system-defined (e.g., defined by the game logic) or user-defined (e.g., through one or more user interfaces prior to the commencement of a gameplay session). It should be appreciated that trigger events may be different for different maps, different video games, and/or for different maps utilized in the same video game. In some implementations, a collection of defined trigger events may be accessed and selectively applied to individual maps. In other implementations, trigger events may be created or customized for particular maps. Various configurations may be implemented. According to an aspect of the invention, when a trigger event is detected during gameplay (e.g., by trigger detection engine128, or other game logic), a map may be dynamically modified (from its initial configuration) as described in detail below. In some implementations, depending on the type of trigger event, detection of the trigger event alone may be sufficient to dynamically modify the map. In other implementations, the gameplay event or action that produced (or resulted in) the triggering event must persist for a predetermined period of time (e.g., a modification waiting period) before the map is dynamically modified. This avoids changing the map frequently when near trigger thresholds, since players may find this confusing or disruptive depending on the game or map design. In some instances, two or more trigger events may occur during gameplay (and be detected) at substantially the same time. For example, both a number of players and a number of player roles of a certain type may exceed a predetermined threshold at substantially the same time. In such an instance, either or both of the detected trigger events may result in a dynamic modification of the map. For example, in one implementation, the most significant trigger event, as defined by game logic or a user, may be used to dynamically modify a configuration of the map. In some implementations, the occurrence of multiple trigger events may reduce the waiting period to dynamically modify the map. For example, if one or more additional trigger events occur during the modification waiting period, the modification waiting period may be truncated. 
In other implementations, if two or more significant trigger events are detected within a predetermined (e.g., short) period of time, the dynamic map modification may occur immediately without a modification waiting period. Other configurations may be implemented. Dynamic Map Modification According to an aspect of the invention, when trigger detection engine128detects a trigger event, map management engine130may dynamically modify a configuration of the map to improve the gameplay experience based on the type of trigger event. Dynamic modification of a map may comprise any one or more of the following:
altering the boundary (or perimeter) of the map by, for example, increasing or decreasing the boundary such that the boundary respectively defines a larger or smaller area of playable space, and/or opening up or closing (or otherwise altering) one or more portions of the boundary;
altering the location, position, size, number, state, etc. of one or more static map objects on the map;
altering the location, position, size, number, state, etc. of one or more dynamic map objects on the map;
scaling the entire map by increasing or decreasing the size of the map and its constituent objects (including any static map objects, dynamic map objects, virtual characters or avatars depicting players, etc.) to increase or decrease the area of available space, respectively;
adding or removing non-player characters (NPCs) or other artificial intelligence (AI) controlled avatars to the gameplay experience;
combining all or a portion of the map with all or a portion of one or more additional maps; and/or
modifying the attributes of existing map objects or terrain such that player interaction is fundamentally impacted. Examples may include making a river passable that was formerly impassable, or converting molten lava into cooling rock that can now be traversed without damaging a player's avatar. Regions of a map may also be modified (e.g., filled with water, lava, quick-sand, poisonous gas, poisonous swamps, etc.) to reduce or otherwise alter the playable space of the map without altering the boundary of the map.
As a result of the dynamic modification of the map, the map may transform from its initial (or first or beginning) configuration to a modified (or new or second) configuration. Further, each detected trigger event that occurs during a gameplay session may cause map management engine130to dynamically modify a most recent (e.g., second) configuration of the map to a further modified (or new or third) configuration. The following are illustrative and non-limiting examples of the various ways in which a map may be dynamically modified in real-time during gameplay in response to certain trigger events. Number of Players In one implementation, a map may be dynamically modified in real-time, during gameplay, based on a trigger event associated with a change in a number of players in the gameplay session (e.g., the number of players exceeds or falls below a predetermined threshold). As one example, if a number of players during a gameplay session falls below a predetermined number (e.g., a lower or first threshold), map management engine130may switch the state of one or more dynamic map objects (e.g., close a doorway, block a hallway, remove a bridge, etc.) of the map to selectively close off regions of the map, thereby decreasing the available playable space of the map. 
In this regard, the remaining players may be forced to play in a smaller area which may, depending on the nature of the game, increase encounters with other players to foster more exciting action and gameplay. In some implementations, when a region of a map is selectively closed off (or otherwise dynamically altered), player avatars may be transported out of the non-playable area to another area of the map (e.g., to a standard safe spawn site). Alternatively, player avatars may be spawned elsewhere after a death (or other game event), and the region of the map to be closed may be closed once no more player avatars are in the region. Conversely, if a number of players during a gameplay session exceeds a predetermined number (e.g., a higher or second threshold number), map management engine130may switch the state of one or more dynamic map objects (e.g., open a doorway, unblock a hallway, add/open a bridge, etc.) of the map to selectively open up additional regions of the map, thereby increasing the available playable space of the map. An example is illustrated inFIGS.4A-4C. In particular,FIG.4Adepicts a map400A in an initial configuration for a gameplay session of a multiplayer video game involving 16 players. As shown, map400A includes, as playable space, regions A and B as dynamic map objects408are in an open state. During gameplay, upon detection that the number of players in the gameplay session has decreased from 16 players to a number equal to or below a first (or lower) pre-determined threshold number (e.g., 8 players), map management engine130may, as a result of the triggering event, switch dynamic map objects408to a closed state (or remove them altogether), thereby reducing the available playable space of the map to comprise only Region B as shown in map400B ofFIG.4B(in a second configuration of the map). By contrast, during gameplay, upon detection that the number of players in the gameplay session has increased from 16 players to a number equal to or above a second (or upper) pre-determined threshold number (e.g., 20 players), map management engine130may, as a result of the triggering event, switch dynamic map objects408to an open state (and/or add new dynamic map objects), thereby increasing the available playable space of the map to comprise Regions A, B, and C as shown in map400C ofFIG.4C(in a second configuration of the map). In this regard, the map may be dynamically modified in real-time during a gameplay session such that various configurations of the map (such as those illustrated inFIGS.4A,4B, &4C) may be made available to players based on trigger events that occur during gameplay. In one implementation, the gameplay session (which players may join or leave in progress) may comprise an unbounded gameplay session such as that disclosed in co-pending, and concurrently filed, U.S. patent application Ser. No. 14/712,387, entitled "System and Method for Providing Continuous Gameplay in a Multiplayer Video Game Through an Unbounded Gameplay Session", which is hereby incorporated by reference herein in its entirety. Types of Player Roles In one implementation, a map may be dynamically modified in real-time, during gameplay, based on a trigger event associated with a change in a number of game players playing a particular player role. Player roles may, of course, differ based on the particular video game. As a non-limiting example, a player role in a First-Person-Shooter game may comprise that of a sniper.
During a gameplay session, if a number of players in the sniper role decreases to a number equal to or below a first (or lower) pre-determined threshold number, map management engine130may, as a result of the triggering event, add or provide ladders leading to (newly added or existing) sniper perches on the map, or improve long-distance sight lines by removing occluding objects, in order to incentivize players to switch to a sniper role to provide more balanced gameplay. In another example, the map may be shrunk or various map-based sniper advantages may be removed based on an inference that current players favor close-quarters gameplay. By contrast, if a number of players in the sniper role increases during a gameplay session to a number equal to or above a second (or higher) pre-determined threshold number, map management engine130may, as a result of the triggering event, remove ladders and/or sniper perches from the map and/or add occluding objects which reduce sight lines, in order to deter players from selecting the sniper role. The types of static and/or dynamic map objects that may be added to or removed from (or be otherwise altered on) a map may differ based on the type and nature of various player roles in various video games. Pace or Frequency of Gameplay Actions/Events In one implementation, a map may be dynamically modified in real-time, during gameplay, based on a trigger event associated with the pace or frequency of certain gameplay actions or events. Returning to the non-limiting example of a First-Person-Shooter game, excitement during gameplay may, for example, be based on the frequency of the occurrence of a particular event such as a firefight. As such, during a gameplay session, if the frequency of firefights decreases to a value equal to or below a first (or lower) pre-determined threshold value, map management engine130may, as a result of the triggering event, alter the area of playable space on the map by altering one or more of the map's boundary, static map object(s), and/or dynamic map object(s) to provide more opportunities for firefights and increase the pace of play. By contrast, if the frequency of firefights increases during a gameplay session to a value equal to or above a second (or upper) pre-determined threshold value, map management engine130may, as a result of the triggering event, alter the area of playable space on the map by altering one or more of the map's boundary, static map object(s), and/or dynamic map object(s) to reduce the number of firefights and slow down the pace of play. The various types of game actions or events that may be used as a triggering event may, of course, differ based on the particular video game. Gameplay State Information In addition to the foregoing examples, a map may be dynamically modified in real-time, during gameplay, based on a trigger event associated with changes in other gameplay state information including, without limitation, types of matches (e.g., team death match, capture the flag, etc.), elapsed time or remaining time in a gameplay session, and/or other information related to a gameplay session. For example, in some implementations, if the elapsed time of a gameplay session reaches a predetermined threshold, a map may be dynamically modified in any one or more of the manners described herein for variety. Numerous configurations may be implemented.
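To make the pace/frequency example concrete, the following Python sketch tracks the rate of a gameplay event (such as a firefight) over a sliding window and reports whether the playable space should be tightened or opened up. All names and threshold values here are illustrative assumptions, not the disclosed implementation.

```python
import time
from collections import deque

class EventPaceMonitor:
    """Track the frequency of a gameplay event over a sliding window and suggest
    whether to shrink or expand the playable space of the map."""

    def __init__(self, window_s: float = 120.0,
                 low_per_min: float = 1.0, high_per_min: float = 6.0):
        self.window_s = window_s
        self.low_per_min = low_per_min
        self.high_per_min = high_per_min
        self.timestamps: deque[float] = deque()

    def record_event(self, now: float | None = None) -> None:
        """Record one occurrence of the monitored event (e.g., a firefight)."""
        self.timestamps.append(time.time() if now is None else now)

    def recommendation(self, now: float | None = None) -> str:
        """Return a suggested map adjustment based on the recent event rate."""
        now = time.time() if now is None else now
        while self.timestamps and now - self.timestamps[0] > self.window_s:
            self.timestamps.popleft()                    # drop events outside the window
        events_per_min = len(self.timestamps) / (self.window_s / 60.0)
        if events_per_min <= self.low_per_min:
            return "shrink_playable_space"   # funnel players together to raise the pace
        if events_per_min >= self.high_per_min:
            return "expand_playable_space"   # spread players out to slow the pace
        return "no_change"
```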
Exemplary Flowchart FIG.5depicts an exemplary flowchart500of processing operations for providing dynamically variable maps in a video game, according to an aspect of the invention. The various processing operations and/or data flows depicted inFIG.5are described in greater detail herein. The described operations may be accomplished using some or all of the system components described in detail above and, in some implementations, various operations may be performed in different sequences and various operations may be omitted. Additional operations may be performed along with some or all of the operations shown in the depicted flow diagrams. One or more operations may be performed simultaneously. Accordingly, the operations as illustrated (and described in greater detail below) are exemplary by nature and, as such, should not be viewed as limiting. In an operation502, one or more trigger events may be defined. Trigger events may be system-defined (e.g., defined by the game logic) or user-defined (e.g., through one or more user interfaces prior to the commencement of a gameplay session). It should be appreciated that trigger events may be different for different maps, different video games, and/or for different maps utilized in the same video game. Examples of trigger events may include, but are not limited to, a change in a number of players in the gameplay session (e.g., the number of players exceeds or falls below a predetermined threshold), a change in a number of game players playing a particular player role (e.g., a number of a certain type of player roles in a match exceeds or falls below a predetermined threshold), the pace or frequency of gameplay actions/events exceeding or falling below a predetermined threshold, the commencement of a competition or newly available mission that takes place in a map, an inference that one or more players are unhappy with the current configuration of a map or otherwise would prefer variety (e.g., by monitoring unexpected attrition/rage quitting, or through explicit in-game voting or other feedback), or a change in other gameplay information, among other examples. In an operation504, one or more players, such as players whose in-game avatars are waiting in a virtual game lobby to join a gameplay session, may be matched. In one implementation, a matching engine may use known or hereafter-developed matchmaking techniques to generate a match by grouping players in an effort to produce the most satisfying player experiences. Game profiles, player profiles, match variables, and other factors may be considered when generating matches. In an operation506, a map selection engine may select, generate, or otherwise obtain a map for a match of a gameplay session. In some implementations, one or more maps may be selected and retrieved from among a collection of pre-generated maps stored, for instance, in one or more databases. Alternatively, the map selection engine may generate one or more maps, or dynamically modify one or more existing maps, in real-time (“on the fly”) for a gameplay session to change the playable space by altering one or more of the map's boundary, static map object(s), and/or dynamic map object(s). In some implementations, a map (whether selected, generated, or modified) may have an initial (or first or beginning) configuration based on gameplay session information. Gameplay session information may describe various game characteristics of a gameplay session that may influence the quality of gameplay. 
For example, gameplay session information may include, without limitation, a number of players, a composition of teams (e.g., number and/or types of roles in each team), duration of gameplay (e.g., how long a given gameplay session is expected to last), types of matches (e.g., team death match, capture the flag, etc.), and/or other information related to a gameplay session. In another implementation, a map may be selected for a match randomly. In yet other implementations, one or more players may select the map to be played in a match of the gameplay session. For instance, before the start of a match, one or more players may vote on the map to be used during the gameplay session. In an operation508, a gameplay session may commence. The gameplay session may comprise any type of gameplay session including, without limitation, a real gameplay session and/or a practice gameplay session (e.g., associated with a “practice” or “training” mode of a game). In an operation510, gameplay may be monitored in real-time for the detection of a trigger event (e.g., by a trigger detection engine, or other game logic). If no trigger event is detected in operation510, a determination may be made as to whether the gameplay session should continue. If so, processing may resume at operation508. If not, the gameplay session may terminate in an operation516. If a trigger event is detected in operation510, the map (provided in operation506) may be dynamically modified (from its initial configuration) in an operation512. In operation512, a map management engine may dynamically modify a configuration of the map to improve the gameplay experience based on the type of trigger event. Dynamic modification of a map may comprise any one or more of: altering the boundary (or perimeter) of the map by, for example, increasing or decreasing the boundary such that the boundary respectively defines a larger or smaller area of playable space, and/or opening up or closing (or otherwise altering) one or more portions of the boundary; altering the location, position, size, number, state, etc. of one or more static map objects on the map; altering the location, position, size, number, state, etc. of one or more dynamic map objects on the map; scaling the entire map by increasing or decreasing the size of the map and its constituent objects (including any static map objects, dynamic map objects, virtual characters or avatars depicting players, etc.) to increase or decrease the area of available space, respectively; adding or removing non-player characters (NPC) or other artificial intelligence (AI) controlled avatars to the gameplay experience; combining all or a portion of the map with all or a portion of one or more additional maps; and/or modifying the attributes of existing map objects or terrain such that player interaction is fundamentally impacted. As a result of the dynamic modification of the map, the map may transform from its initial (or first or beginning) configuration to a modified (or new or second) configuration. Gameplay may then continue in operation508. Other implementations, uses and advantages of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. The specification should be considered exemplary only, and the scope of the invention is accordingly intended to be limited only by the following claims.
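For reference, the overall loop of flowchart500(operations508through516) can be summarized in the following minimal Python sketch; the session, trigger_engine, and map_manager objects are hypothetical stand-ins for the engines described above, not an actual API.

```python
def run_gameplay_session(map_cfg, trigger_engine, map_manager, session):
    """Monitor gameplay for trigger events and dynamically modify the map."""
    while session.is_active():                        # operation 508: gameplay continues
        gameplay_state = session.poll_state()         # operation 510: monitor gameplay
        event = trigger_engine.detect(gameplay_state)
        if event is not None:
            # operation 512: transform the current configuration of the map into a
            # modified configuration appropriate to the type of trigger event
            map_cfg = map_manager.modify(map_cfg, event)
            session.apply_map(map_cfg)
        if session.should_terminate():
            break                                     # operation 516: end the session
    return map_cfg
```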
40,880
11857877
DETAILED DESCRIPTION In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the implementations. It will be apparent, however, to one skilled in the art that the implementations may be practiced without these specific details. I. Overview II. Architecture III. Example Game Application Graphical User Interfaces IV. Automatic In-Game Subtitle Generation Process I. Overview An approach is provided for a gaming overlay application to provide automatic in-game subtitles and/or closed captions for video game applications. The overlay application accesses an audio stream and a video stream generated by an executing game application. In implementations, the video stream comprises frames of image data that are rendered during the executing of the game application. The overlay application processes the audio stream through a text conversion engine, which, in implementations, includes a speech-to-text engine, to generate at least one subtitle. The overlay application determines a display position to associate with the at least one subtitle. The overlay application generates a subtitle overlay comprising the at least one subtitle located at the associated display position. The overlay application causes at least a portion of the video stream to be displayed with the subtitle overlay. Techniques discussed herein enable a gaming overlay application to analyze real-time audio streams from a video game to generate subtitles to be displayed, even when the video game does not natively support subtitles. By using various cues such as multi-channel surround sound information and machine learning based voice profile matching, dialogue and audio cues are associated with specific characters, multiplayer users, or other elements shown in-game, and subtitles are positioned onscreen at a user preferred location or in proximity to the associated sound source. In this manner, a user quickly identifies a speaker and their associated dialogue even if audio is difficult to hear or muted. This enables the user to react more quickly and efficiently by understanding and reacting to audio cues even with hearing impediments or challenging listening environments. Further, since the techniques are applicable to any video game that generates audio, the described techniques can be used with video games that do not natively support subtitles. In implementations, subtitles are shown in a variety of contexts, including cut scenes, in matching lobbies or during gameplay. II. Architecture FIG.1is a block diagram that depicts a system100for implementing automatic in-game subtitles, as described herein. Subtitles, as used in the present disclosure, include transcriptions or translations of dialogue or speech of a video, video game, etc. and descriptions of sound effects, musical cues or other relevant audio information from the video/video game. Thus, references to subtitles also include closed captions or subtitles with additional context such as speaker identification and non-speech elements such as descriptions of sound effects and audio cues. In implementations, system100includes computing device110, network160, input/output (I/O) devices170, and display180. In implementations, computing device110includes processor120, graphics processing unit (GPU)122, data bus124, and memory130. In implementations, GPU122includes memory for storing one or more frame buffers123.
In implementations, memory130stores game application140and gaming overlay application150. In implementations, game application140outputs audio stream142and video stream144. Gaming overlay application150includes text conversion engine152, subtitle compositor154, voice profile database156, and user preferences158. I/O devices170include microphone172and speakers174. Display180includes an interface to receive game graphics182from computing device110. In implementations, game graphics182includes subtitle overlay190. The components of system100are only exemplary and any configuration of system100is usable according to the requirements of game application140. Game application140is executed on computing device110by one or more of processor120, GPU122, or other computing resources not specifically depicted. Processor120is any type of general-purpose single or multi core processor, or a specialized processor such as application-specific integrated circuit (ASIC) or field programmable gate array (FPGA). In implementations, more than one processor120is present. GPU122is any type of specialized hardware for graphics processing, which is addressable using various graphics application programming interfaces (APIs) such as DirectX, Vulkan, OpenGL, and OpenCL. In implementations, GPU122includes frame buffers123, where finalized video frames are stored before outputting to display180. Data bus124is any high-speed interconnect for communications between components of computing device110, such as a Peripheral Component Interconnect (PCI) Express bus, an Infinity Fabric, or an Infinity Architecture. Memory130is any type of memory, such as a random access memory (RAM) or other storage device. As depicted inFIG.1, game application140generates audio stream142and video stream144, corresponding to real-time audio and video content. In some implementations, audio stream142and video stream144are combined into a single audiovisual stream. Audio stream142corresponds to internally generated in-game audio and in implementations includes multiple channels for surround sound and/or 3D positional audio information. In implementations, game application140supports multiplayer gaming via network160. In implementations, voice chat streams from game participants are embedded in audio stream142, either combined with existing in-game audio or as separate channels to be mixed by the operating system. For example, microphone172is used to record voice chat from participants. While gaming overlay application150is depicted as receiving audio stream142from game application140, in implementations, audio stream142is received from an audio mixer output provided by an operating system of computing device110. In implementations, video stream144corresponds to in-game visuals which are generated by GPU122and exposed for access via a video capture service provided by GPU122. For example, completed frame buffers123are buffered in memory130for access by a video streaming application. For simplicity, gaming overlay application150is depicted as accessing video stream144from game application140. In implementations, gaming overlay application150corresponds to any program that includes functionality to display an overlay on top of in-game video content. This includes programs provided by the manufacturer of GPU122, such as Radeon Software Crimson ReLive Edition or GeForce Experience, gaming clients such as Steam with Steam Overlay, voice chat tools such as Discord, or operating system features such as Windows Xbox Game Bar. 
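Tying together the components introduced above (audio stream142, video stream144, text conversion engine152, and subtitle compositor154), the end-to-end flow can be outlined in a short Python sketch. The function and method names here are assumptions made for illustration and are not the application's actual API.

```python
def generate_subtitle_overlay(audio_chunk, video_frame, text_engine, compositor, preferences):
    """Hypothetical outline: transcribe captured game audio, choose a display
    position for the resulting subtitle, and build an overlay for compositing."""
    subtitles = text_engine.transcribe(audio_chunk)   # speech and sound effects -> text
    if not subtitles:
        return None                                   # nothing to display for this chunk
    position = compositor.choose_position(video_frame, subtitles, preferences)
    return compositor.render(subtitles, position, preferences)
```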
In implementations, gaming overlay application150allows the user to enable options, such as displaying an in-game overlay for configuring video capture, video streaming, audio mixing, voice chat, game profile settings, friend lists, and other features. In implementations, gaming overlay application150includes functionality for video and audio capture and streaming. In implementations, this functionality is utilized to capture audio stream142and video stream144from game application140. In implementations, gaming overlay application150is further extended to support automatic in-game subtitles by implementing or accessing text conversion engine152and subtitle compositor154. In implementations, text conversion engine152accesses audio stream142and generates text corresponding to detected speech or sound effects. For example, text conversion engine152includes a speech-to-text engine and a video game sound effect detection engine. Example speech-to-text engines include DeepSpeech, Wav2Letter++, OpenSeq2Seq, Vosk, and ESPnet. By using alternative models that are trained with video game sound effects and other non-dialogue audio cues, the speech-to-text engines are also adaptable for use as video game sound effect detection engines. In implementations, to provide real-time or near real-time processing, audio stream142is loaded into buffers of a limited size for processing through text conversion engine152. For example, the buffers are capped at a maximum size or length, such as no longer than 5 seconds, and buffers are split opportunistically according to pauses or breaks detected in audio stream142. In this manner, dialogue is captured in buffers containing short dialogue phrases and processed for display as quickly as possible. In implementations, once subtitle text is obtained from text conversion engine152, subtitle compositor154determines display positions associated with the subtitles. For example, in implementations, user preferences158define a preferred area of the screen for displaying subtitles, such as near the bottom of the screen. In implementations, video stream144is scanned for user interface elements of game application140, such as health indicators or other in-game indicators that are preferably kept unobscured, and these areas are marked as exclusion areas or keep-out zones that should not display subtitles. For example, computer vision models are used to detect common videogame user interface elements such as health indicators, mini maps, compasses, quest arrows, ammunition and resource counters, ranking or score information, timers or clocks, and other heads-up display (HUD) elements. In implementations, subtitle compositor154positions the subtitles in proximity to an in-game object associated with the in-game speaker, as described in conjunction withFIG.2Cbelow. In implementations, to determine the identity of the in-game speaker, voices detected in audio stream142are matched to machine learned classifications stored in voice profile database156. In implementations, spatial audio cues from audio stream142are utilized to triangulate a position of an in-game object associated with the in-game speaker. While text conversion engine152and voice profile database156are shown as integral to gaming overlay application150, in implementations, components of gaming overlay application150are implemented by a remote service (e.g., cloud server) that is accessed via network160.
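The buffering scheme described above (a hard cap of roughly 5 seconds, with opportunistic splits at pauses) might be sketched as follows. This is a simplified, hypothetical example using a fixed RMS silence threshold; the class name, parameter values, and silence heuristic are assumptions for illustration only.

```python
import numpy as np

class DialogueBuffer:
    """Accumulate mono PCM audio and cut a buffer for the text conversion engine
    either at a detected pause or at a maximum-length cap."""

    def __init__(self, sample_rate: int = 48_000, max_seconds: float = 5.0,
                 silence_rms: float = 0.01, min_pause_s: float = 0.25):
        self.sample_rate = sample_rate
        self.max_samples = int(max_seconds * sample_rate)
        self.silence_rms = silence_rms
        self.min_pause_samples = int(min_pause_s * sample_rate)
        self.samples = np.empty(0, dtype=np.float32)

    def push(self, chunk: np.ndarray) -> np.ndarray | None:
        """Append a chunk; return a completed buffer when it should be transcribed."""
        self.samples = np.concatenate([self.samples, chunk.astype(np.float32)])
        tail = self.samples[-self.min_pause_samples:]
        paused = (len(self.samples) >= self.min_pause_samples and
                  np.sqrt(np.mean(tail ** 2)) < self.silence_rms)   # quiet tail => pause
        if paused or len(self.samples) >= self.max_samples:
            out, self.samples = self.samples, np.empty(0, dtype=np.float32)
            return out
        return None
```

A caller would push successive captured audio chunks and hand any returned buffer to the text conversion engine for transcription.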
Implementing components of gaming overlay application150as a remote service enables offloading of various tasks, such as text conversion, foreign language translation, and/or machine learning matching, to external cloud services. After subtitle compositor154determines a display position for the subtitles generated from text conversion engine152, a subtitle overlay190is generated accordingly. Display characteristics of the subtitles, such as font color and size, are set according to one or more of user preferences158, readability considerations, or speaker intent detected from audio stream142as discussed further herein. To cause subtitle overlay190to be combined with a corresponding portion of video stream144, subtitle overlay190is merged with data from one or more frame buffers123that are finalized prior to output to display180, for example as one or more processing steps in a rendering pipeline within GPU122, or by a desktop compositor of an operating system running on computing device110. In this manner, subtitle support is provided via gaming overlay application150even when game application140does not natively support subtitles. III. Example Game Application Graphical User Interfaces Referring now toFIG.2A, an example display280A is illustrated, which corresponds to display180fromFIG.1. As depicted in display280A, game graphics282corresponding to game graphics182is shown. Display280A represents a display of game application140when subtitle overlay190is not generated or is disabled, or when gaming overlay application150is not running. In these cases, no subtitles appear and only in-game elements are shown, including character284A positioned to the left side of display280A, character284B positioned to the right side of display280A, and user interface element286displaying gameplay status including user health and ammo. Referring now toFIG.2B, an example display280B is illustrated, which corresponds to display180fromFIG.1. As depicted in display280B, subtitle overlay290B is overlaid on top of game graphics282and includes the subtitles of "(Explosion sound from the right)" and "That doesn't sound good. Let's proceed down the left hallway instead." Note that subtitle overlay290B is positioned near the bottom of display280B, which is set, in implementations, according to user preferences158. Further, note that subtitle overlay290B avoids placement of subtitles over user interface element286, thereby maintaining visibility of vital in-game information. Referring now toFIG.2C, an example display280C is illustrated, which corresponds to display180fromFIG.1. As depicted in display280C, subtitle overlays290C and290D are overlaid on top of game graphics282. Subtitle overlay290C contains the subtitle "That doesn't sound good. Let's proceed down the left hallway instead." Further, subtitle overlay290C is positioned to be proximate to an in-game object (e.g., character284A) associated with an in-game speaker and appears in a speech bubble. Subtitle overlay290D contains the closed caption "(Explosion sound)" and is positioned proximate to the right of display280C. In this example, subtitle overlay290D points offscreen since the explosion itself was determined to occur at a position to the right of the user that is not visible in game graphics282. In implementations, the position of audio sources in the game world is estimated according to positional cues in audio stream142.
For example, stereo audio panning position is used to determine whether an audio source is located to the left, right, or center of the user's current viewpoint in the game world represented by video stream144. When multichannel or positional 3D audio is available, the position of audio sources is estimated with greater accuracy, such as in front, behind, above, or below the user's current viewpoint. In implementations, referring toFIG.1, multichannel or positional 3D audio in audio stream142indicates that the current in-game speaker is heard primarily from the left channels of speakers174. Thus, the in-game object associated with the in-game speaker is more likely to be character284A, to the left, than character284B, to the right. Similarly, audio stream142indicates that the explosion sound is heard primarily from the right channels of speakers174. However, since no explosion graphic is detected in video stream144, the explosion itself is determined to be offscreen and further to the right. These positional audio cues are factors used to determine the positioning of subtitle overlays290C and290D within the display such that they are proximate to their sound source or the in-game object associated with the in-game speaker. For example, sounds heard primarily from center or rear surround channels indicate sound sources positioned in the front center or behind the user in a game world rendered by game application140, whereas sounds heard primarily from height channels indicate sound sources positioned above the user. IV. Automatic In-Game Subtitle Generation Process To illustrate an example process for implementing automatic in-game subtitles in a gaming overlay application, flow diagram300ofFIG.3is described with respect toFIG.1andFIG.2BandFIG.2C. As described above, displays280B and280C reflect examples of display180after gaming overlay application150generates subtitle overlay190for displaying with game graphics182. Flow diagram300depicts an approach for implementing automatic in-game subtitles in a gaming overlay application. In implementations, blocks302,304,306,308, and310are performed by one or more processors. In implementations, blocks302,304,306,308and310are performed by a single processor of a computing device, similar toFIG.1. In implementations, one or more of the blocks of flow diagram300are performed by one or more cloud servers or other computing devices distributed across a wireless or wired network. In block302, an audio stream142and video stream144generated as the result of executing game application140are accessed. In implementations, a gaming overlay application executing on a processor receives the audio stream and video stream. In implementations, the processor executes gaming overlay application150concurrently with game application140. In some implementations, game application140executes on a remote server. For example, when using a cloud-based gaming streaming service, audio stream142and video stream144are received from a remote server via network160. In block304, the audio stream142is processed through a text conversion engine152to generate at least one subtitle. As discussed above, in implementations, text conversion engine152is part of gaming overlay application150, and in other implementations, text conversion engine152is accessed using a cloud-based service via network160. Alternatively, both a cloud-based and an internal text conversion engine152are provided, and the internal version is utilized when network160is unavailable or disconnected.
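Returning briefly to the positional cues discussed above, the left/center/right estimate from stereo panning might be computed as in the following sketch; the RMS-balance heuristic and the threshold value are assumptions made for illustration, and multichannel surround or height channels would extend the same idea.

```python
import numpy as np

def estimate_pan_direction(left: np.ndarray, right: np.ndarray,
                           threshold: float = 0.15) -> str:
    """Classify a sound source as left, center, or right of the current viewpoint
    from the relative energy of the stereo channels."""
    left_rms = float(np.sqrt(np.mean(left.astype(np.float64) ** 2)))
    right_rms = float(np.sqrt(np.mean(right.astype(np.float64) ** 2)))
    total = left_rms + right_rms
    if total == 0.0:
        return "center"                             # silence: no directional information
    balance = (right_rms - left_rms) / total        # -1.0 = fully left, +1.0 = fully right
    if balance < -threshold:
        return "left"
    if balance > threshold:
        return "right"
    return "center"
```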
In implementations, text conversion engine152also supports translation of text into the user's preferred native language and local dialect, which is defined in user preferences158. Since translation features require significant processing resources, in implementations, offloading of text conversion engine152to a cloud-based service helps to minimize processing overhead that would otherwise be detrimental to the performance of game application140. In block306, a display position is determined to associate with the at least one subtitle from block304. In implementations, subtitle compositor154uses one or more factors to determine the display position. One factor includes a user-defined preference for subtitle location, such as near the bottom of the screen. This user preference is retrieved from user preferences158. Another factor includes avoiding exclusion areas detected in video stream144. For example, as previously described, video stream144is scanned for user interface elements generated by game application140, and the portion of the display that includes these user interface elements is marked as exclusion areas that should not include subtitles. Yet another factor includes positioning the subtitle in proximity to the sound source or in-game speaker. For example, computer vision processing is performed to identify in-game characters, multiplayer users, and other objects within the video stream144that are potential sound sources associated with subtitles or closed captions. Once characters and objects are identified, the at least one subtitle from block304is matched to its most likely sound source and positioned proximate to its sound source within the video stream144. Matching to the most likely sound source for the at least one subtitle is based on various considerations. As discussed above, in implementations matching is based on triangulation using spatial audio cues from audio stream142. Thus, in-game objects (e.g., characters) positioned in the in-game world consistent with the spatial audio cues are more strongly correlated with the sound source. Another consideration includes matching voice traits to classifications in voice profile database156and confirming whether the matched classifications are consistent with the visual characteristics of a potential sound source. For example, voice profile database156includes classifications such as age range, gender, and dialect. Using machine learning techniques, traits analyzed from audio stream142and matched to voice profile database156are used to classify the in-game speaker as more or less likely to be a child, an adult, an elderly person, a male, a female, or a speaker with a regional dialect. The computer vision processing described above is used to confirm whether a potential sound source, or in-game character, is consistent with the matched classifications. For example, if audio stream142is classified as likely to be "female" in voice profile database156, and computer vision processing of the video stream144identifies a potential in-game character as likely to be a female character, then the potential in-game character is more strongly correlated with the at least one subtitle. Yet another consideration includes matching audio stream142to a specific user. For example, as discussed above, in implementations game application140is a multiplayer game wherein participants use voice chat to communicate with other participants.
In this case, audio stream142includes multiple voice chat streams associated with specific users, and thus the user speaking at any given time is readily determined according to the originating voice chat stream. If audio stream142is only available as a single mixed stream, then the other considerations described above are still usable to determine the in-game speaker. Further, since game overlay application150includes identifying information such as usernames or handles for each participant, the subtitles also include such identifying information when available. In block308, a subtitle overlay190is generated comprising the at least one subtitle from block304located at the associated display position from block306. As described above, subtitle compositor154generates subtitle overlay190along with various visual characteristics of the subtitles. In implementations, these visual characteristics include font attribute (e.g. italic, bold, outline), font color, font size, and speech bubble type. Speech bubble type includes, for example, speech bubbles, floating text, or other text presentation methods. Visual characteristics are set according to user preferences158, for example user preferred font size and color. Visual characteristics are set according to readability considerations, for example by ensuring that the subtitles have high contrast according to colors in the associated area of video stream144. For example, if the subtitles are positioned in an area having mostly bright or light colors, then the subtitles use darker colors or a dark outline for greater visibility and readability. Visual characteristics are also set according to the in-game speaker, for example by mapping specific font colors for each in-game character. In implementations, visual characteristics are also set according to speaker intent detected from audio stream142. For example, audio stream142is analyzed for loudness, speech tempo, syllable emphasis, voice pitch, and other elements to determine whether the in-game speaker is calm, and in this case the display characteristics use default values. On the other hand, if analysis of audio stream142determines that the in-game speaker is excited or conveying an urgent message, then the display characteristics emphasize this by using a bold font, a larger font size, or a speech bubble that is emphasized using spiked lines or other visual indicators. Thus, the intent of the speaker is better understood in a visual manner. In block310, a portion of video stream144is caused to be displayed with subtitle overlay190. In implementations, as discussed above, this is performed by modifying a rendering pipeline within GPU122, or using a desktop compositor of an operating system, among other methods. Thus, display180outputs game graphics182with subtitle overlay190. As shown inFIG.2B, the subtitle overlay290B is placed according to a user preference for subtitle placement. Alternatively, as shown inFIG.2C, the subtitle overlay290C and290D are placed according to proximity to the sound source. In this manner, subtitle support is provided via gaming overlay application150even when game application140does not natively support subtitles.
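As a final illustration of block306and the placements shown inFIGS.2B and2C, the following Python sketch chooses a subtitle position that prefers a spot near the matched sound source, falls back to bottom-of-screen slots, and always avoids detected HUD exclusion zones. The Rect type, candidate slots, and margin values are assumptions introduced for this example and are not the compositor's actual logic.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

    def intersects(self, other: "Rect") -> bool:
        """Axis-aligned overlap test."""
        return not (self.x + self.w <= other.x or other.x + other.w <= self.x or
                    self.y + self.h <= other.y or other.y + other.h <= self.y)

def choose_subtitle_position(frame_w: int, frame_h: int, subtitle: Rect,
                             exclusion_zones: list[Rect],
                             anchor: Rect | None = None) -> Rect:
    """Pick a placement near the anchor (matched sound source) if possible,
    otherwise a user-preferred bottom-of-screen slot, avoiding exclusion zones."""
    candidates: list[Rect] = []
    if anchor is not None:
        # Just above the in-game object associated with the speaker (FIG. 2C style).
        candidates.append(Rect(anchor.x, max(0, anchor.y - subtitle.h - 10),
                               subtitle.w, subtitle.h))
    bottom_y = frame_h - subtitle.h - 40
    for x in (frame_w // 2 - subtitle.w // 2, 40, frame_w - subtitle.w - 40):
        # User-preferred bottom slots (FIG. 2B style): center, left, right.
        candidates.append(Rect(x, bottom_y, subtitle.w, subtitle.h))
    for cand in candidates:
        if not any(cand.intersects(zone) for zone in exclusion_zones):
            return cand
    return candidates[-1]   # last resort: accept an overlap rather than drop the subtitle
```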
23,995
11857878
DESCRIPTION OF EMBODIMENTS To make the objectives, technical solutions, and advantages of this application clearer, the following further describes implementations of this application in detail with reference to the accompanying drawings. First, several terms involved in this application are introduced and explained. Virtual environment: It is a virtual environment displayed (or provided) when an application is run on a terminal. The virtual environment may be a simulated environment of a real world, or may be a semi-simulated semi-fictional environment, or may be an entirely fictional environment. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment. In some embodiments, the virtual environment is further used for a virtual environment battle between at least two virtual characters, and there are virtual resources available to the at least two virtual characters in the virtual environment. In some embodiments, the virtual environment may include a map, and the map may include two symmetric regions. Virtual characters on two opposing camps occupy the regions respectively, and the objective is for each camp to destroy a target building deep in the opponent's region. Virtual character (also referred to as hero): It refers to a movable object in the virtual environment. The movable object may be at least one of a virtual human, a virtual animal, and an animated human character. In some embodiments, when the virtual environment is a three-dimensional virtual world, the virtual characters may be three-dimensional models. Each virtual character has its own shape and size in the three-dimensional virtual world, and occupies some space in the three-dimensional virtual world. In some embodiments, the virtual character is a three-dimensional character constructed based on three-dimensional human skeleton technology. The virtual character wears different skins to get different appearances. In some implementations, the virtual character may also be implemented by using a 2.5-dimensional model or a two-dimensional model, which is not limited in the embodiments of this application. In some embodiments of this application, the virtual characters are virtual characters that can be controlled by users in the virtual environment, and virtual characters that cannot be controlled by users (such as creeps, monsters, and non-player characters (NPCs)) may be referred to as assisting virtual characters. MOBA: It is an arena game in which different virtual teams on at least two opposing camps occupy respective map regions on a map provided in a virtual environment, and compete against each other to achieve a specific victory condition. The victory condition includes, but is not limited to, at least one of occupying forts or destroying forts of the opposing camps, killing virtual characters in the opposing camps, surviving in a specified scenario and time, seizing a specific resource, and outscoring the opponent within a specified time. The battle arena game may take place in rounds. The same map or different maps may be used in different rounds of the battle arena game. Each virtual team includes one or more virtual characters, for example, 1 virtual character, 3 virtual characters, or 5 virtual characters.
MOBA game: It is a game in which several forts are provided in a virtual world, and users on different camps control virtual characters to battle in the virtual world and occupy or destroy forts of the opposing camp. For example, in the MOBA game, the users may be divided into two opposing camps. The virtual characters controlled by the users are scattered in the virtual world to compete against each other, and the victory condition is to destroy or occupy all enemy forts. The MOBA game takes place in rounds. A duration of a round of the MOBA game is from a time point at which the game starts to a time point at which the victory condition is met. In a typical MOBA game, a communication system in a round of battle between different users includes the following three parts. 1. Built-in voice system. A voice communication channel is formed between a plurality of users, and the users need to turn on the microphone before speaking. 2. Built-in chat system. A text-based chat channel is formed between a plurality of users, and the users need to type on a terminal for communication. 3. Built-in signal system (including: an attack button, a retreat button, an assembly button, and a minimap marking function). The attack button, the retreat button, the assembly button, and the minimap are displayed on a UI, and the user clicks/taps the buttons or the minimap to quickly initiate communication. Many MOBA games are mobile phone games (mobile games for short). A user playing a MOBA game on a mobile phone may be in a place not suitable for talking, such as in a bedroom or in a carriage, and therefore, the user cannot use the voice communication channel to communicate effectively. In addition, the use of the voice communication channel for communication may also lead to network lag. For mobile phones, typing on the touchscreen while there is already a MOBA game running on the interface requires a high man-machine interaction cost and interferes with battle operations in a round of battle. The built-in signal system can transmit only several types of prompt information. In other words, the built-in signal system has problems in at least two dimensions: 1. signals that can be transmitted are too simple and limited; and 2. the cost of man-machine interaction is high. The embodiments of this application provide a system for transmitting prompt information in a multiplayer online battle program (a signal system for short). Based on the system, a user may express a thought at a minimal man-machine interaction cost without turning on the voice or typing function. The embodiments of this application may intelligently determine and select the signals that the player wants to transmit or transfer in the current scenario according to the thought expressed by the user and the real-time status of the current battle. FIG.1is a structural block diagram of a computer system according to an exemplary embodiment of this application. The computer system100includes: a first terminal110, a server cluster120, and a second terminal130. A client111supporting a virtual environment is installed and run on the first terminal110, and the client111may be a multiplayer online battle program. When the first terminal runs the client111, a UI of the client111is displayed on a screen of the first terminal110. The client111may be any one of a military simulation program, a MOBA game, a battle royale shooting game, and a simulation game (SLG).
In this embodiment, an example in which the client111is a MOBA game is used for description. The first terminal110is a terminal used by a first user101. The first user101uses the first terminal110to control a first virtual character located in a virtual environment to perform activities, and the first virtual character may be referred to as a master virtual character of the first user101. The activities of the first virtual character include, but are not limited to: at least one of adjusting body postures, crawling, walking, running, riding, jumping, driving, picking, shooting, attacking, and throwing. For example, the first virtual character is a first virtual human, for example, a simulated human character or an animated human character. A client131supporting a virtual environment is installed and run on the second terminal130, and the client131may be a multiplayer online battle program. When the second terminal130runs the client131, a UI of the client131is displayed on a screen of the second terminal130. The client131may be any one of a military simulation program, a MOBA game, a battle royale shooting game, and a simulation game (SLG). In this embodiment, an example in which the client131is a MOBA game is used for description. The second terminal130is a terminal used by a second user102. The second user102uses the second terminal130to control a second virtual character located in a virtual environment to perform activities, and the second virtual character may be referred to as a master virtual character of the second user102. For example, the second virtual character is a second virtual human, for example, a simulated human character or an animated human character. In some embodiments, the first virtual human and the second virtual human are located in the same virtual environment. In some embodiments, the first virtual human and the second virtual human may belong to the same camp, the same team, or the same organization, are friends, or have a temporary communication permission. In some embodiments, the first virtual human and the second virtual human may alternatively belong to different camps, different teams, or different organizations, or are enemies to each other. In some embodiments, the client installed on the first terminal110is the same as the client installed on the second terminal130, or the clients installed on the two terminals are clients of the same type on different operating system platforms (Android system or iOS system). The first terminal110may generally refer to one of a plurality of terminals, and the second terminal130may generally refer to another one of the plurality of terminals. In this embodiment, the first terminal110and the second terminal130are merely used as an example for description. The first terminal110and the second terminal130may be of the same or different device types, and the device type includes at least one of a smartphone, a tablet computer, an e-book reader, a moving picture experts group audio layer III (MP3) player, a moving picture experts group audio layer IV (MP4) player, a laptop, and a desktop computer. FIG.1shows only two terminals. However, a plurality of other terminals140may access the server cluster120in different embodiments. In some embodiments, one or more terminals140are terminals corresponding to a developer. A developing and editing platform for the client is installed on the terminal140. 
The developer may edit and update the client on the terminal140and transmit an updated client installation package to the server cluster120via a wired or wireless network. The first terminal110and the second terminal130may download the client installation package from the server cluster120to update the client. The first terminal110, the second terminal130, and the other terminals140are connected to the server cluster120via a wireless network or a wired network. The server cluster120includes at least one of one server, a plurality of servers, a cloud computing platform, and a virtualization center. The server cluster120is configured to provide a background service for a client supporting a virtual environment. In some embodiments, the server cluster120is responsible for primary computing work, and the terminal is responsible for secondary computing work; or the server cluster120is responsible for secondary computing work, and the terminal is responsible for primary computing work; or the server cluster120and the terminals (the first terminal110and the second terminal130) perform collaborative computing by using a distributed computing architecture among each other. In a schematic example, the server cluster120includes a server121and a server126. The server121includes a processor122, a user account database123, a battle service module124, and a user-oriented input/output (I/O) interface125. The processor122is configured to load instructions stored in the server121, and process data in the user account database123and the battle service module124. The user account database123is configured to store data of user accounts used by the first terminal110, the second terminal130, and the other terminals140, for example, avatars of the user accounts, nicknames of the user accounts, battle effectiveness indexes of the user accounts, and service zones of the user accounts. The battle service module124is configured to provide a plurality of battle rooms for the users to battle, for example, a 1V1 battle room, a 3V3 battle room, a 5V5 battle room, and the like. The user-oriented I/O interface125is configured to establish communication between the first terminal110and/or the second terminal130via a wireless network or a wired network for data exchange. In some embodiments, a smart signal module127is disposed in the server126, and the smart signal module127is configured to implement a method for transmitting prompt information in a multiplayer online battle program provided in the following embodiment. FIG.2is a schematic diagram of a map provided in a MOBA game virtual environment according to an exemplary embodiment of this application. The map is in the shape of a square. The map is divided diagonally into a lower left triangular region220and an upper right triangular region240. There are three lanes from a lower left corner of the lower left triangular region220to an upper right corner of the upper right triangular region240: a top lane21, a middle lane22, and a bottom lane23. In a typical round of battle, 10 virtual characters are needed, which are divided into two camps to battle. 5 virtual characters in a first camp occupy the lower left triangular region220, and 5 virtual characters in a second camp occupy the upper right triangular region240. A victory condition for the first camp is to destroy or occupy all forts of the second camp, and a victory condition for the second camp is to destroy or occupy all forts of the first camp. 
For example, the forts of the first camp include 9 turrets24and a first base26. Among the 9 turrets24, there are respectively 3 turrets on the top lane21, the middle lane22, and the bottom lane23. The first base26is located at the lower left corner of the lower left triangular region220. For example, the forts of the second camp include 9 turrets24and a second base27. Among the 9 turrets24, there are respectively 3 turrets on the top lane21, the middle lane22, and the bottom lane23. The second base27is located at the upper right corner of the upper right triangular region240. A location denoted by a dotted line inFIG.2may be referred to as a riverway region. The riverway region is a common region of the first camp and the second camp, and is also a border region between the lower left triangular region220and the upper right triangular region240. The MOBA game requires the virtual characters to obtain resources in the map to improve combat capabilities of the virtual characters. The resources include creeps, monsters, and big and small dragons. 1. The creeps periodically appear on the top lane21, the middle lane22, and the bottom lane23. When a creep is killed, a virtual character nearby will obtain experience values and gold coins. 2. The map may be divided into 4 triangular regions A, B, C, and D by the middle lane (a diagonal line from the lower left corner to the upper right corner) and the riverway region (a diagonal line from an upper left corner to a lower right corner) as division lines. Monsters are periodically refreshed in the 4 triangular regions A, B, C, and D, and when a monster is killed, a virtual character nearby will obtain experience values, gold coins, and BUFF effects. 3. A big dragon28and a small dragon29are periodically refreshed at two symmetric positions in the riverway region. When the big dragon28and the small dragon29are killed, each virtual character in a killer camp obtains experience values, gold coins, and BUFF effects. The big dragon28may be referred to as a "dominator", a "Caesar", or other names, and the small dragon29may be referred to as a "tyrant", a "magic dragon", or other names. In an example, the top lane and the bottom lane of the riverway each have a gold coin monster, which appears at the 30th second of the game. After a gold coin monster is killed, a virtual character nearby will obtain gold coins, and the gold coin monster is refreshed after 70 seconds. Region A has a red BUFF, two normal monsters (a pig and a bird), and a tyrant (a small dragon). The red BUFF and the monsters appear at the 30th second of the game, the normal monsters are refreshed after 70 seconds upon being killed, and the red BUFF is refreshed after 90 seconds upon being killed. The tyrant appears at the 2nd minute of the game, and is refreshed after 3 minutes upon being killed. All teammates of the killer obtain gold coins and experience values after the tyrant is killed. The tyrant falls into darkness at the 9th minute and 55th second, and a dark tyrant appears at the 10th minute. A revenge BUFF of the tyrant is obtained by a virtual character who kills the dark tyrant. Region B has a blue BUFF and two normal monsters (a wolf and a bird). The blue BUFF also appears at the 30th second and is refreshed after 90 seconds upon being killed. Region C is the same as region B: it has a blue BUFF and two normal monsters (a wolf and a bird). Similarly, the blue BUFF also appears at the 30th second and is refreshed after 90 seconds upon being killed.
Region D is similar to region A: it has a red BUFF and two normal monsters (a pig and a bird). The red BUFF is also used for output increase and deceleration. There is also a dominator (a big dragon). The dominator appears at the 8th minute of the game and is refreshed after 5 minutes upon being killed. A dominator BUFF, a fetter BUFF, and dominant pioneers (sky dragons that are manually summoned) on the lanes may be obtained after the dominator is killed. In an example, the BUFFs are explained in detail: The red BUFF lasts for 70 seconds and carries continuous burning injuries and deceleration with an attack. The blue BUFF lasts for 70 seconds and may shorten a cooldown (CD) time and restore additional mana per second. The dark tyrant BUFF and the fetter BUFF are obtained after the dark tyrant is killed. The dark tyrant BUFF increases physical attacks (80+5% of a current attack) for the whole team and increases magic attacks (120+5% of a current magic attack) for the entire team for 90 seconds. The fetter BUFF reduces an output for the dominator by 50%, and the fetter BUFF does not disappear when the virtual character is killed and lasts for 90 seconds. The dominator BUFF and the fetter BUFF can be obtained by killing the dominator. The dominator BUFF may improve life recovery and mana recovery for the whole team by 1.5% per second and lasts for 90 seconds. The dominator BUFF disappears when the virtual character is killed. The fetter BUFF reduces an output for the dark tyrant by 50%, and the fetter BUFF does not disappear when the virtual character is killed and lasts for 90 seconds. The following benefits may be obtained after the dominator is killed. 1. All the teammates obtain 100 gold coins, and a master virtual character obtains the effects whether or not the master virtual character has participated in fighting against the dominator, including a master virtual character that is in a resurrection CD. 2. From the moment that the dominator is killed, the next three waves (on three lanes) of creeps of the killer party are replaced with the dominant pioneers (flying dragons). The dominant pioneers are very strong and attack in the three lanes at the same time, which brings great creep line pressure on the opposing team. The opposing team needs to defend all three lanes. An alarm for the dominant pioneers is shown on the map, and during the alarm, there is a hint of the number of waves of the coming dominant pioneers (usually three waves). The combat capabilities of the 10 virtual characters include two parts: level and equipment. The level is obtained by using accumulated experience values, and the equipment is purchased by using accumulated gold coins. The 10 virtual characters may be obtained by matching 10 user accounts online by a server. For example, the server matches 2, 6, or 10 user accounts online for competition in the same virtual world. The 2, 6, or 10 virtual characters are on two opposing camps. The two camps have the same quantity of corresponding virtual characters. For example, there are 5 virtual characters on each camp. Types of the 5 virtual characters may be a warrior character, an assassin character, a mage character, a support (or meat shield) character, and an archer character respectively. The battle may take place in rounds. The same map or different maps may be used in different rounds of battle. Each camp includes one or more virtual characters, for example, 1 virtual character, 3 virtual characters, or 5 virtual characters.
FIG.3is a flowchart of a method for transmitting prompt information in a multiplayer online battle program according to an exemplary embodiment of this application. The method may be performed by any terminal inFIG.1, and the method includes the following steps. Step301. Display a UI of a multiplayer online battle program. The multiplayer online battle program is a program that allows at least two users to control virtual characters to battle in a virtual environment. The virtual environment is a battle environment configured for at least two virtual characters to battle. The multiplayer online battle program may be any one of a military simulation program, a MOBA game, a battle royale shooting game, and an SLG. In this embodiment, an example in which the multiplayer online battle arena is a MOBA game is used for description. In an example, as shown inFIG.4, the UI30of the multiplayer online battle program includes a virtual environment image32and an interaction panel region34. The virtual environment image32is an image of the virtual environment observed from a perspective corresponding to a master virtual character38. The master virtual character is a virtual character controlled by a user using the terminal in the virtual environment. The perspective corresponding to the master virtual character may be any one of a first-person perspective, a 45° bird's-eye view, a third-person perspective, and an over-shoulder perspective of the master virtual character. An example uses the 45° bird's-eye view for description in this embodiment. When the master virtual character38moves or rotates, the virtual environment image changes accordingly. The master virtual character38may appear in the virtual environment image or may not appear in the virtual environment image. The interaction panel region34is a UI element superimposed on the virtual environment image32. The interaction panel region34is divided into two types: information display elements used for displaying information and control function elements used for man-machine interaction. The interaction panel region34is also referred to as a HUD region. For example, as shown inFIG.5, the HUD region34includes: a minimap region01, a friend information region02, a scoreboard03, a device information region and master virtual character score region04, a menu region05, a minimap region extension button06, a control button07, a chatting button08, skill buttons09of the master virtual character, an attack skill button10of the master virtual character, a summoner ability11, a restore skill12, a recall skill13, a moving control14, a gold coin region15, and recommended equipment16. The friend information region02, the scoreboard03, and the device information region and master virtual character score region04are the information display elements, and the other elements are the control function elements. The interaction panel region34may include other elements, such as a death panel, a turret-attacking button, and a creep-attacking button, which is not limited in the embodiments. After a user starts a round of battle, the UI of the multiplayer online battle program is displayed. Step302. Receive a directional operation on the UI, the directional operation being an operation for activating a prompt information transmission function and pointing to a target display element in the UI. When the user needs to transmit prompt information, the user applies the directional operation on the UI. 
The directional operation may be a user operation, or may be an operation combination formed by two or more user operations. The directional operation is an operation for activating a prompt information transmission function and pointing to a target display element in the UI. The target display element is one of a plurality of display elements in the UI. In an example, the directional operation may be an operation of a double-tap, a triple-tap, or a long-pressing on the target display element. In another example, the interaction panel region includes a signal control. The directional operation is an operation pointing from the signal control to a target display element in the UI. The signal control is a control for activating the prompt information transmission function. The plurality of display elements in the UI include, but are not limited to, at least one of the following elements. 1. Three-dimensional models forming battle function elements (non-decorative elements and non-visual presentation elements) in the virtual environment. For example, the battle function elements are elements that influence the battle process in the virtual environment. The three-dimensional models include, but are not limited to: virtual characters, turrets, bases, monsters, grass, detection eyes, a big dragon, a small dragon, and the like. 2. The information display elements in the interaction panel region. For example, the information display elements include: the friend information region02, the scoreboard03, the device information region and master virtual character score region04, and the death panel not shown inFIG.5. 3. The control function elements in the interaction panel region. For example, the control function elements include: the minimap region01, the minimap extension button06, the button control07, the chatting control08, the skill buttons09of the master virtual character, the attack skill button10of the master virtual character, the summoner ability11, the moving control14, the gold coin region15, and the recommended equipment16. For example, the control function elements further include a fast signal button. The fast signal button includes, but is not limited to an attack button, a retreat button, and an assembly button (not shown inFIG.5). Step303. Predict target prompt information according to the target display element and battle information. The target display element is a display element selected by the directional operation. The battle information is battle situation information during a round of battle. The battle information includes, but is not limited to the following: a started duration of this round of battle, levels of virtual characters, skill upgrade information, skill cooldown information, summoner ability types, health points (HP) of the virtual characters, HP of turrets, a position relationship between the virtual characters and grass, creep line situations, monsters refresh situations, network information, teammate positions, enemies positions, kill information, death information, date information, festival information, location information, matches information, and camp names of camps. For example, the prompt information is information for performing an information prompt for the teammates, the enemies, or all the virtual characters. A form of the prompt information includes, but is not limited to at least one of text, voices, icons, animation, and vibration feedback. 
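As a non-authoritative illustration of the inputs consumed when the target prompt information is predicted, the sketch below models a selected target display element and a snapshot of the battle information listed above; all class and field names are hypothetical and cover only a few of the listed items.

    from dataclasses import dataclass, field
    from typing import Dict, Optional

    @dataclass
    class DisplayElement:
        element_id: str   # for example "skill_button_09", "scoreboard_03", or "big_dragon"
        category: str     # "three_dimensional_model", "information_display", or "control_function"

    @dataclass
    class BattleInformation:
        started_duration_s: int                                # started duration of this round of battle
        skill_cooldown_s: Dict[str, float] = field(default_factory=dict)
        hp_by_character: Dict[str, int] = field(default_factory=dict)
        network_latency_ms: Optional[int] = None

    def predict_target_prompt(target: DisplayElement, battle: BattleInformation) -> str:
        # Placeholder for the prediction of step303; the concrete rules are given later by the behavior tree.
        cooldown = battle.skill_cooldown_s.get(target.element_id, 0)
        if target.category == "control_function" and cooldown > 0:
            return "My skill can be refreshed after %.0f seconds" % cooldown
        return "Caution"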
In an example, the prompt information includes information of two types: fact information and intention information. The fact information is information representing existing facts in a current battle, for example, a monster is refreshed, a turret is being attacked, I see an enemy, and the like. The intention information is information representing a strategy intention of the user, for example, pay attention to pushing the creep line on the top lane, beware of ambush in the grass, the economy is so poor and we need to grow, and the like. Step304. Transmit the predicted target prompt information to clients of teammate virtual characters of a master virtual character or of all virtual characters in a battle. In an example, one or more virtual characters in a battle of the multiplayer online battle program are determined for receiving the predicted target prompt information, and the predicted target prompt information is transmitted to clients of teammate virtual characters (optionally including the master virtual character itself) of the master virtual character. In another example, the predicted target prompt information is transmitted to clients of all virtual characters in a battle. In another example, the target prompt information may also be transmitted to a coach client, a referee client, or an audience client viewing this battle. In an example, the client transmits a frame synchronization signal to a server, the frame synchronization signal carrying the target prompt information. The server transmits the frame synchronization signal to other clients corresponding to the teammate virtual characters (optionally including the master virtual character itself) of the master virtual character. Alternatively, the server transmits the frame synchronization signal to other clients of all the virtual characters in the battle. The other clients display or play the target prompt information according to the frame synchronization signal (a brief sketch of this relay is given after the list of triggering manners below). In conclusion, according to the method provided in this embodiment, when a directional operation is received, target prompt information expected to be transmitted is predicted according to a target display element and battle information, and the target prompt information is transmitted to clients of teammate virtual characters of a master virtual character or of all virtual characters in a battle. Therefore, a user may transmit prompt information that satisfies the user's expectation at a minimal man-machine interaction cost. A multiplayer online battle program intelligently determines and selects a signal that the user wants to transmit in a current battle scenario based on a thought expressed by the user and the battle information, thereby improving the man-machine interaction efficiency in information communication between the user and other users. Triggering of the directional operation may be implemented in at least one of the following manners. 1. Triggered by a long press. The directional operation is triggered in a "long press a signal control+tap a target display element" manner. 2. Triggered by a drag. The directional operation is triggered in a "slide operation" manner. 3. Triggered by a point touch on a minimap. The directional operation is triggered by using a tap operation on the minimap. 4. Triggered by a combo. The directional operation is triggered by "double-tapping" or "triple-tapping" the target display element.
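A minimal, hypothetical sketch of the relay described in step304 above: the originating client wraps the target prompt information in a frame synchronization signal, and the server forwards it to the clients of the selected recipients. The class and method names are assumptions and do not reflect an actual protocol.

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class FrameSyncSignal:
        frame_index: int
        sender_id: str
        target_prompt: str            # the predicted target prompt information

    class Client:
        def __init__(self, client_id: str):
            self.client_id = client_id

        def on_frame_sync(self, signal: FrameSyncSignal):
            # Display or play the target prompt information carried by the signal, e.g. as a chat message.
            print("[%s] %s: %s" % (self.client_id, signal.sender_id, signal.target_prompt))

    class Server:
        def __init__(self, clients: Dict[str, Client]):
            self.clients = clients

        def relay(self, signal: FrameSyncSignal, recipient_ids: List[str]):
            # Forward the frame synchronization signal to teammate clients only,
            # or to the clients of all virtual characters in the battle.
            for client_id in recipient_ids:
                self.clients[client_id].on_frame_sync(signal)

    clients = {name: Client(name) for name in ("sender", "teammate_1", "teammate_2")}
    signal = FrameSyncSignal(frame_index=1024, sender_id="sender", target_prompt="Launch an attack")
    Server(clients).relay(signal, recipient_ids=["teammate_1", "teammate_2"])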
For the first manner: the long press triggering manner, reference is made to the following embodiment. FIG.6is a flowchart of a method for transmitting prompt information in a multiplayer online battle program according to an exemplary embodiment of this application. The method may be performed by any terminal inFIG.1. An example in which an interaction panel region includes a button control as a signal control (a signal button for short) is used for description, and the foregoing step302may be implemented into the following steps. Step302-1. Receive a first touch operation applied on the button control on a UI. The first touch operation is a tap operation or a long press operation, and an example in which the first touch operation is the long press operation is used for description in this embodiment. When a user expects to transmit prompt information, the user long presses the signal button. Step302-2. Display one or more candidate display elements on the UI. The candidate display elements are display elements, among the plurality of display elements displayed on the UI, that support further triggering of a subsequent directional operation based on the first touch operation, that is, display elements that can continuously trigger a prompt information transmission on the UI. In an example, display manners of the one or more candidate display elements displayed on the UI remain unchanged. In another example, the one or more candidate display elements are displayed on the UI in a highlight manner, the highlight manner including at least one of the following display manners: a target color display manner, an overlay masking display manner, a highlight display manner, and a contour display manner. For example, the display elements that can trigger a prompt information transmission in a HUD region are highlighted and contoured for display in the UI. Step302-3. Receive a second touch operation applied on the target display element in the candidate display elements on the UI. The second touch operation is a tap operation, a double-tap operation, a combo operation, or a long press operation, and an example in which the second touch operation is the tap operation is used for description in this embodiment. For example, referring toFIG.7, when the prompt information needs to be transmitted, the user first long presses a signal button07, and then each display element in the HUD region that can trigger a prompt information transmission is displayed in bold. The user may select a death panel17as the target display element, that is, the user may tap the death panel17. After determining the death panel17as the target display element, the multiplayer online battle program obtains battle information of "There are 2 seconds left before the master virtual character controlled by the user resurges" to generate target prompt information of "Master virtual character: I have 2 seconds to resurge", and transmits the target prompt information to teammates in a form of a chat message. In conclusion, according to the method provided in this embodiment, the directional operation is triggered by using two touch operations. This is consistent with existing interaction designs such as point touch operations on skill buttons, and therefore, the learning cost of the user is reduced. The cognition threshold is low, the operation is simple, and it is easy for beginner and ordinary users to learn and use.
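A minimal sketch, in Python, of the long-press triggering manner above (steps 302-1 to 302-3); the control identifiers and handler names are hypothetical.

    class LongPressTrigger:
        # Long press the signal button, highlight the candidate display elements,
        # then tap one candidate to select it as the target display element.
        def __init__(self, candidate_elements):
            self.candidate_elements = set(candidate_elements)
            self.armed = False

        def on_long_press(self, control_id):
            # Step 302-1: first touch operation on the button control (the signal button).
            if control_id == "signal_button_07":
                self.armed = True
                return sorted(self.candidate_elements)   # step 302-2: elements to highlight
            return []

        def on_tap(self, element_id):
            # Step 302-3: second touch operation on one of the candidate display elements.
            if self.armed and element_id in self.candidate_elements:
                self.armed = False
                return element_id                         # the target display element
            return None

    trigger = LongPressTrigger(["death_panel_17", "scoreboard_03", "skill_button_09"])
    trigger.on_long_press("signal_button_07")
    target = trigger.on_tap("death_panel_17")             # -> "death_panel_17"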
For the second manner: the drag triggering manner, reference is made to the following embodiment. FIG.8is a flowchart of a method for transmitting prompt information in a multiplayer online battle program according to an exemplary embodiment of this application. The method may be performed by any terminal inFIG.1. An example in which a HUD region includes a button control as a signal control (a signal button for short) is used for description, and the foregoing step302may be implemented into the following steps. Step302-A. Receive a slide operation on a UI, a slide starting point of the slide operation being the button control, and a slide end point of the slide operation being a target display element. When a user expects to transmit prompt information, the user slides (or drags) from the signal button and releases the touch after sliding to the target display element. Step302-B. Display an auxiliary line (or a movement route) pointing from the slide starting point to the slide end point on the UI when the slide operation is received on the button control. To display feedback on the slide operation of the user, as an optional step, the multiplayer online battle program displays the auxiliary line (or the movement route) pointing from the slide starting point to the slide end point on the UI when the slide operation is received on the button control. For example, the auxiliary line is a ray pointing from the button control to the slide end point. The movement route is a ray or a curve formed by following a sliding track of a touched object. For example, after the slide operation is received, additional buttons are displayed within a preset range around the button control: an attack button, a retreat button, and an assembly button. If the slide end point of the slide operation is the attack button, prompt information of "launch an attack" is transmitted rapidly. If the slide end point of the slide operation is the retreat button, prompt information of "start a retreat" is transmitted rapidly. If the slide end point of the slide operation is the assembly button, prompt information of "request to assemble" is transmitted rapidly. For example, referring toFIG.9, when the prompt information needs to be transmitted, the user first long presses a signal button07, and then each display element in the HUD region that can trigger a prompt information transmission is displayed in bold. The user may select a skill button09of a master virtual character as the target display element, that is, the user may trigger a slide operation from the signal button07to the skill button09of the master virtual character. After the slide operation is received by the signal button07, an auxiliary line from the signal button07to the skill button09of the master virtual character may be displayed on the UI. After determining the skill button09of the master virtual character as the target display element, the multiplayer online battle program obtains battle information of "a skill2of the master virtual character controlled by the user has 2 seconds left to refresh" to generate target prompt information of "Master virtual character: My skill2can be refreshed after 2 seconds", and transmits the target prompt information to teammates in a form of a chat message. In conclusion, according to the method provided in this embodiment, a directional operation is triggered by using a slide operation. In a fierce battle, a drag-type slide operation may shorten the time required for the operation for a user with high proficiency and a fast hand speed, and therefore, the communication efficiency of high-end users is improved. In addition, when the user releases the touch, the button control, the auxiliary line, and the three additionally displayed rapid signal buttons related to the drag-type slide operation are all restored and no longer block the vision, so the user may get back to the fierce battle operation faster than with the point touch operation.
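A hypothetical sketch of this drag manner in Python: the slide must start on the signal button, the auxiliary line would follow the recorded touch path, and the display element under the release point becomes the target display element. The element identifiers and coordinates below are assumptions.

    SELECTABLE_ELEMENTS = {
        # element identifier: (x, y, radius) in normalized screen coordinates (assumed values)
        "attack_button": (0.80, 0.20, 0.06),
        "retreat_button": (0.86, 0.30, 0.06),
        "assembly_button": (0.92, 0.40, 0.06),
        "skill_button_09": (0.70, 0.85, 0.05),
    }

    def element_at(point):
        # Hit-test a touch point against the selectable display elements.
        for element_id, (x, y, radius) in SELECTABLE_ELEMENTS.items():
            if (point[0] - x) ** 2 + (point[1] - y) ** 2 <= radius ** 2:
                return element_id
        return None

    def on_slide(start_control, path):
        # Step 302-A: the slide must start on the signal button.
        # Step 302-B: an auxiliary line from the start point to the latest point of `path` would be drawn.
        if start_control != "signal_button_07" or not path:
            return None
        return element_at(path[-1])      # the display element under the slide end point, if any

    target = on_slide("signal_button_07", [(0.75, 0.50), (0.72, 0.80), (0.70, 0.85)])
    # -> "skill_button_09"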
For the third manner: the point touch triggering manner based on a minimap, reference is made to the following embodiment. FIG.10is a flowchart of a method for transmitting prompt information in a multiplayer online battle program according to an exemplary embodiment of this application. The method may be performed by any terminal inFIG.1. An example in which a HUD region includes a map expansion control (also referred to as a minimap button) as a signal control is used for description, and the foregoing step302may be implemented into the following steps. Step302-a. Receive a third touch operation applied on the map expansion control on a UI. The third touch operation may be a tap operation, a double-tap operation, a combo operation, or a long press operation, and an example in which the third touch operation is the tap operation is used for description in this embodiment. When a user expects to transmit prompt information, the user taps the map expansion control. Step302-b. Display a map viewing control of a virtual environment on the UI according to the third touch operation. The map viewing control is a control with a target area size for viewing the map of the virtual environment from a God's perspective. The target area size is greater than an area size of the minimap. The map viewing control is also referred to as a middle map control. In some embodiments, display elements that may be triggered are displayed in a highlight manner on the map viewing control. For example, the map viewing control displays at least one of the following display elements: a top lane, a middle lane, a bottom lane, turrets, monster points, a big dragon point, and a small dragon point. The big dragon point and the small dragon point may be considered as monster points that can provide team gains. Step302-c. Receive a fourth touch operation applied on a target display element on the map viewing control. The fourth touch operation may be a tap operation, a double-tap operation, a combo operation, or a long press operation, and an example in which the fourth touch operation is the tap operation is used for description in this embodiment. For example, referring toFIG.11, when the prompt information needs to be transmitted, a user first taps a minimap button06, and the multiplayer online battle program displays the map viewing control (the middle map control)18on the UI in an overlay manner, the map viewing control18displaying the top lane, the middle lane, the bottom lane, the turrets, the monster points, the big dragon point, and the small dragon point. The user may tap to select the big dragon point as the target display element on the map viewing control18. After determining the big dragon point as the target display element, the multiplayer online battle program obtains battle information of "The big dragon has been refreshed for 2 seconds" to generate target prompt information of "Master virtual character: Beware of the enemy killing the big dragon furtively", and transmits the target prompt information to teammates in a form of a chat message. In conclusion, according to the method provided in this embodiment, a directional operation is triggered based on the map viewing control. Display elements located outside the vision range of the virtual environment image can be selected, so that the target display element is no longer limited to the vision range, and all display elements on the map, especially relatively large or macro display elements (such as the top lane, the middle lane, and the bottom lane), may be selected rapidly, thereby improving the applicability and functionality of the foregoing method for transmitting prompt information.
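A brief, hypothetical sketch of this point-touch manner in Python: tapping the map expansion control opens the map viewing control, and the second tap is hit-tested against the displayed points. The point names and coordinates are assumptions.

    MAP_POINTS = {
        # point identifier: (x, y) position on the map viewing control (assumed values)
        "big_dragon_point": (0.62, 0.41),
        "small_dragon_point": (0.38, 0.60),
        "middle_lane": (0.50, 0.50),
    }

    def on_tap_map_expansion_control():
        # Step 302-b: display the map viewing control with its selectable points.
        return MAP_POINTS

    def hit_test(tap_xy, radius=0.05):
        # Step 302-c: resolve the fourth touch operation to the nearest displayed point, if any.
        for element_id, (x, y) in MAP_POINTS.items():
            if (tap_xy[0] - x) ** 2 + (tap_xy[1] - y) ** 2 <= radius ** 2:
                return element_id
        return None

    on_tap_map_expansion_control()
    target = hit_test((0.61, 0.42))      # -> "big_dragon_point"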
For any one of the first manner to the third manner, with reference toFIG.5, when a user controls a virtual character, the user usually controls the virtual character to move by using the left thumb, and controls the virtual character to release skills by using the right thumb. Most users continuously press a moving control (also referred to as a left hand wheel) with their left thumbs and press button controls by using intermittent touch operations with their right thumbs. Therefore, in order not to interrupt continuous operations of the left thumbs of the users, the moving control14and the signal button07are located at two marginal regions far away from each other on the UI. For example, the moving control14is located at a left side marginal region of the UI, and the signal button07is located at a right side marginal region of the UI. Similarly, the moving control14and the map expansion control06are also located at two marginal regions far away from each other on the UI. For example, the moving control14is located at the left side marginal region of the UI, and the map expansion control06is located at a right side marginal region of the UI. For the fourth manner: the combo triggering manner, reference is made to the following embodiment. FIG.12is a flowchart of a method for transmitting prompt information in a multiplayer online battle program according to an exemplary embodiment of this application. The method may be performed by any terminal inFIG.1. An example in which a HUD region includes a map expansion control (also referred to as a minimap button) as a signal control is used for description, and the foregoing step302may be implemented into the following steps. Step302-C. Receive a continuous touches operation applied on a target display element on a UI, the continuous touches operation including at least two continuous tap operations. The continuous touches operation may be a double-tap operation, a triple-tap operation, or an operation of more taps, and an example in which the continuous touches operation is the double-tap operation is used for description in this embodiment. When a user expects to transmit prompt information, the user double-taps the target display element on the UI. For example, referring toFIG.13, when the prompt information needs to be transmitted, the user double-taps a scoreboard03. After determining the scoreboard03as the target display element, the multiplayer online battle program obtains battle information of "A killing scores ratio is 3:4" to generate target prompt information of "Master virtual character: The killing scores gap is not big, be patient to grow, and don't be cocky", and transmits the target prompt information to teammates in a form of a chat message.
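A hypothetical sketch of the combo manner in Python: two taps on the same display element within a short window are treated as the continuous touches operation, and that element becomes the target display element. The 0.3-second window is an assumption, not a value given in this application.

    import time

    class DoubleTapDetector:
        def __init__(self, window_s=0.3):
            self.window_s = window_s
            self.last_tap = {}            # element identifier -> timestamp of the previous tap

        def on_tap(self, element_id, now=None):
            now = time.monotonic() if now is None else now
            previous = self.last_tap.get(element_id)
            self.last_tap[element_id] = now
            if previous is not None and now - previous <= self.window_s:
                return element_id         # continuous touches: this element is the target display element
            return None

    detector = DoubleTapDetector()
    detector.on_tap("scoreboard_03", now=10.00)
    target = detector.on_tap("scoreboard_03", now=10.20)   # -> "scoreboard_03"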
This embodiment of this application does not limit the triggering manners of the directional operation, and the foregoing triggering manners may be also combined into a new embodiment freely. For example, the drag triggering manner is combined with the minimap triggering manner, and as shown inFIG.14, when a user needs to transmit prompt information, the user first slides from the signal button07to the map expansion control06, where a map viewing control is displayed after the map expansion control06is triggered, and then slides (slides continuously without interruptions) to a big dragon position in the map viewing control. The multiplayer online battle program determines that the big dragon is a target display element, and transmits target prompt information of “Beware of the enemy killing the big dragon furtively” with reference to current battle situation information. In some embodiments based on the foregoing embodiments, the prediction process of the “target prompt information” is implemented by using a behavior tree. The behavior tree includes a correspondence between display elements, battle information, and prompt information. For example, the behavior tree stores the correspondence between the display elements, the battle information, and the prompt information by using a tree structure. The multiplayer online battle program queries, according to a target display element and battle information, the behavior tree for target prompt information expected to be transmitted. Different target display elements are described respectively in the following. For example, there is one or more behavior trees. The querying, according to a target display element and battle information, the behavior tree for target prompt information expected to be transmitted may be implemented in forms including but not limited to the following. 1. The target display element includes skill class display elements, the battle information includes a skill availability status. The skill class display elements are controls for performing skills. In some embodiments, there may be one or more skill class display elements disposed on an interaction panel region, for example, hero skills, a summoner ability, and a skill obtained temporarily and having a valid duration. In a case that the target display element is the skill class display element and a skill availability status corresponding to the target display element is unavailable, first target prompt information used for indicating that a skill is unavailable (disabled or cooled down) is determined. In a case that the target display element is the skill class display element and a skill availability status corresponding to the target display element is available, second target prompt information used for indicating that a skill is available is determined. 2. The target display element includes skill class display elements, and the battle information includes a skill CD time. In a case that the target display element is the skill class display element and a skill CD time corresponding to the target display element is valid, third target prompt information used for indicating the skill CD time is determined. 3. The target display element includes resource class display elements, and the battle information includes vision information of the resource class display element. The resource class display elements are display elements configured for providing gold coin resources, blood volume resources, magic resources, and BUFF resources. 
In some embodiments, there may be one or more resource class display elements disposed in a virtual world, for example, monster points, a big dragon point, a small dragon point, and a bone dragon point. In a case that the target display element is the resource class display element and the vision information corresponding to the target display element has vision of an enemy, fourth target prompt information used for indicating that the enemy is obtaining the resource is determined. 4. The target display element includes resource class display elements, and the battle information includes refresh information of the resource class display element. The resource class display elements may be one-time or may be refreshed for a plurality of times. For example, after a group of monsters in the monster point is killed, a new group of monsters are refreshed automatically after waiting for a duration. In a case that the target display element is the resource class display element and the refresh information is valid, fifth target prompt information used for indicating a remaining refresh time of the resource class display element is determined. 5. The target display element includes virtual character class display elements, and the battle information includes a state of the virtual character class display element. The virtual character class display elements are virtual characters controlled by users, including virtual characters of the user's own side (the user's own camp) and virtual characters of the enemy side (the opposing camp or the enemy camp). In some embodiments, there may be one or more virtual character class display elements disposed in the virtual world. In a case that the target display element is the virtual character class display element and the state of the target display element is a first designated state, sixth target prompt information used for indicating that the virtual character class display element is in the first designated state is determined. The state includes at least one of a blood volume state, a magic state, a recall state, a moving state, an attack state, an attacked state, an equipment state, a level state, and an external wearing state. 6. The target display element includes construction class display elements, and the battle information includes a state of the construction class display element. The construction class display elements are three-dimensional models of constructions fixed in the map. In some embodiments, there may be one or more construction class display elements disposed in the virtual world, for example, turrets, a base defense, or constructions summoned by a hero's skills. In a case that the target display element is the construction class display element and the state of the target display element is a second designated state, seventh target prompt information used for indicating that the construction class display element is in the second designated state is determined. 7. The target display element includes a network information display element, and the battle information includes network speed information. The network information display element is a display element used for indicating current network speed information of a terminal, and is usually disposed on the interaction panel region.
In a case that the target display element is the network information display element and a network speed of the network speed information is less than a first threshold, eighth target prompt information used for indicating a network speed state is determined. 8. The target display element includes a hardware performance display element, and the battle information includes a hardware working performance parameter. The hardware performance display element is a display element used for indicating current hardware performance (CPU or frame rate) of the terminal, and is usually disposed on the interaction panel region. In a case that the target display element is the hardware performance display element and the hardware working performance parameter is less than a second threshold, ninth target prompt information used for indicating a hardware performance state is determined. 9. The target display element includes a prompt information display element, and the battle information includes message content in the prompt information display element. The prompt information display element is a display element used for displaying prompt information transmitted by others (or may be the user itself), for example, a dialog box. In a case that the target display element is the prompt information display element, tenth target prompt information used for automatically replying to the message content is provided. Target display element: a skill A of a master virtual character. The master virtual character has a plurality of skills, for example, has three skills or four skills, and the skill A is one of the plurality of skills. When the target display element is the skill A of the master virtual character, referring toFIG.15, the foregoing step303may be implemented into the following steps. Step401. Determine whether the skill A has been upgraded (for at least one level). Upgrade means that after an experience value of the master virtual character reaches a threshold value, the skill A is obtained (or the skill A of a higher level). Step402is performed when the skill A has been upgraded. Step403is performed when the skill A has not been upgraded. Step402. Determine whether the skill A is disabled. Step404is performed when the skill A is disabled. Step405is performed when the skill A is not disabled. Step403. Determine the target prompt information as "the skill A has not been upgraded". Step404. Determine the target prompt information as "the skill A is disabled". Step405. Determine whether the skill A is cooled down. Step406is performed when the skill A is in a cooled-down state. Step407is performed when the skill A is in an available state. Step406. Determine the target prompt information as "the skill A has X seconds of CD". For example, X is a variable and is determined by the multiplayer online battle program according to a count number in a cooldown timer of the skill A. Step407. Determine the target prompt information as "the skill A has been ready". For example, as shown inFIG.16, a user applies a slide operation on a UI, a slide starting point of the slide operation being a signal button, and a slide end point of the slide operation being a skill button of the skill A. The multiplayer online battle program determines the target prompt information as "the skill A has X seconds of CD" with reference to current battle information.
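A compact, hypothetical sketch in Python of the skill availability branch above (steps 401 to 407); the parameter names are assumptions, and the returned strings follow the wording of the flow.

    def prompt_for_skill(skill_name, upgraded, disabled, cooldown_s):
        # Steps 401/403: the skill has not been upgraded yet.
        if not upgraded:
            return "the %s has not been upgraded" % skill_name
        # Steps 402/404: the skill is disabled.
        if disabled:
            return "the %s is disabled" % skill_name
        # Steps 405/406: the skill is cooling down; X is read from the cooldown timer.
        if cooldown_s > 0:
            return "the %s has %d seconds of CD" % (skill_name, cooldown_s)
        # Step 407: the skill is available.
        return "the %s has been ready" % skill_name

    prompt_for_skill("skill A", upgraded=True, disabled=False, cooldown_s=2)
    # -> "the skill A has 2 seconds of CD"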
Target display element: a summoner ability B. A master virtual character may select to carry a summoner ability B before the game starts, for example, a flash ability. The summoner ability B is one of a plurality of summoner abilities. For example, the summoner abilities include: Heal, Sprint, Smite, Shut, Anger of, Provoke, Daze, Purify, Cripple, and Flash. When the target display element is the summoner ability B, referring toFIG.17, the foregoing step303may be implemented into the following steps. Step501. Determine whether the summoner ability B is cooled down. Step502is performed when the summoner ability B is in a cooled-down state. Step503is performed when the summoner ability B is in an available state. Step502. Determine the target prompt information as "the summoner ability B has X seconds of CD". Step503. Determine the target prompt information as "the summoner ability B has been ready". For example, as shown inFIG.18, an example in which the summoner ability B is "flash" is used. A user applies a slide operation on a UI, a slide starting point of the slide operation being a signal button, and a slide end point of the slide operation being an ability button11of the summoner ability "flash". The multiplayer online battle program determines the target prompt information as "The flash has been ready" with reference to current battle information. The flash is an operation manner of teleporting a short distance. Target display element: a restore spell. The restore spell is a spell configured for restoring an HP, a magic point, or an HP+a magic point of the master virtual character. When the target display element is the restore spell, referring toFIG.19, the foregoing step303may be implemented into the following steps. Step601. Determine whether the restore spell is cooled down. Step602is performed when the restore spell is in a cooled-down state. Step605is performed when the restore spell is in an available state. Step602. Determine whether a personal blood volume is lower than X %. Step603is performed when the blood volume is lower than X %. Step604is performed when the blood volume is not lower than X %. Step603. Determine the target prompt information as "I need a heal". Step604. Determine the target prompt information as "The restore spell has X seconds of CD". Step605. Determine the target prompt information as "Please restore state in time". For example, as shown inFIG.20, a user applies a slide operation on a UI, a slide starting point of the slide operation being a signal button, and a slide end point of the slide operation being a spell button12of the "restore" spell. The multiplayer online battle program determines the target prompt information as "I need a heal" with reference to current battle information. Target display element: a recall (base) spell. The recall spell is a spell configured for teleporting back to a base of the user's camp after a chant of a predetermined duration. When the target display element is the recall spell, referring toFIG.21, the foregoing step303may be implemented into the following steps. Step701. Determine whether the virtual character is recalling. Step702is performed when the virtual character is in the "recalling" state. Step703is performed when the virtual character is not in the "recalling" state. Step702. Determine the target prompt information as "I am recalling". Step703. Determine whether a personal blood volume is lower than X %. Step704is performed when the blood volume is lower than X %. Step705is performed when the blood volume is not lower than X %. Step704. Determine the target prompt information as "I need to go back to a spring to restore". Step705. Determine whether an energy bar is lower than Y %.
Step704is performed when the energy bar is lower than Y %. Step706is performed when the energy bar is not lower than Y %. Step706. Determine the target prompt information as "Don't be cocky with a poor condition, and please go back to the spring to restore". For example, as shown inFIG.22, a user applies a slide operation on a UI, a slide starting point of the slide operation being a signal button, and a slide end point of the slide operation being a spell button13of the "recall" spell. The multiplayer online battle program determines the target prompt information as "I am recalling" with reference to current battle information. Target display element: a network speed identifier in a device information region. The network speed identifier is configured for indicating a signal quality of a mobile network or a wireless fidelity (Wi-Fi) network to which a terminal is currently connected. Referring toFIG.23, the foregoing step303may be implemented into the following steps. Step801. Determine whether the network latency is lower than X ms. Step802is performed when the network latency is lower than X ms. Step803is performed when the network latency is not lower than X ms. Step802. Determine the target prompt information as "My network speed is very stable, and please rest assured". Step803. Determine the target prompt information as "I am sorry, my network may be unstable". For example, as shown inFIG.24, a user applies a slide operation on a UI, a slide starting point of the slide operation being a signal button, and a slide end point of the slide operation being a device information region04including the "network speed identifier". The multiplayer online battle program determines the target prompt information as "I am sorry, my network may be unstable" with reference to current battle information. Target display element: a scoreboard. The scoreboard is an information display control configured for recording a killing score ratio of the virtual characters of the two opposing camps. Referring toFIG.25, the foregoing step303may be implemented into the following steps. Step901. Determine whether this round has been started for less than 1 minute. Step902is performed when the round has been started for less than 1 minute. Step903is performed when the round has been started for greater than or equal to 1 minute. Step902. Determine the target prompt information as "I have confirmed that you are not the people who deceive me". Step903. Determine whether this round has been started for more than 18 minutes. Step904is performed when the round has been started for more than 18 minutes. Step915is performed when the round has been started for less than or equal to 18 minutes. Step904. Determine whether a killing score ratio is ahead. Step905is performed when the killing score ratio is ahead. Step910is performed when the killing score ratio is not ahead. Step905. Determine whether a total economy difference is within x. Step906is performed when the total economy difference is within x. Step907is performed when the total economy difference exceeds x. Step906. Determine the target prompt information as "The economy difference is not big, don't be cocky". Step907. Determine whether the total economy is ahead. Step908is performed when the total economy is ahead. Step909is performed when the total economy falls behind. Step908. Determine the target prompt information as "We have a great advantage, please keep it up". Step909.
Determine the target prompt information as "Our economy falls behind, be patient to grow". Step910. Determine whether a total economy difference is within x. For example, the threshold x varies with time. For example, a started duration of this round is y, and the threshold x is as follows:
an interval of the started duration of this round: 0 seconds to 180 seconds, x=851;
an interval of the started duration of this round: 180 seconds to 360 seconds, x=3(y−180)+851;
an interval of the started duration of this round: 180 seconds to 360 seconds, x=1.28(y−180)+1391;
an interval of the started duration of this round: 180 seconds to 360 seconds, x=7.86(y−180)+1623;
an interval of the started duration of this round: 180 seconds to 360 seconds, x=3.09(y−180)+3038; and
an interval of the started duration of this round: 900 seconds to 1080 seconds, x=2.77(y−180)+3595.
Step911is performed when the total economy difference is within x. Step912is performed when the total economy difference exceeds x. Step911. Determine the target prompt information as "Hold on, the economy difference is not big". Step912. Determine whether the total economy is ahead. Step913is performed when the total economy is ahead. Step914is performed when the total economy falls behind. Step913. Determine the target prompt information as "Our economy is ahead, please go ahead steadily and surely". Step914. Determine the target prompt information as "Please pay attention to cooperate, we can win". Step915. Determine the target prompt information as "With you in the regiment battle, the victory and defeat are wonderful". For example, as shown inFIG.26, a user applies a slide operation on a UI, a slide starting point of the slide operation being a signal button, and a slide end point of the slide operation being a scoreboard03. The multiplayer online battle program determines the target prompt information as "Our economy is ahead, please go ahead steadily and surely" with reference to current battle information. Target display element: a top lane/middle lane/bottom lane of a map viewing control. Referring toFIG.27, the foregoing step303may be implemented into the following steps. Step1001. Determine whether this round has been started for less than X minutes. Step1002is performed when the round has been started for less than X minutes. Step1013is performed when the round has been started for greater than or equal to X minutes. Step1002. Determine whether the master virtual character selects a top lane/middle lane/bottom lane region and the master virtual character is in the region. Step1003is performed when the master virtual character selects the top lane/middle lane/bottom lane region and the master virtual character is in the region; otherwise, step1010is performed. Step1003. Determine whether a first turret (the first turret) on the lane exists. Step1004is performed when the first turret exists. Step1007is performed when the first turret does not exist. Step1004. Determine whether there is no vision of any enemy virtual character within the range of the lane. Step1005is performed when there is no vision of any enemy virtual character. Step1006is performed when there are visions of X enemy virtual characters. Step1005. Determine the target prompt information as "The enemy on the top lane/middle lane/bottom lane is missing, attention please". Step1006. Determine the target prompt information as "There are X enemies on the top lane/middle lane/bottom lane". Step1007. Determine whether an enemy creep crosses the riverway (enter our region).
Step1008is performed when an enemy creep crosses the riverway. Step1009is performed when an enemy creep does not cross the riverway. Step1008. Determine the target prompt information as “Pay attention to pushing the creep line on the top lane/middle lane/bottom lane”. Step1009. Determine the target prompt information as “Pay attention to the top lane/middle lane/bottom lane”. Step1010. Determine whether a movement distance is shortened in 0.2 second. Determine whether a movement distance of the master virtual character controlled by the user is shortened in 0.2 second. Step1011is performed when the movement distance is shortened. Step1012is performed when the movement distance is not shortened. Step1011. Determine the target prompt information as “I am on the way, and may arrive in x seconds approximately”. Step1012. Determine the target prompt information as “Caution”. Step1013. Determine whether the master virtual character selects a top lane/middle lane/bottom lane region and the master virtual character is in the region. Step1014is performed when the master virtual character selects the top lane/middle lane/bottom lane region and the master virtual character is in the region; otherwise, step1015is performed. Step1014. Determine the target prompt information as “Let me push the creep line on the top lane/middle lane/bottom lane”. Step1015. Determine whether an enemy creep crosses the riverway (enter our region). Step1016is performed when an enemy creep crosses the riverway. Step1017is performed when an enemy creep does not cross the riverway. Step1016. Determine the target prompt information as “Pay attention to pushing the creep line on the top lane/middle lane/bottom lane”. Step1017. Determine whether a movement distance is shortened in 0.2 second. Determine whether a movement distance of the master virtual character controlled by the user is shortened in 0.2 second. Step1018is performed when the movement distance is shortened. Step1019is performed when the movement distance is not shortened. Step1018. Determine the target prompt information as “I am on the way, and may arrive in x seconds approximately”. Step1019. Determine the target prompt information as “Caution”. For example, as shown inFIG.28, a user applies a slide operation on a UI, a slide starting point of the slide operation being a signal button, and a slide end point of the slide operation being a middle lane region on the map viewing control. The multiplayer online battle program determines the target prompt information as “Pay attention to the middle lane” with reference to current battle information. Target display element: a big dragon (or a small dragon) in a map viewing control. The big dragon (or the small dragon) is a monster that gives BUFF gains to all virtual characters in a camp of the killer after the big dragon (or the small dragon) is killed. Referring toFIG.29, the foregoing step303may be implemented into the following steps. Step1101. Determine whether the big dragon (or the small dragon) survives. Step1102is performed when the big dragon (or the small dragon) survives. Step1111is performed when the big dragon (or the small dragon) does not survive. Step1102. Determine whether there is a vision within the range of the big dragon (or the small dragon). Step1103is performed when there is a vision; otherwise, step1110is performed. Step1103. Determine whether there is an enemy virtual character within the vision. Step1104is performed when there is an enemy virtual character within the vision. 
Step1109is performed when there is no enemy virtual character within the vision. Step1104. Determine whether the big dragon (or the small dragon) has a full HP. Step1105is performed when the big dragon (or the small dragon) has a full HP. Step1106is performed when the big dragon (or the small dragon) does not have a full HP. Step1105. Determine the target prompt information as “Caution, there is an enemy hero near the big dragon (or the small dragon)”. Step1106. Determine whether a blood volume of the big dragon (or the small dragon) is lower than X %. Step1107is performed when the blood volume is lower than X %. Step1108is performed when the blood volume is not lower than X %. Step1107. Determine the target prompt information as “Caution! The blood volume of the big dragon (or the small dragon) is relatively low”. Step1108. Determine the target prompt information as “Caution! The blood volume of the big dragon (or the small dragon) is under attack”. Step1109. Determine the target prompt information as “Attacking the big dragon (or the small dragon)”. Step1110. Determine the target prompt information as “The big dragon (or the small dragon) has x seconds left to be born”. For example, as shown inFIG.30, a user applies a slide operation on a UI, a slide starting point of the slide operation being a signal button, and a slide end point of the slide operation being a small dragon region on the map viewing control. The multiplayer online battle program determines the target prompt information as “The small dragon is under attack” with reference to current battle information. Target display element: a red (blue) BUFF in a map viewing control. The red (blue) BUFF is a gain effect obtained after a target monster in the jungle is killed. Referring toFIG.31, the foregoing step303may be implemented into the following steps. Step1201. Determine whether the red (blue) BUFF survives. Step1202is performed when the red (blue) BUFF survives. Step1210is performed when the red (blue) BUFF does not survive. Step1202. Determine whether the red (blue) BUFF is a red (blue) BUFF in our region. Step1203is performed when the red (blue) BUFF is the red (blue) BUFF in our region. Step1207is performed when the red (blue) BUFF is a red (blue) BUFF in an enemy region. Step1203. Determine whether there is a vision of the red (blue) BUFF. Step1204is performed when there is a vision of the red (blue) BUFF. Step1206is performed when there is no vision of the red (blue) BUFF. Step1204. Determine whether there is an enemy virtual character within the vision. Step1205is performed when there is an enemy virtual character within the vision. Step1206is performed when there is no enemy virtual character within the vision. Step1205. Determine the target prompt information as “Beware of the enemy killing the red (blue) BUFF furtively”. Step1206. Determine the target prompt information as “The enemy is attacking the red (blue) BUFF”. Step1207. Determine whether there is an enemy virtual character within the vision. Step1208is performed when there is an enemy virtual character within the vision. Step1209is performed when there is no enemy virtual character within the vision. Step1208. Determine the target prompt information as “Attacking the red (blue) BUFF in the enemy region”. Step1209. Determine the target prompt information as “Pay attention to the red (blue) BUFF in the enemy region”. Step1210. Determine whether the red (blue) BUFF is a red (blue) BUFF in our region. 
Step1211is performed when the red (blue) BUFF is the red (blue) BUFF in our region. Step1212is performed when the red (blue) BUFF is a red (blue) BUFF in an enemy region. Step1211. Determine the target prompt information as “The red (blue) BUFF in our region has X seconds left to be born”. Step1212. Determine whether there is a vision during the death of the red (blue) BUFF in the enemy region. Step1213is performed when there is the vision during the death. Step1214is performed when there is no vision during the death. Step1213. Determine the target prompt information as “The red (blue) BUFF in the enemy region has X seconds left to be born”. Step1214. Determine the target prompt information as “Pay attention to the red (blue) BUFF in the enemy region”. For example, as shown inFIG.32, a user applies a slide operation on a UI, a slide starting point of the slide operation being a signal button, and a slide end point of the slide operation being a location of a red BUFF monster in our region on the map viewing control. The multiplayer online battle program determines the target prompt information as “Beware of the enemy killing the red BUFF furtively” with reference to current battle information. Target display element: a virtual character (also referred to as a hero). Referring toFIG.33, the foregoing step303may be implemented into the following steps. Step1301. Determine whether the virtual character is the user itself (the master virtual character). Step1302is performed when the virtual character is the master virtual character. Step1305is performed when the virtual character is not the master virtual character. Step1302. Determine whether a blood volume of the virtual character is lower than X %. Step1303is performed when the blood volume is lower than X %. Step1304is performed when the blood volume is not lower than X %. Step1303. Determine the target prompt information as “I am in a poor condition”. Step1304. Determine the target prompt information as “I need a help”. Step1305. Determine whether the virtual character is a virtual character in our camp. Step1306is performed when the virtual character is a virtual character in our camp. Step1315is performed when the virtual character is not a virtual character in our camp. Step1306. Determine whether the virtual character carries a smite ability and kills more than 3 monsters, and this round of battle has been started for less than x minutes. Step1307is performed if yes; otherwise, step1310is performed. Step1307. Determine whether a blood volume of the virtual character is lower than X %. Step1308is performed when the blood volume is lower than X %. Step1309is performed when the blood volume is not lower than X %. Step1308. Determine the target prompt information as “xxx in our camp in a poor condition”. Step1309. Determine the target prompt information as “I need a help from the assassin”. Step1310. Determine whether a blood volume of the virtual character is lower than X %. Step1311is performed when the blood volume is lower than X %. Step1314is performed when the blood volume is not lower than X %. Step1311. Determine whether the virtual character is in an off-battle state. Step1312is performed when the virtual character is in the off-battle state. Step1313is performed when the virtual character is not in the off-battle state. Step1312. Determine the target prompt information as “xxx in our camp in a poor condition”. Step1313. Determine the target prompt information as “Protect xxx in our camp”. Step1314. 
Determine the target prompt information as "Pay attention to xxx in our camp". Step1315. Determine whether a blood volume of the virtual character is lower than X %. Step1316is performed when the blood volume is lower than X %. Step1317is performed when the blood volume is not lower than X %. Step1316. Determine the target prompt information as "A blood volume of xxx in the enemy camp is lower than x %". Step1317. Determine the target prompt information as "Attacking xxx in the enemy camp". For example, as shown inFIG.34, a user applies a slide operation on a UI, a slide starting point of the slide operation being a signal button, and a slide end point of the slide operation being an enemy virtual character "Li Bai". The multiplayer online battle program determines the target prompt information as "A blood volume of the enemy Li Bai is lower than 10%" with reference to current battle information. Target display element: a fort (or a base or a turret) in a map viewing control. Referring toFIG.35, the foregoing step303may be implemented into the following steps. Step1401. Determine whether the fort is a base. Step1402is performed when the fort is a base. Step1407is performed when the fort is not a base. Step1402. Determine whether the fort is the base in our camp. Step1403is performed when the fort is not the base in our camp. Step1404is performed when the fort is the base in our camp. Step1403. Determine the target prompt information as "Attacking the enemy's base". Step1404. Determine whether the base is under attack. Step1405is performed when the base is under attack. Step1406is performed when the base is not under attack. Step1405. Determine the target prompt information as "The base in our camp is under attack". Step1406. Determine the target prompt information as "Go back to defend the base". Step1407. Determine whether the fort is a turret in our camp. Step1408is performed when the fort is a turret in our camp. Step1413is performed when the fort is not a turret in our camp. Step1408. Determine whether the turret is under attack. Step1409is performed when the turret is under attack. Step1410is performed when the turret is not under attack. Step1409. Determine the target prompt information as "The turret in our camp is under attack". Step1410. Determine whether a blood volume of the virtual character is lower than X %. Step1411is performed when the blood volume is lower than X %. Step1412is performed when the blood volume is not lower than X %. Step1411. Determine the target prompt information as "Protect the turret in our camp". Step1412. Determine the target prompt information as "Pay attention to the turret in our camp". Step1413. Determine the target prompt information as "Attacking the turret in the enemy camp". For example, as shown inFIG.36, a user applies a slide operation on a UI, a slide starting point of the slide operation being a signal button, and a slide end point of the slide operation being the base in our camp. The multiplayer online battle program determines the target prompt information as "Go back to defend the base" with reference to current battle information. Target display element: a grass. Referring toFIG.37, the foregoing step303may be implemented into the following steps. Step1501. Determine whether a selected grass is a grass in which the virtual character is located. Step1502is performed when the grass is the grass in which the virtual character is located. Step1503is performed when the grass is not the grass in which the virtual character is located. Step1502.
Target display element: a grass. Referring toFIG.37, the foregoing step303may be implemented into the following steps. Step1501. Determine whether a selected grass is a grass in which the virtual character is located. Step1502is performed when the grass is the grass in which the virtual character is located. Step1503is performed when the grass is not the grass in which the virtual character is located. Step1502. Determine the target prompt information as “Come and wait in the grass”. Step1503. Determine the target prompt information as “Beware of the enemy in the grass”. Target display element: a bone dragon. The bone dragon is also referred to as a little dragon, and is a monster located opposite to the big dragon on the riverway in some MOBA maps. Referring toFIG.38, the foregoing step303may be implemented into the following steps. Step1601. Determine whether the bone dragon survives. Step1602is performed when the bone dragon survives. Step1607is performed when the bone dragon does not survive. Step1602. Determine whether there is a vision of the bone dragon. Step1603is performed when there is a vision of the bone dragon. Step1606is performed when there is no vision of the bone dragon. Step1603. Determine whether there is an enemy virtual character within the vision. Step1604is performed when there is an enemy virtual character within the vision. Step1605is performed when there is no enemy virtual character within the vision. Step1604. Determine the target prompt information as “Beware of the enemy killing the bone dragon furtively”. Step1605. Determine the target prompt information as “Attacking the bone dragon”. Step1606. Determine the target prompt information as “Pay attention to the bone dragon”. Step1607. Determine the target prompt information as “The bone dragon has x seconds left to be born”. Target display element: another region in a map viewing control. Referring toFIG.39, the foregoing step303may be implemented into the following steps. Step1701. Determine whether a movement distance is shortened within 0.2 seconds. Step1702is performed when the movement distance is shortened. Step1703is performed when the movement distance is not shortened. Step1702. Determine the target prompt information as “I am on the way, and may arrive in x seconds approximately”. Step1703. Determine the target prompt information as “Caution”. In different embodiments, the foregoing display elements may be further extended, and are not limited to the display elements enumerated above. The display elements include, but are not limited to, the following types. In an example, the display elements further include three-dimensional models or two-dimensional elements used for decoration in a virtual environment. As shown inFIG.40, when there is a lantern element used for decoration in the virtual environment and a directional operation of a user points to the lantern element, target prompt information of “Happy Spring Festival, and Have All My Wishes!” is transmitted. In an example, the display elements further include elements associated with a real world in the virtual environment. As shown inFIG.41, when an avatar “X God” of a user player exists above another virtual character in the virtual environment, a directional operation of the user points to the avatar, and it happens to be X God's birthday, target prompt information of “Happy Birthday to X God!” is transmitted. In an example, the display elements further include elements associated with user accounts in the virtual environment. As shown inFIG.42, when a nickname “First Li Bai in the national service: Meng Meng” of a game anchor exists above the another virtual character in the virtual environment, and a directional operation of the user points to the nickname, target prompt information of “First Li Bai in the national service: Meng Meng, I am your fan, cool!” is transmitted.
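The decoration, real-world, and account-related elements above follow one pattern: the element's metadata is combined with out-of-game context (date, festival, account information) to produce a canned prompt. A minimal C# sketch of that pattern follows; the element kinds, fields, fallback messages, and the festival flag are assumptions made for illustration only.

```csharp
using System;

// Illustrative sketch only; the element kinds, fields, and fallback messages are assumptions.
public enum SpecialElementKind { Decoration, Avatar, Nickname }

public sealed class SpecialElement
{
    public SpecialElementKind Kind;
    public string Label;          // e.g., "X God" or a game anchor's nickname attached to the element
    public DateTime? Birthday;    // only meaningful for Avatar elements
}

public static class SpecialPromptRules
{
    public static string Predict(SpecialElement e, DateTime today, bool isSpringFestival)
    {
        switch (e.Kind)
        {
            case SpecialElementKind.Decoration:
                // FIG.40: a festival lantern triggers a festival greeting.
                return isSpringFestival
                    ? "Happy Spring Festival, and Have All My Wishes!"
                    : "Pay attention here";                            // assumed fallback
            case SpecialElementKind.Avatar:
            {
                // FIG.41: a birthday greeting when today matches the account's birthday.
                bool isBirthday = e.Birthday.HasValue
                    && e.Birthday.Value.Month == today.Month
                    && e.Birthday.Value.Day == today.Day;
                return isBirthday ? $"Happy Birthday to {e.Label}!" : $"Hello, {e.Label}!";
            }
            case SpecialElementKind.Nickname:
                // FIG.42: a fan message addressed to the nickname.
                return $"{e.Label}, I am your fan, cool!";
            default:
                return "Caution";                                      // assumed fallback
        }
    }
}
```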
In an example, the display elements further include elements associated with teams to which the user accounts belong in the virtual environment. As shown inFIG.43, when there is a flag of a San Gao team on the ground of the virtual environment and a directional operation of a user points to the flag of the San Gao team, target prompt information of “San Gao team is mighty and I support San Gao team~” is transmitted. In an example, the display elements further include prompt information transmitted by other user accounts. As shown inFIG.44, when prompt information of “First Li Bai in the national service: Meng Meng, I am your fan, cool!” transmitted by another user account exists on the ground of the virtual environment and a directional operation of a user points to the prompt information, target prompt information of “Thanks a lot, I love you!” is transmitted. Arrangement positions of the display elements in the HUD region in the foregoing embodiments are variable. In an example shown inFIG.45, the scoreboard03and the device information region and master virtual character score region04are disposed at an upper region of the minimap control01located at a left side. In different embodiments, more or fewer display elements may exist in the HUD region, and this may be easily considered by a person skilled in the related art based on the foregoing embodiments, and details are not repeated herein. In an example, the prompt information transmission function may be enabled by a user manually. That is, the user manually controls whether to enable the prompt information transmission function. In another example, a prompt information receiving function may also be enabled by a user manually. As shown inFIG.46, in a “basic settings” panel in a “settings” interface, a setting option of “limit teammates' signals” is provided. In a case that the setting option of “limit teammates' signals” is enabled, the current client no longer displays prompt information transmitted by clients of other teammates. In an example, a triggering manner of the directional operation may be selected by a user manually. As shown inFIG.47, in an “operation settings” panel in a “settings” interface, a setting option of “map signals transmission manners” is provided. When the user checks a “point touch transmission” manner, the triggering manner shown inFIG.6is adopted. When the user checks a “sliding transmission” manner, the triggering manner shown inFIG.8is adopted. The foregoing prompt information may not only be transmitted to friendly users or all users, but may also be displayed in a local client. Therefore, this application further provides the following embodiment. FIG.48is a flowchart of a method for displaying prompt information according to an exemplary embodiment of this application. The method may be performed by any terminal inFIG.1, and the method includes the following steps. Step401. Display a UI of a multiplayer online battle program. The multiplayer online battle program is a program that enables at least two users to control virtual characters to battle in a virtual environment. The virtual environment is a battle environment configured for at least two virtual characters to battle. The multiplayer online battle program may be any one of a military simulation program, a MOBA game, a battle royale shooting game, and an SLG. In this embodiment, an example in which the multiplayer online battle program is a MOBA game is used for description.
In an example, as shown inFIG.4, the UI30of the multiplayer online battle program includes a virtual environment image32and an interaction panel region34. The virtual environment image32is an image of the virtual environment observed from a perspective corresponding to a master virtual character36. The master virtual character is a virtual character controlled by a user using the terminal in the virtual environment. The perspective corresponding to the master virtual character may be any one of a first-person perspective, a 45° bird's-eye view, a third-person perspective, and an over-shoulder perspective of the master virtual character. An example uses the 45° bird's-eye view for description in this embodiment. When the master virtual character36moves or rotates, the virtual environment image changes accordingly. The master virtual character36may appear in the virtual environment image or may not appear in the virtual environment image. The interaction panel region34is a UI element superimposed on the virtual environment image32. The interaction panel region34is divided into two types: information display elements used for displaying information and control function elements used for man-machine interaction. The interaction panel region34is also referred to as a HUD region. For example, as shown inFIG.5, the HUD region34includes: a minimap region01, a friend information region02, a scoreboard03, a device information region and master virtual character score region04, a menu region05, a minimap region extension button06, a button control07, a chatting control08, skill buttons09of the master virtual character, an attack skill button10of the master virtual character, a summoner ability11, a restore skill12, a recall skill13, a moving control14, a gold coin region15, and recommended equipment16. The friend information region02, the scoreboard03, and the device information region and master virtual character score region04are the information display elements, and the other elements are the control function elements. The interaction panel region34may include other elements, such as a death panel, a turret-attacking button, and a creep-attacking button, which is not limited in the embodiments. After a user starts a round of battle, the UI of the multiplayer online battle program is displayed. Step402. Receive a directional operation on the UI, the directional operation being an operation for activating a prompt information transmission function and pointing to a target display element in the UI. When the user needs to transmit prompt information, the user applies the directional operation on the UI. The directional operation may be a user operation, or may be an operation combination formed by two or more user operations. The directional operation is an operation for activating a prompt information transmission function and pointing to a target display element in the UI. The target display element is one of a plurality of display elements in the UI. In an example, the directional operation is an operation of a double-tap, a triple-tap, or a long-pressing on the target display element. In another example, the interaction panel region includes a signal control. The directional operation is an operation pointing from the signal control to a target display element in the UI. The signal control is a control for activating the prompt information transmission function. The plurality of display elements in the UI include, but are not limited to, at least one of the following elements. 1. 
Three-dimensional models forming battle function elements (non-decorative elements and non-visual presentation elements) in the virtual environment. For example, the battle function elements are elements that influence the battle process in the virtual environment. The three-dimensional models include, but are not limited to: virtual characters, turrets, bases, monsters, grass, detection eyes, a big dragon, a small dragon, and the like. 2. The information display elements in the interaction panel region. For example, the information display elements include: the friend information region02, the scoreboard03, the device information region and master virtual character score region04, and the death panel not shown inFIG.5. 3. The control function elements in the interaction panel region. For example, the control function elements include: the minimap region01, the minimap extension button06, the button control07, the chatting control08, the skill buttons09of the master virtual character, the attack skill button10of the master virtual character, the summoner ability11, the moving control14, the gold coin region15, and the recommended equipment16. For example, the control function elements further include a fast signal button. The fast signal button includes, but is not limited to an attack button, a retreat button, and an assembly button (not shown inFIG.5). Step403. Predict target prompt information according to the target display element and battle information. The target display element is a display element selected by the directional operation. The battle information is battle situation information during a round of battle. The battle information includes, but is not limited to the following: a started duration of this round of battle, levels of virtual characters, skill upgrade information, skill cooldown information, summoner ability types, health points (HP) of the virtual characters, HP of turrets, a position relationship between the virtual characters and grass, creep line situations, monsters refresh situations, network information, teammate positions, enemies positions, kill information, death information, date information, festival information, location information, matches information, and camp names of camps. For example, the prompt information is information for performing an information prompt for the teammates, the enemies, or all the virtual characters. A form of the prompt information includes, but is not limited to at least one of text, voices, icons, animation, and vibration feedback. In an example, the prompt information includes information of two types: fact information and intention information. The fact information is information representing existing facts in a current battle, for example, a monster is refreshed, a turret is being attacked, I see an enemy, and the like. The intention information is information representing a strategy intention of the user, for example, pay attention to pushing the creep line on the top lane, beware of ambush in the grass, the economy is so poor and we need to grow, and the like. Step404. Display the target prompt information on the UI. A client displays the target prompt information on the UI of its own. For example, the client displays the target prompt information at a left side region, a central region, or a right side region of the UI. In some embodiments, when a display duration of the target prompt information reaches a display threshold, the display of the target prompt information is canceled. 
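Steps 402 to 404 amount to: capture a directional operation, resolve its target display element, look up a prompt from that element plus the battle information, and show the prompt until a display threshold elapses. The C# sketch below strings these steps together for the local display case; all type names, the health lookup, and the timing value are assumptions made for illustration, not the patent's implementation.

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch of steps 402-404; names and values are assumptions.
public sealed class DirectionalOperation
{
    public string TargetElementId;   // the display element the operation points to
}

public sealed class BattleInfo
{
    // A tiny stand-in for battle situation information (here only blood volumes by element).
    public Dictionary<string, double> HealthPercentById = new Dictionary<string, double>();
}

public sealed class PromptPresenter
{
    private readonly TimeSpan _displayThreshold = TimeSpan.FromSeconds(3); // assumed threshold

    // Step 403: predict target prompt information from the target display element and battle information.
    public string Predict(DirectionalOperation op, BattleInfo battle)
    {
        if (battle.HealthPercentById.TryGetValue(op.TargetElementId, out var hp) && hp < 10.0)
            return $"A blood volume of {op.TargetElementId} is lower than 10%";
        return $"Pay attention to {op.TargetElementId}";
    }

    // Step 404: display the prompt on the local UI and cancel it once the display threshold is reached.
    public void Display(string prompt)
    {
        Console.WriteLine($"[UI] {prompt}");
        // A real client would schedule removal of the UI element; here the threshold is only reported.
        Console.WriteLine($"[UI] prompt will be canceled after {_displayThreshold.TotalSeconds} s");
    }
}
```

A client would call Predict when the directional operation is released and pass the result to Display, or hand it to the transmission path described in the earlier embodiment.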
For example, the display of the target prompt information is canceled in a blanking manner. This embodiment may be implemented independently, or may be implemented in combination with the foregoing embodiments, and this is not limited in this application. In some embodiments, the foregoing client (multiplayer online battle program) may alternatively be a program such as a shooting game, a racing game, a battle royale game, or a military simulation program. The client may support at least one operating system of a Windows operating system, an Apple operating system, an Android operating system, an iOS operating system, and a LINUX operating system, and clients on different operating systems may be connected to and communicate with each other. In some embodiments, the foregoing client is a program adapted to a mobile terminal having a touchscreen. In some embodiments, the foregoing client is an application program developed based on a three-dimensional engine. For example, the three-dimensional engine is a Unity engine.FIG.49is a schematic structural diagram of a terminal according to an exemplary embodiment of this application. As shown inFIG.49, the terminal includes a processor11, a touchscreen12, and a memory13. The processor11may be at least one of a single-core processor, a multi-core processor, an embedded chip, and a processor having an instruction running capability. The touchscreen12includes a pressure sensing touchscreen. The pressure sensing touchscreen may measure pressing strength on the touchscreen12. The memory13stores programs executable by the processor11. Schematically, the memory13stores a multiplayer online battle program A, an application program B, an application program C, a touch and pressure sensing module18, and a kernel layer19of an operating system. The multiplayer online battle program A is an application program developed based on a three-dimensional virtual engine17. In some embodiments, the multiplayer online battle program A includes, but is not limited to at least one of a game program, a virtual reality (VR) program, a three-dimensional map program, and a three-dimensional demonstration program that are developed by the three-dimensional virtual engine17. In an example, when an operating system of the terminal is an Android operating system, the multiplayer online battle program A is developed by using the Java programming language and the C# language. In another example, when an operating system of the terminal is an iOS operating system, the multiplayer online battle program A is developed by using the Objective-C programming language and the C# language. The three-dimensional virtual engine17is a three-dimensional interactive engine supporting a plurality of operating system platforms. Schematically, the three-dimensional virtual engine may be applied to program development in a plurality of fields such as the game development field, the VR field, and the three-dimensional map field. A specific type of the three-dimensional virtual engine17is not limited in this embodiment of this application. An example in which the three-dimensional virtual engine17is a Unity engine is used in the following embodiment for description. The touch (and pressure) sensing module18is a module configured to receive a touch event (and a pressure touch and control event) reported by a touchscreen drive program191. The touch event includes a type and coordinate values of the touch event.
The type of the touch event includes, but is not limited to a touch start event, a touch moving event, and a touch drop event. The pressure touch and control event includes a pressure value and coordinate values of the pressure touch and control event. The coordinate values are configured for indicating a touch and control position of a pressure touch and control operation on a display screen. In some embodiments, an x-axis is established in a horizontal direction of the display screen and a y-axis is established in a vertical direction of the display screen, and therefore, a two-dimensional coordinate system is obtained. Schematically, the kernel layer19includes the touchscreen drive program191and another drive program192. The touchscreen drive program191is a module configured to detect a pressure touch and control event. When detecting the pressure touch and control event, the touchscreen drive program191transfers the pressure touch and control event to the touch and pressure sensing module18. The another drive program192may be a drive program related to the processor11, a drive program related to the memory13, a drive program related to a network component, a drive program related to a sound component, or the like. A person skilled in the art may learn that the foregoing is only an overview of a structure of the terminal. In different embodiments, the terminal may have more or fewer components. For example, the terminal may further include a gravity acceleration sensor, a gyroscope sensor, a power supply, and the like. As shown inFIG.50, the foregoing prompt information transmission function in a code implementation may be divided into four steps. Step41. Receive an operation of a user at a presentation layer. The presentation layer is a layer at which the UI is located. Using a mobile terminal having a touchscreen as an example, a client receives a touch operation of the user at the presentation layer. The client selects a target display element and calls a corresponding sender entity according to the touch operation. The foregoing memory13stores base classes used for implementing the foregoing prompt information transmission function. The interaction procedure control of the entire presentation layer may generally be described as follows: the user performs a directional operation (drag, release, and tap) on the UI, activates the prompt information transmission function, and transmits prompt information. Different interaction control entities may be abstracted according to the operation behaviors of the user, as shown inFIG.51. CUICommunicateUnitTrigger: It is responsible for receiving a directional operation such as a long press operation, a drag operation, a release operation, and the like of the user, and acts as a trigger of the prompt information transmission function and plays a role of transmitting messages to a master controller. CommunicateController: It acts as the master controller, and is responsible for receiving the messages transmitted from the CUICommunicateUnitTrigger and executing corresponding procedures. CommunicateSystemView: It is responsible for controlling the display of some basic UI in the prompt information transmission function, for example, updating an auxiliary line (or a movement route). CUICommunicateUnitSender: It is responsible for identifying an entity that transmits prompt information, and the behaviors of the CUICommunicateUnitSender are driven by the CommunicateController.
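The four presentation-layer entities described above form a simple pipeline: the trigger turns raw touch events into messages, the master controller dispatches them, and the view and sender entities react. The C# sketch below mirrors that wiring at a high level; the touch-event shape, method names, and dispatch logic are assumptions for illustration only, since the patent only names the entities and their roles.

```csharp
// High-level sketch of the presentation-layer wiring (FIG.51); details are assumed.
public enum TouchKind { Start, Move, Drop }

public struct TouchEvent
{
    public TouchKind Kind;
    public float X, Y;            // coordinates reported by the touchscreen drive program
    public float Pressure;        // pressure value for pressure touch and control events
}

public sealed class CommunicateSystemView
{
    public void UpdateAuxiliaryLine(float x, float y) =>
        System.Console.WriteLine($"[view] auxiliary line end point ({x}, {y})");
}

public sealed class CUICommunicateUnitSender
{
    public void Send(string targetElementId) =>
        System.Console.WriteLine($"[sender] transmit prompt for {targetElementId}");
}

public sealed class CommunicateController
{
    private readonly CommunicateSystemView _view = new CommunicateSystemView();
    private readonly CUICommunicateUnitSender _sender = new CUICommunicateUnitSender();

    // Receives messages from the trigger and executes the corresponding procedure.
    public void OnTriggerMessage(TouchEvent e, string hoveredElementId)
    {
        if (e.Kind == TouchKind.Move)
            _view.UpdateAuxiliaryLine(e.X, e.Y);          // keep the auxiliary line following the finger
        else if (e.Kind == TouchKind.Drop && hoveredElementId != null)
            _sender.Send(hoveredElementId);               // release over an element triggers transmission
    }
}

public sealed class CUICommunicateUnitTrigger
{
    private readonly CommunicateController _controller;
    public CUICommunicateUnitTrigger(CommunicateController controller) { _controller = controller; }

    // Called for long-press / drag / release operations; forwards them to the master controller.
    public void Report(TouchEvent e, string hoveredElementId) => _controller.OnTriggerMessage(e, hoveredElementId);
}
```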
Step42. Call a sender entity. As shown inFIG.52, the memory13stores a unit base class CBaseSenderUnit used for triggering transmission of a signal, and several functions (OnActive, OnDeactive, OnEnter, OnExit, and OnSend) defined by the unit base class CBaseSenderUnit are all virtual functions, and derived classes of the functions are used for implementing a specific transmission behavior of a prompt signal. In an exemplary round of battle, display elements that can trigger prompt information may be classified into two types: three-dimensional battle field elements in a three-dimensional virtual environment and two-dimensional UI elements in the HUD region. In the code implementation, the two types of objects (the three-dimensional battle field elements and the two-dimensional UI elements) may be abstracted. When the user operates, entities of different types perform corresponding logic behaviors respectively. For example, when the directional operation determines a three-dimensional battle field element as the target display element, a logic behavior of calling a sender entity unit CSceneSenderUnit of the three-dimensional battle field element is triggered. When the directional operation determines a two-dimensional UI element as the target display element, a logic behavior of calling a sender entity unit CUISenderUnit of the two-dimensional UI element is triggered. Other logic behaviors may be derived from the two logic behaviors, for example, a minimap signal sender entity unit CUIMinimapSenderUnit inherited from the CUISenderUnit. When the directional operation determines a minimap (or a middle map) element as the target display element, a logic behavior of calling the minimap signal sender entity unit CUIMinimapSenderUnit is triggered. Step43. Call a behavior tree to perform a logic determination. When a player performs a directional operation (sliding/releasing/tapping) on the target display element, the logic determination needs to be performed with reference to real-time battle situation information to determine the specific target prompt information to be transmitted. The logic determination involves a large quantity of logic branches of “if-else”, and reference may be made to the behavior trees mentioned in the foregoing embodiments. A code implementation of the behavior tree may refer to the open source behavior tree component behaviac. Step44. Broadcast prompt information. In an example, the client transmits a frame synchronization signal to a server, the frame synchronization signal carrying the target prompt information. The server transmits the frame synchronization signal to other clients corresponding to the teammate virtual characters of the master virtual character (the master virtual character itself is optionally included). Alternatively, the server transmits the frame synchronization signal to the clients of all the virtual characters in the battle. The other clients display or play the target prompt information according to the frame synchronization signal.
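Step 42 describes a small class hierarchy: a base sender unit whose virtual hooks are overridden by scene, UI, and minimap sender units, and step 44 broadcasts the chosen prompt in a frame synchronization signal. The C# sketch below illustrates that shape; the class and function names come from the description above, but the hook signatures, parameters, and the broadcast stand-in are assumptions, since the patent does not give them.

```csharp
// Illustrative sketch of the sender-unit hierarchy (FIG.52) and the broadcast step; details assumed.
public abstract class CBaseSenderUnit
{
    public virtual void OnActive() { }                 // unit becomes available for selection
    public virtual void OnDeactive() { }               // unit is no longer available
    public virtual void OnEnter() { }                  // directional operation enters the unit
    public virtual void OnExit() { }                   // directional operation leaves the unit
    public abstract void OnSend(string battleInfo);    // transmit a prompt for this unit

    // Step 44: in a real client this would be a frame synchronization signal carrying the target
    // prompt information, relayed by the server to teammate clients or to all clients in the battle.
    protected static void Broadcast(string prompt) => System.Console.WriteLine($"[broadcast] {prompt}");
}

public class CSceneSenderUnit : CBaseSenderUnit      // three-dimensional battle field elements
{
    public override void OnSend(string battleInfo) => Broadcast($"scene element prompt ({battleInfo})");
}

public class CUISenderUnit : CBaseSenderUnit         // two-dimensional UI elements in the HUD region
{
    public override void OnSend(string battleInfo) => Broadcast($"UI element prompt ({battleInfo})");
}

public class CUIMinimapSenderUnit : CUISenderUnit    // minimap (or middle map) elements
{
    public override void OnSend(string battleInfo) => Broadcast($"minimap element prompt ({battleInfo})");
}
```

The behavior-tree lookup of step 43 would supply the battle-situation argument before OnSend is invoked on the chosen sender unit.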
The technical improvements of this application provide an innovative design direction for a mobile game combat communication system that goes beyond what conventional mobile terminals offer, with great potential for growth, adaptability, and expansion. Every mobile game involving intense battle situations can benefit, to different degrees, from the effects of this system. In addition, the manner of “simple interaction+intelligent determination” enables the user to achieve highly intelligent teammate communication through a very simple man-machine interaction. Apparatus embodiments of this application are described below, where the apparatus embodiments correspond to the foregoing method embodiments. For a part that is not described in detail in the apparatus embodiments, refer to the foregoing method embodiments. FIG.53is a block diagram of an apparatus for transmitting prompt information in a multiplayer online battle program according to an exemplary embodiment of this application. The apparatus may be implemented as an entire terminal or a part of a terminal by using software, hardware, or a combination thereof. The apparatus includes:a display module5220, configured to display a UI of the multiplayer online battle program;an interaction module5240, configured to receive a directional operation on the UI, the directional operation being an operation for activating a prompt information transmission function and pointing to a target display element in the UI, the target display element being one of a plurality of display elements in the UI;a prediction module5260, configured to predict target prompt information according to the target display element and battle information; anda transmission module5280, configured to transmit the target prompt information to clients of teammate virtual characters of a master virtual character or all virtual characters in a battle. In some embodiments, the UI includes an interaction panel region, the interaction panel region including a signal control; and the directional operation is an operation pointing from the signal control to a target display element in the UI. In some embodiments, the signal control includes a button control; the directional operation includes a first touch operation and a second touch operation; the interaction module5240is configured to receive a first touch operation applied on the button control on the UI; the display module5220is configured to display one or more candidate display elements on the UI; and the interaction module5240is configured to receive the second touch operation applied on the target display element on the UI. In some embodiments, the display module5220is configured to display the one or more candidate display elements on the UI in a highlight manner, the highlight manner including at least one of the following display manners: a target color display manner, an overlay masking display manner, a highlight display manner, and a contour display manner. In some embodiments, the signal control includes a button control; the directional operation includes a slide operation; and the interaction module5240is configured to receive a slide operation on the UI, a slide starting point of the slide operation being the button control, and a slide end point of the slide operation being the target display element. In some embodiments, the display module5220is further configured to display an auxiliary line or a movement route pointing from the slide starting point to the slide end point on the UI when the slide operation is received on the button control.
In some embodiments, the signal control includes a map expansion control; the directional operation includes a third touch operation and a fourth touch operation; the interaction module5240is configured to receive the third touch operation applied on the map expansion control on the UI; the display module5220is configured to display a map viewing control of a virtual environment on the UI according to the third touch operation; and the interaction module5240is configured to receive the fourth touch operation applied on the target display element on the map viewing control. In some embodiments, the UI includes a virtual environment image and an interaction panel region, the virtual environment image being an image of a virtual environment observed from a perspective corresponding to a master virtual character, the virtual environment being a battle environment configured for at least two virtual characters to battle; and the display elements include at least one of the following elements:three-dimensional models or two-dimensional elements forming battle function elements in the virtual environment;information display elements in the interaction panel region; andcontrol function elements in the interaction panel region. In some embodiments, the display elements further include at least one of the following elements:three-dimensional models or two-dimensional elements used for decoration in the virtual environment;elements associated with a real world in the virtual environment;elements associated with user accounts in the virtual environment; andelements associated with teams to which the user accounts belong in the virtual environment. In some embodiments, the display elements further include at least one of the following elements:prompt information transmitted by other user accounts. In some embodiments, the prediction module5260is configured to query a behavior tree for the target prompt information according to the target display element and the battle information. the behavior tree including a correspondence between the display elements, the battle information, and the prompt information. In some embodiments, the target display element includes skill class display elements; the battle information includes a skill availability status; and the prediction module5260is configured to determine, when the target display element is the skill class display element and a skill availability status corresponding to the target display element is unavailable, first target prompt information used for indicating that a skill is unavailable; and determine, when the target display element is the skill class display element and a skill availability status corresponding to the target display element is available, second target prompt information used for indicating that a skill is available. In some embodiments, the target display element includes skill class display elements; the battle information includes a skill CD time; and the prediction module5260is configured to determine, when the target display element is the skill class display element and a skill CD time corresponding to the target display element is valid, third target prompt information used for indicating the skill CD time. 
In some embodiments, the target display element includes resource class display elements; the battle information includes vision information of the resource class display element; and the prediction module5260is configured to determine, when the target display element is the resource class display element and the vision information corresponding to the target display element has vision of an enemy, fourth target prompt information used for indicating that the enemy is obtaining the resource. In some embodiments, the target display element includes resource class display elements; the battle information includes refresh information of the resource class display element; and The prediction module5260is configured to determine, when the target display element is the resource class display element and the refresh information is valid, fifth target prompt information used for indicating a remaining refresh time of the resource class display element. In some embodiments, the target display element includes virtual character class display elements; the battle information includes a state of the virtual character class display element; and the prediction module5260is configured to determine, when the target display element is the virtual character class display element and the state of the target display element is a first designated state, sixth target prompt information used for indicating that the virtual character class display element is in the first designated state, the state including at least one of a blood volume state, a magic state, a recall state, a moving state, an attack state, an attacked state, an equipment state, a level state, and an external wearing state. In some embodiments, the target display element includes construction class display elements; the battle information includes a state of the construction class display element; and the prediction module5260is configured to determine, when the target display element is the construction class display element and the state of the target display element is a second designated state, seventh target prompt information used for indicating that the construction class display element is in the second designated state. In some embodiments, the target display element includes a network information display element; the battle information includes network speed information; and the prediction module5260is configured to determine, when the target display element is the network information display element and a network speed of the network speed information is less than a first threshold, eighth target prompt information used for indicating a network speed state. In some embodiments, the target display element includes a hardware performance display element; the battle information includes a hardware working performance parameter; and the prediction module5260is configured to determine, when the target display element is the hardware performance display element and the hardware working performance parameter is less than a second threshold, ninth target prompt information used for indicating a network speed state. In some embodiments, the target display element includes a prompt information display element; the battle information includes message content in the prompt information display element; and the prediction module5260is configured to determine, when the target display element is the prompt information display element, tenth target prompt information used for automatically replying to the message content. 
In some embodiments, the UI further includes a moving control, the moving control being a control configured to control the master virtual character to move in the virtual environment, the moving control and the signal control being located at two marginal regions far away from each other on the UI. In this application, the term “unit” or “module” refers to a computer program or part of the computer program that has a predefined function and works together with other related parts to achieve a predefined goal and may be all or partially implemented by using software, hardware (e.g., processing circuitry and/or memory configured to perform the predefined functions), or a combination thereof. Each unit or module can be implemented using one or more processors (or processors and memory). Likewise, a processor (or processors and memory) can be used to implement one or more modules or units. Moreover, each module or unit can be part of an overall module that includes the functionalities of the module or unit.FIG.54is a block diagram of an apparatus for displaying prompt information in a multiplayer online battle program according to an exemplary embodiment of this application. The apparatus may be implemented as an entire terminal or a part of a terminal by using software, hardware, or a combination thereof. The apparatus may be combined with the apparatus shown inFIG.53into the same apparatus. The apparatus includes:a display module5220, configured to display a UI of the multiplayer online battle program; the UI including a virtual environment image and an interaction panel region;an interaction module5240, configured to receive a directional operation on the UI, the directional operation being an operation for activating a prompt information transmission function and pointing to a target display element in the UI, the target display element being one of a plurality of display elements in the UI; anda prediction module5260, configured to predict target prompt information according to the target display element and battle information;the display module5220being configured to display the target prompt information on the UI. In some embodiments, the UI includes an interaction panel region, the interaction panel region including a signal control; and the directional operation is an operation pointing from the signal control to a target display element in the UI. In some embodiments, the signal control includes a button control; the directional operation includes a first touch operation and a second touch operation; the interaction module5240is configured to receive a first touch operation applied on the button control on the UI; the display module5220is configured to display one or more candidate display elements on the UI; and the interaction module5240is configured to receive the second touch operation applied on the target display element in the candidate display elements on the UI. In some embodiments, the display module5220is configured to display the one or more candidate display elements on the UI in a highlight manner, the highlight manner including at least one of the following display manners: a target color display manner, an overlay masking display manner, a highlight display manner, and a contour display manner. 
In some embodiments, the signal control includes a button control; the directional operation includes a slide operation; and the interaction module5240is configured to receive a slide operation on the UI, a slide starting point of the slide operation being the button control, and a slide end point of the slide operation being the target display element. In some embodiments, the display module5220is further configured to display an auxiliary line or a movement route pointing from the slide starting point to the slide end point on the UI when the slide operation is received on the button control. In some embodiments, the signal control includes a map expansion control; the directional operation includes a third touch operation and a fourth touch operation; the interaction module5240is configured to receive the third touch operation applied on the map expansion control on the UI; the display module5220is configured to display a map viewing control of a virtual environment on the UI according to the third touch operation; and the interaction module5240is configured to receive the fourth touch operation applied on the target display element on the map viewing control. In some embodiments, the UI includes a virtual environment image and an interaction panel region, the virtual environment image being an image of a virtual environment observed from a perspective corresponding to a master virtual character, the virtual environment being a battle environment configured for at least two virtual characters to battle; and the display elements include at least one of the following elements:three-dimensional models or two-dimensional elements forming battle function elements in the virtual environment;information display elements in the interaction panel region; andcontrol function elements in the interaction panel region. In some embodiments, the display elements further include at least one of the following elements:three-dimensional models or two-dimensional elements used for decoration in the virtual environment;elements associated with a real world in the virtual environment;elements associated with user accounts in the virtual environment; andelements associated with teams to which the user accounts belong in the virtual environment. In some embodiments, the display elements further include at least one of the following elements:prompt information transmitted by other user accounts. In some embodiments, the prediction module5260is configured to query a behavior tree for the target prompt information according to the target display element and the battle information. the behavior tree including a correspondence between the display elements, the battle information, and the prompt information. In some embodiments, the target display element includes skill class display elements; the battle information includes a skill availability status; and the prediction module5260is configured to determine, when the target display element is the skill class display element and a skill availability status corresponding to the target display element is unavailable, first target prompt information used for indicating that a skill is unavailable; and determine, when the target display element is the skill class display element and a skill availability status corresponding to the target display element is available, second target prompt information used for indicating that a skill is available. 
In some embodiments, the target display element includes skill class display elements; the battle information includes a skill CD time; and the prediction module5260is configured to determine, when the target display element is the skill class display element and a skill CD time corresponding to the target display element is valid, third target prompt information used for indicating the skill CD time. In some embodiments, the target display element includes resource class display elements; the battle information includes vision information of the resource class display element; and the prediction module5260is configured to determine, when the target display element is the resource class display element and the vision information corresponding to the target display element has vision of an enemy, fourth target prompt information used for indicating that the enemy is obtaining the resource. In some embodiments, the target display element includes resource class display elements; the battle information includes refresh information of the resource class display element; and The prediction module5260is configured to determine, when the target display element is the resource class display element and the refresh information is valid, fifth target prompt information used for indicating a remaining refresh time of the resource class display element. In some embodiments, the target display element includes virtual character class display elements; the battle information includes a state of the virtual character class display element; and the prediction module5260is configured to determine, when the target display element is the virtual character class display element and the state of the target display element is a first designated state, sixth target prompt information used for indicating that the virtual character class display element is in the first designated state, the state including at least one of a blood volume state, a magic state, a recall state, a moving state, an attack state, an attacked state, an equipment state, a level state, and an external wearing state. In some embodiments, the target display element includes construction class display elements; the battle information includes a state of the construction class display element; and the prediction module5260is configured to determine, when the target display element is the construction class display element and the state of the target display element is a second designated state, seventh target prompt information used for indicating that the construction class display element is in the second designated state. In some embodiments, the target display element includes a network information display element; the battle information includes network speed information; and the prediction module5260is configured to determine, when the target display element is the network information display element and a network speed of the network speed information is less than a first threshold, eighth target prompt information used for indicating a network speed state. In some embodiments, the target display element includes a hardware performance display element; the battle information includes a hardware working performance parameter; and the prediction module5260is configured to determine, when the target display element is the hardware performance display element and the hardware working performance parameter is less than a second threshold, ninth target prompt information used for indicating a network speed state. 
In some embodiments, the target display element includes a prompt information display element; the battle information includes message content in the prompt information display element; and the prediction module5260is configured to determine, when the target display element is the prompt information display element, tenth target prompt information used for automatically replying to the message content. In some embodiments, the UI further includes a moving control, the moving control being a control configured to control the master virtual character to move in the virtual environment, the moving control and the signal control being located at two marginal regions far away from each other on the UI. This application further provides a terminal, including a processor and a memory, the memory storing at least one instruction, the at least one instruction being executed by the processor to implement the method for transmitting prompt information in a multiplayer online battle program, and/or, the method for displaying prompt information in a multiplayer online battle program provided in the foregoing method embodiments. The terminal may be a terminal provided inFIG.55below. FIG.55shows a structural block diagram of a terminal5500according to an exemplary embodiment of this application. The terminal5500may be a smartphone, a tablet computer, an MP3 player, an MP4 player, a notebook computer, or a desktop computer. The terminal5500may also be referred to as other names such as user equipment, a portable terminal, a laptop terminal, or a desktop terminal. Generally, the terminal5500includes a processor5501and a memory5502. The processor5501may include one or more processing cores, for example, a 4-core processor or an 8-core processor. The processor5501may be implemented in at least one hardware form of a digital signal processor (DSP), a field-programmable gate array (FPGA), and a programmable logic array (PLA). The processor5501may also include a main processor and a coprocessor. The main processor is a processor configured to process data in an awake state, and is also referred to as a central processing unit (CPU). The coprocessor is a low power consumption processor configured to process the data in a standby state. In some embodiments, the processor5501may be integrated with a graphics processing unit (GPU). The GPU is configured to render and draw content that needs to be displayed on a display screen. In some embodiments, the processor5501may further include an artificial intelligence (AI) processor. The AI processor is configured to process computing operations related to machine learning. The memory5502may include one or more computer-readable storage media. The computer-readable storage medium may be non-transient. The memory5502may further include a high-speed random access memory and a nonvolatile memory, for example, one or more disk storage devices or flash storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory5502is configured to store at least one instruction, the at least one instruction being configured to be executed by the processor5501to implement the method for transmitting prompt information in a multiplayer online battle program, and/or, the method for displaying prompt information in a multiplayer online battle program provided in the method embodiments of this application. In some embodiments, the terminal5500may optionally include: a peripheral interface5503and at least one peripheral. 
The processor5501, the memory5502, and the peripheral interface5503may be connected by using a bus or a signal cable. Each peripheral may be connected to the peripheral interface5503by using a bus, a signal cable, or a circuit board. Specifically, the peripheral device includes: at least one of a radio frequency (RF) circuit5504, a display screen5505, a camera component5506, an audio circuit5507, a positioning component5508, and a power supply5509. The peripheral interface5503may be configured to connect the at least one peripheral related to input/output (I/O) to the processor5501and the memory5502. In some embodiments, the processor5501, the memory5502and the peripheral device interface5503are integrated on a same chip or circuit board. In some other embodiments, any one or two of the processor5501, the memory5502, and the peripheral device interface5503may be implemented on a single chip or circuit board. This is not limited in this embodiment. The RF circuit5504is configured to receive and transmit an RF signal, also referred to as an electromagnetic signal. The RF circuit5504communicates with a communication network and other communication devices through the electromagnetic signal. The RF circuit5504converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. In some embodiments, the RF circuit5504includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chip set, a subscriber identity module card, and the like. The RF circuit5504may communicate with another terminal by using at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: a world wide web, a metropolitan area network, an intranet, generations of mobile communication networks (2G, 3G, 4G, and 5G), a wireless local area network, and/or a Wi-Fi network. In some embodiments, the RF 5504 may further include a circuit related to NFC, which is not limited in this application. The display screen5505is configured to display a user interface (UI). The UI may include a graph, text, an icon, a video, and any combination thereof. When the display screen5505is a touch display screen, the display screen5505is also capable of acquiring a touch signal on or above a surface of the display screen5505. The touch signal may be inputted to the processor5501as a control signal for processing. In this case, the display screen5505may be further configured to provide a virtual button and/or a virtual keyboard that are/is also referred to as a soft button and/or a soft keyboard. In some embodiments, there may be one display screen5505, disposed on a front panel of the terminal5500. In some other embodiments, there may be at least two display screens5505, disposed on different surfaces of the terminal5500respectively or in a folded design. In still other embodiments, the display screen5505may be a flexible display screen, disposed on a curved surface or a folded surface of the terminal5500. Even, the display screen5505may be further set in a non-rectangular irregular pattern, namely, a special-shaped screen. The display screen5505may be prepared by using materials such as a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. The camera component5506is configured to acquire images or videos. In some embodiments, the camera component5506includes a front-facing camera and a rear-facing camera. 
Generally, the front-facing camera is disposed on the front panel of the terminal, and the rear-facing camera is disposed on a back surface of the terminal. In some embodiments, there are at least two rear cameras, which are respectively any of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, to achieve background blur through fusion of the main camera and the depth-of-field camera, panoramic photographing and virtual reality (VR) photographing through fusion of the main camera and the wide-angle camera, or other fusion photographing functions. In some embodiments, the camera component5506may further include a flash. The flash may be a single color temperature flash, or may be a double color temperature flash. The double color temperature flash refers to a combination of a warm light flash and a cold light flash, and may be used for light compensation under different color temperatures. The audio circuit5507may include a microphone and a speaker. The microphone is configured to acquire sound waves of a user and an environment, and convert the sound waves into an electrical signal to input to the processor5501for processing, or input to the radio frequency circuit5504for implementing voice communication. For the purpose of stereo sound acquisition or noise reduction, there may be a plurality of microphones, respectively disposed at different parts of the terminal5500. The microphone may further be an array microphone or an omni-directional acquisition type microphone. The speaker is configured to convert electrical signals from the processor5501or the RF circuit5504into sound waves. The speaker may be a conventional film speaker, or may be a piezoelectric ceramic speaker. When the speaker is the piezoelectric ceramic speaker, the speaker not only can convert an electric signal into acoustic waves audible to a human being, but also can convert an electric signal into acoustic waves inaudible to a human being, for ranging and other purposes. In some embodiments, the audio circuit5507may also include an earphone jack. The positioning component5508is configured to determine a current geographic location of the terminal5500, to implement a navigation or a location-based service (LBS). The positioning component5508may be a positioning component based on the global positioning system (GPS) of the United States, the BeiDou System of China, and the GALILEO System of the European Union. The power supply5509is configured to supply power to components in the terminal5500. The power supply5509may be an alternating-current power supply, a direct-current power supply, a disposable battery, or a rechargeable battery. When the power supply5509includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired circuit, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may be further configured to support a fast charging technology. In some embodiments, the terminal5500may further include one or more sensors5510. The one or more sensors5510include, but are not limited to: an acceleration sensor5511, a gyroscope sensor5512, a pressure sensor5513, a fingerprint sensor5514, an optical sensor5515, and a proximity sensor5516. The acceleration sensor5511may detect a magnitude of acceleration on three coordinate axes of a coordinate system established by the terminal5500.
For example, the acceleration sensor5511may be configured to detect components of gravity acceleration on the three coordinate axes. The processor5501may control, according to a gravity acceleration signal acquired by the acceleration sensor5511, the touch display screen5505to display the UI in a landscape view or a portrait view. The acceleration sensor5511may be further configured to acquire motion data of a game or a user. The gyroscope sensor5512may detect a body direction and a rotation angle of the terminal5500. The gyroscope sensor5512may cooperate with the acceleration sensor5511to acquire a 3D action by the user on the terminal5500. The processor5501may implement the following functions according to data acquired by the gyroscope sensor5512: motion sensing (for example, the UI is changed according to a tilt operation of a user), image stabilization during shooting, game control, and inertial navigation. The pressure sensor5513may be disposed at a side frame of the terminal5500and/or a lower layer of the touch display screen5505. When the pressure sensor5513is disposed at the side frame of the terminal5500, a holding signal of the user on the terminal5500may be detected. The processor5501performs left and right hand recognition or a quick operation according to the holding signal acquired by the pressure sensor5513. When the pressure sensor5513is disposed on the lower layer of the touch display screen5505, the processor5501controls, according to a pressure operation of the user on the touch display screen5505, an operable control on the UI. The operable control includes at least one of a button control, a scroll-bar control, an icon control, and a menu control. The fingerprint sensor5514is configured to acquire a user's fingerprint, and the processor5501identifies a user's identity according to the fingerprint acquired by the fingerprint sensor5514, or the fingerprint sensor5514identifies a user's identity according to the acquired fingerprint. When identifying that the user's identity is a trusted identity, the processor5501authorizes the user to perform related sensitive operations. The sensitive operations include: unlocking a screen, viewing encrypted information, downloading software, paying, changing a setting, and the like. The fingerprint sensor5514may be disposed on a front surface, a back surface, or a side surface of the terminal5500. When a physical button or a vendor logo is disposed on the terminal5500, the fingerprint sensor5514may be integrated with the physical button or the vendor logo. The optical sensor5515is configured to acquire ambient light intensity. In an embodiment, the processor5501may control the display brightness of the touch display screen5505according to the ambient light intensity acquired by the optical sensor5515. Specifically, when the ambient light intensity is relatively high, the display brightness of the touch display screen5505is increased. When the ambient light intensity is relatively low, the display brightness of the touch display screen5505is decreased. In another embodiment, the processor5501may further dynamically adjust a camera parameter of the camera component5506according to the ambient light intensity acquired by the optical sensor5515. The proximity sensor5516, also referred to as a distance sensor, is usually disposed on the front panel of the terminal5500. The proximity sensor5516is configured to acquire a distance between the user and the front surface of the terminal5500.
In an embodiment, when the proximity sensor5516detects that the distance between the user and the front surface of the terminal5500gradually decreases, the touch display screen5505is controlled by the processor5501to switch from a screen-on state to a screen-off state. When the proximity sensor5516detects that the distance between the user and the front surface of the terminal5500gradually increases, the touch display screen5505is controlled by the processor5501to switch from the screen-off state to the screen-on state. A person skilled in the art may understand that the structure shown inFIG.55does not constitute a limitation to the terminal5500, and the terminal may include more or fewer components than those shown in the figure, or some components may be combined, or a different component arrangement may be used. The memory further includes one or more programs. The one or more programs are stored in the memory. The one or more programs include a program for performing the method for transmitting prompt information in a multiplayer online battle program, and/or the method for displaying prompt information in a multiplayer online battle program provided in the embodiments of this application. This application provides a computer-readable storage medium, storing at least one instruction, the at least one instruction being executed by a processor to implement the method for transmitting prompt information in a multiplayer online battle program, and/or the method for displaying prompt information in a multiplayer online battle program provided in the foregoing method embodiments. The sequence numbers of the foregoing embodiments of this application are merely for description purposes and do not imply any preference among the embodiments. A person of ordinary skill in the art may understand that all or some of the steps of the foregoing embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like. The foregoing descriptions are merely optional embodiments of this application, but are not intended to limit this application. Any modification, equivalent replacement, or improvement made within the spirit and principle of this application shall fall within the protection scope of this application.
141,573
11857879
DETAILED DESCRIPTION The example methods and systems describe a visual identifier system for using a visual identifier to launch a computer application. The visual identifier system accesses a visual identifier which includes encoded data representing a computer application. The visual identifier is configurable by a user. For example, the visual identifier may be a Quick Response (QR) code or a bar code. In some examples, the visual identifier is any suitable machine-readable optical label that encodes data. For example, the visual identifier may contain data that points to a computer application. The visual identifier system may associate multiple visual identifiers with the same computer application. For example, the visual identifier system may access a second visual identifier which includes encoded data that represents the computer application. The visual identifier system performs a visual search of the visual identifier. For example, the visual identifier system may scan the visual identifier using an image capture device. In some examples, the visual identifier system scans the visual identifier on a computing device using image processing software installed on the computing device. In some examples, the visual identifier system verifies the computing device. In some examples, the visual identifier system receives an indication comprising a successful verification. In response to receiving the successful indication, the visual identifier system displays the application menu. In some examples, the visual identifier system receives an indication comprising a failed verification. The visual identifier system may receive an indication of a failed verification if the computing device is blacklisted, if the location of the computing device is blacklisted, or if any other property of the computing device is deemed unable to support the computer application. In response to receiving an indication of a failed verification, the visual identifier system presents a notification on the graphical user interface (GUI) of the computing device. The notification may appear as a pop-up window (e.g., a pop-up window stating, “Sorry! It looks like this experience doesn't work on your device”). It is understood that any form of a visual or audible notification may be used. In response to performing the visual search of the visual identifier, the visual identifier system displays an application menu within the GUI of a computing device. The application menu may include user interface elements (e.g., buttons, text fields, checkboxes, sliders, icons, tags, message boxes, pagination, etc.). In some examples, selection of a “Play” button causes the visual identifier system to run the application. Selection of the “Play” button may, in some examples, cause the visual identifier system to navigate to a predefined user window within the application (e.g., second user interface window of the computer application). In another example, selection of a “Share” button causes the visual identifier system to share the application as an ephemeral message within a messaging application. In some examples, selection of the “Share” button causes the visual identifier system to share the visual identifier as an ephemeral message within the messaging application. In some examples, selection of a “Cancel” button causes the visual identifier system to close the application menu and display a different user interface system (e.g., modify the graphical user interface). 
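A minimal sketch of the scan-verify-menu flow described above may make the sequence concrete. It is written in Python purely for illustration; the class and function names (VisualIdentifier, verify_device, handle_scan) and the hard-coded “Play” selection are assumptions, not part of the described system.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Set


@dataclass
class VisualIdentifier:
    encoded_data: str  # e.g., the contents decoded from a QR code


@dataclass
class VerificationResult:
    success: bool
    reason: Optional[str] = None


def verify_device(device_id: str, blacklist: Set[str]) -> VerificationResult:
    """Stand-in for device verification: only a blacklist check."""
    if device_id in blacklist:
        return VerificationResult(False, "device is blacklisted")
    return VerificationResult(True)


def handle_scan(identifier: VisualIdentifier,
                device_id: str,
                blacklist: Set[str],
                launch: Callable[[str], None],
                share: Callable[[str], None]) -> str:
    """Illustrative flow: verify the device, then act on a menu selection."""
    result = verify_device(device_id, blacklist)
    if not result.success:
        return "Sorry! It looks like this experience doesn't work on your device"
    # A real client would render an application menu; here the user is
    # assumed to have pressed "Play".
    selection = "Play"
    if selection == "Play":
        launch(identifier.encoded_data)
    elif selection == "Share":
        share(identifier.encoded_data)
    return f"selected {selection}"


if __name__ == "__main__":
    qr = VisualIdentifier(encoded_data="app://game/1234")
    print(handle_scan(qr, "device-1", blacklist=set(),
                      launch=lambda app: print("launching", app),
                      share=lambda app: print("sharing", app)))
```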
Further details regarding the visual identifier system are described below. The computer application described may include computer games, media overlays, or any suitable computer application. Networked Computing Environment FIG.1is a block diagram showing an example messaging system100for exchanging data (e.g., messages and associated content) over a network. The messaging system100includes multiple instances of a client device102, each of which hosts a number of applications, including a messaging client104. Each messaging client104is communicatively coupled to other instances of the messaging client104and a messaging server system108via a network106(e.g., the Internet). A messaging client104is able to communicate and exchange data with another messaging client104and with the messaging server system108via the network106. The data exchanged between messaging client104, and between a messaging client104and the messaging server system108, includes functions (e.g., commands to invoke functions) as well as payload data (e.g., text, audio, video or other multimedia data). The messaging server system108provides server-side functionality via the network106to a particular messaging client104. While certain functions of the messaging system100are described herein as being performed by either a messaging client104or by the messaging server system108, the location of certain functionality either within the messaging client104or the messaging server system108may be a design choice. For example, it may be technically preferable to initially deploy certain technology and functionality within the messaging server system108but to later migrate this technology and functionality to the messaging client104where a client device102has sufficient processing capacity. The messaging server system108supports various services and operations that are provided to the messaging client104. Such operations include transmitting data to, receiving data from, and processing data generated by the messaging client104. This data may include message content, client device information, geolocation information, media augmentation and overlays, message content persistence conditions, social network information, and live event information, as examples. Data exchanges within the messaging system100are invoked and controlled through functions available via user interfaces (UIs) of the messaging client104. Turning now specifically to the messaging server system108, an Application Program Interface (API) server110is coupled to, and provides a programmatic interface to, application servers212. The application servers212are communicatively coupled to a database server118, which facilitates access to a database120that stores data associated with messages processed by the application servers212. Similarly, a web server126is coupled to the application servers212, and provides web-based interfaces to the application servers212. To this end, the web server126processes incoming network requests over the Hypertext Transfer Protocol (HTTP) and several other related protocols. The Application Program Interface (API) server110receives and transmits message data (e.g., commands and message payloads) between the client device102and the application servers212. Specifically, the Application Program Interface (API) server110provides a set of interfaces (e.g., routines and protocols) that can be called or queried by the messaging client104in order to invoke functionality of the application servers212. 
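The programmatic interface between the messaging client and the application servers can be pictured as a simple dispatch table. The sketch below is an illustrative Python stand-in, not the actual server implementation; the ApiServer class and the send_message handler are invented for the example.

```python
from typing import Any, Callable, Dict


class ApiServer:
    """Minimal programmatic-interface sketch: a client calls named functions,
    and the API server dispatches them to handlers standing in for the
    application servers."""

    def __init__(self) -> None:
        self._routes: Dict[str, Callable[..., Any]] = {}

    def register(self, name: str, handler: Callable[..., Any]) -> None:
        self._routes[name] = handler

    def invoke(self, name: str, **payload: Any) -> Any:
        if name not in self._routes:
            raise KeyError(f"unknown function: {name}")
        return self._routes[name](**payload)


if __name__ == "__main__":
    api = ApiServer()
    api.register("send_message", lambda sender, recipient, text:
                 {"status": "queued", "from": sender, "to": recipient, "text": text})
    print(api.invoke("send_message", sender="alice", recipient="bob", text="hi"))
```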
The Application Program Interface (API) server110exposes various functions supported by the application servers212, including account registration, login functionality, the sending of messages, via the application servers212, from a particular messaging client104to another messaging client104, the sending of media files (e.g., images or video) from a messaging client104to a messaging server114, and for possible access by another messaging client104, the settings of a collection of media data (e.g., story), the retrieval of a list of friends of a user of a client device102, the retrieval of such collections, the retrieval of messages and content, the addition and deletion of entities (e.g., friends) to an entity graph (e.g., a social graph), the location of friends within a social graph, and opening an application event (e.g., relating to the messaging client104). The application servers212host a number of server applications and subsystems, including for example a messaging server114, an image processing server116, and a social network server122. The messaging server114implements a number of message processing technologies and functions, particularly related to the aggregation and other processing of content (e.g., textual and multimedia content) included in messages received from multiple instances of the messaging client104. As will be described in further detail, the text and media content from multiple sources may be aggregated into collections of content (e.g., called stories or galleries). These collections are then made available to the messaging client104. Other processor and memory intensive processing of data may also be performed server-side by the messaging server114, in view of the hardware requirements for such processing. The application servers212also include an image processing server116that is dedicated to performing various image processing operations, typically with respect to images or video within the payload of a message sent from or received at the messaging server114. The social network server122supports various social networking functions and services and makes these functions and services available to the messaging server114. To this end, the social network server122maintains and accesses an entity graph306(as shown inFIG.3) within the database120. Examples of functions and services supported by the social network server122include the identification of other users of the messaging system100with which a particular user has relationships or is “following,” and also the identification of other entities and interests of a particular user. A visual identifier system124uses visual identifiers to launch a computer application. For example, the visual identifier system performs a visual search on a visual identifier to launch a computer application. System Architecture FIG.2is a block diagram illustrating further details regarding the messaging system100, according to some examples. Specifically, the messaging system100is shown to comprise the messaging client104and the application servers212. The messaging system100embodies a number of subsystems, which are supported on the client-side by the messaging client104and on the server-side by the application servers212. These subsystems include, for example, an ephemeral timer system202, a collection management system204, an augmentation system206, a map system208, a game system210, and a visual identifier system124. 
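The relationship queries attributed to the social network server above (who a user follows, and who follows a user) can be sketched with a small in-memory graph. The EntityGraph class below is an assumption made for illustration; a production entity graph306would live in the database120.

```python
from collections import defaultdict
from typing import Dict, Set


class EntityGraph:
    """Toy relationship store: directed 'following' edges between users."""

    def __init__(self) -> None:
        self._following: Dict[str, Set[str]] = defaultdict(set)

    def follow(self, follower: str, followee: str) -> None:
        self._following[follower].add(followee)

    def following(self, user: str) -> Set[str]:
        """Users this user follows."""
        return set(self._following[user])

    def followers(self, user: str) -> Set[str]:
        """Users who follow this user."""
        return {f for f, out in self._following.items() if user in out}


if __name__ == "__main__":
    g = EntityGraph()
    g.follow("alice", "bob")
    g.follow("carol", "bob")
    print(g.followers("bob"))   # {'alice', 'carol'} (set order may vary)
```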
The ephemeral timer system202is responsible for enforcing the temporary or time-limited access to content by the messaging client104and the messaging server114. The ephemeral timer system202incorporates a number of timers that, based on duration and display parameters associated with a message, or collection of messages (e.g., a story), selectively enable access (e.g., for presentation and display) to messages and associated content via the messaging client104. Further details regarding the operation of the ephemeral timer system202are provided below. The collection management system204is responsible for managing sets or collections of media (e.g., collections of text, image video, and audio data). A collection of content (e.g., messages, including images, video, text, and audio) may be organized into an “event gallery” or an “event story.” Such a collection may be made available for a specified time period, such as the duration of an event to which the content relates. For example, content relating to a music concert may be made available as a “story” for the duration of that music concert. The collection management system204may also be responsible for publishing an icon that provides notification of the existence of a particular collection to the user interface of the messaging client104. The collection management system204furthermore includes a curation interface214that allows a collection manager to manage and curate a particular collection of content. For example, the curation interface214enables an event organizer to curate a collection of content relating to a specific event (e.g., delete inappropriate content or redundant messages). Additionally, the collection management system204employs machine vision (or image recognition technology) and content rules to automatically curate a content collection. In certain examples, compensation may be paid to a user for the inclusion of user-generated content into a collection. In such cases, the collection management system204operates to automatically make payments to such users for the use of their content. The augmentation system206provides various functions that enable a user to augment (e.g., annotate or otherwise modify or edit) media content associated with a message. For example, the augmentation system206provides functions related to the generation and publishing of media overlays for messages processed by the messaging system100. The augmentation system206operatively supplies a media overlay or augmentation (e.g., an image filter) to the messaging client104based on a geolocation of the client device102. In another example, the augmentation system206operatively supplies a media overlay to the messaging client104based on other information, such as social network information of the user of the client device102. A media overlay may include audio and visual content and visual effects. Examples of audio and visual content include pictures, texts, logos, animations, and sound effects. An example of a visual effect includes color overlaying. The audio and visual content or the visual effects can be applied to a media content item (e.g., a photo) at the client device102. For example, the media overlay may include text or image that can be overlaid on top of a photograph taken by the client device102. In another example, the media overlay includes an identification of a location overlay (e.g., Venice beach), a name of a live event, or a name of a merchant overlay (e.g., Beach Coffee House). 
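The time-bounded availability of an event collection described above reduces to a single window check, as in the hedged sketch below; the function name and the concert timestamps are illustrative only.

```python
from datetime import datetime, timedelta


def story_available(now: datetime, event_start: datetime, event_end: datetime) -> bool:
    """An 'event story' in this sketch is visible only while its event runs."""
    return event_start <= now <= event_end


if __name__ == "__main__":
    start = datetime(2024, 7, 1, 19, 0)
    end = start + timedelta(hours=4)  # e.g., the duration of a music concert
    print(story_available(datetime(2024, 7, 1, 21, 30), start, end))  # True
    print(story_available(datetime(2024, 7, 2, 9, 0), start, end))    # False
```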
In another example, the augmentation system206uses the geolocation of the client device102to identify a media overlay that includes the name of a merchant at the geolocation of the client device102. The media overlay may include other indicia associated with the merchant. The media overlays may be stored in the database120and accessed through the database server118. In some examples, the augmentation system206provides a user-based publication platform that enables users to select a geolocation on a map and upload content associated with the selected geolocation. The user may also specify circumstances under which a particular media overlay should be offered to other users. The augmentation system206generates a media overlay that includes the uploaded content and associates the uploaded content with the selected geolocation. In other examples, the augmentation system206provides a merchant-based publication platform that enables merchants to select a particular media overlay associated with a geolocation via a bidding process. For example, the augmentation system206associates the media overlay of the highest bidding merchant with a corresponding geolocation for a predefined amount of time. The map system208provides various geographic location functions, and supports the presentation of map-based media content and messages by the messaging client104. For example, the map system208enables the display of user icons or avatars (e.g., stored in profile data308) on a map to indicate a current or past location of “friends” of a user, as well as media content (e.g., collections of messages including photographs and videos) generated by such friends, within the context of a map. For example, a message posted by a user to the messaging system100from a specific geographic location may be displayed within the context of a map at that particular location to “friends” of a specific user on a map interface of the messaging client104. A user can furthermore share his or her location and status information (e.g., using an appropriate status avatar) with other users of the messaging system100via the messaging client104, with this location and status information being similarly displayed within the context of a map interface of the messaging client104to selected users. The game system210provides various gaming functions within the context of the messaging client104. The messaging client104provides a game interface providing a list of available games that can be launched by a user within the context of the messaging client104, and played with other users of the messaging system100. The messaging system100further enables a particular user to invite other users to participate in the play of a specific game, by issuing invitations to such other users from the messaging client104. The messaging client104also supports both the voice and text messaging (e.g., chats) within the context of gameplay, provides a leaderboard for the games, and also supports the provision of in-game rewards (e.g., coins and items). The visual identifier system124uses visual identifiers to launch a computer application. For example, the visual identifier system performs a visual search on a visual identifier to launch a computer application. In some examples the visual identifier system124operates within the context of the messaging client104. In some examples, the visual identifier system124may be supported by the application servers112. 
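The merchant bidding behavior described above, in which the highest bidder's overlay is associated with a geolocation, can be illustrated with a short selection routine. The OverlayBid fields and the example merchants are assumptions for the sketch, not data from the described system.

```python
from dataclasses import dataclass
from typing import Iterable, Optional


@dataclass
class OverlayBid:
    merchant: str
    geolocation: str
    amount: float
    overlay_id: str


def winning_overlay(bids: Iterable[OverlayBid], geolocation: str) -> Optional[OverlayBid]:
    """Pick the highest-bidding merchant's overlay for a given geolocation."""
    local = [b for b in bids if b.geolocation == geolocation]
    return max(local, key=lambda b: b.amount, default=None)


if __name__ == "__main__":
    bids = [
        OverlayBid("Beach Coffee House", "venice-beach", 120.0, "overlay-17"),
        OverlayBid("Boardwalk Tacos", "venice-beach", 95.0, "overlay-42"),
    ]
    winner = winning_overlay(bids, "venice-beach")
    print(winner.merchant if winner else "no overlay")  # Beach Coffee House
```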
Data Architecture FIG.3is a schematic diagram illustrating data structures300, which may be stored in the database120of the messaging server system108, according to certain examples. While the content of the database120is shown to comprise a number of tables, it will be appreciated that the data could be stored in other types of data structures (e.g., as an object-oriented database). The database120includes message data stored within a message table302. This message data includes, for any particular one message, at least message sender data, message recipient (or receiver) data, and a payload. Further details regarding information that may be included in a message, and included within the message data stored in the message table302, are described below with reference toFIG.4. An entity table304stores entity data, and is linked (e.g., referentially) to an entity graph306and profile data308. Entities for which records are maintained within the entity table304may include individuals, corporate entities, organizations, objects, places, events, and so forth. Regardless of entity type, any entity regarding which the messaging server system108stores data may be a recognized entity. Each entity is provided with a unique identifier, as well as an entity type identifier (not shown). The entity graph306stores information regarding relationships and associations between entities. Such relationships may be social, professional (e.g., working at a common corporation or organization), interest-based, or activity-based, merely for example. The profile data308stores multiple types of profile data about a particular entity. The profile data308may be selectively used and presented to other users of the messaging system100, based on privacy settings specified by a particular entity. Where the entity is an individual, the profile data308includes, for example, a username, telephone number, address, settings (e.g., notification and privacy settings), as well as a user-selected avatar representation (or collection of such avatar representations). A particular user may then selectively include one or more of these avatar representations within the content of messages communicated via the messaging system100, and on map interfaces displayed by messaging clients104to other users. The collection of avatar representations may include “status avatars,” which present a graphical representation of a status or activity that the user may select to communicate at a particular time. Where the entity is a group, the profile data308for the group may similarly include one or more avatar representations associated with the group, in addition to the group name, members, and various settings (e.g., notifications) for the relevant group. The database120also stores augmentation data, such as overlays or filters, in an augmentation table310. The augmentation data is associated with and applied to videos (for which data is stored in a video table314) and images (for which data is stored in an image table318). Filters, in one example, are overlays that are displayed as overlaid on an image or video during presentation to a recipient user. Filters may be of various types, including user-selected filters from a set of filters presented to a sending user by the messaging client104when the sending user is composing a message. Other types of filters include geolocation filters (also known as geo-filters), which may be presented to a sending user based on geographic location. 
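The table-oriented data architecture described above can be approximated with plain records, as in the sketch below. The field sets are deliberately reduced and the names are assumptions; the real message table302, entity table304, and profile data308hold considerably more state.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class MessageRecord:
    message_id: str
    sender: str
    recipient: str
    payload: str  # text, or a key into the image/video tables


@dataclass
class EntityRecord:
    entity_id: str
    entity_type: str               # "individual", "organization", ...
    username: Optional[str] = None
    avatar_id: Optional[str] = None  # stand-in for profile data


@dataclass
class Database:
    message_table: List[MessageRecord] = field(default_factory=list)
    entity_table: List[EntityRecord] = field(default_factory=list)


if __name__ == "__main__":
    db = Database()
    db.entity_table.append(EntityRecord("e1", "individual", username="alice"))
    db.message_table.append(MessageRecord("m1", "e1", "e2", "hello"))
    print(len(db.message_table), len(db.entity_table))
```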
For example, geolocation filters specific to a neighborhood or special location may be presented within a user interface by the messaging client104, based on geolocation information determined by a Global Positioning System (GPS) unit of the client device102. Another type of filter is a data filter, which may be selectively presented to a sending user by the messaging client104, based on other inputs or information gathered by the client device102during the message creation process. Examples of data filters include current temperature at a specific location, a current speed at which a sending user is traveling, battery life for a client device102, or the current time. Other augmentation data that may be stored within the image table318includes augmented reality content items (e.g., corresponding to applying Lenses or augmented reality experiences). An augmented reality content item may be a real-time special effect and sound that may be added to an image or a video. As described above, augmentation data includes augmented reality content items, overlays, image transformations, AR images, and similar terms, all of which refer to modifications that may be applied to image data (e.g., videos or images). This includes real-time modifications, which modify an image as it is captured using device sensors (e.g., one or multiple cameras) of a client device102and then displayed on a screen of the client device102with the modifications. This also includes modifications to stored content, such as video clips in a gallery that may be modified. For example, in a client device102with access to multiple augmented reality content items, a user can use a single video clip with multiple augmented reality content items to see how the different augmented reality content items will modify the stored clip. For example, multiple augmented reality content items that apply different pseudorandom movement models can be applied to the same content by selecting different augmented reality content items for the content. Similarly, real-time video capture may be used with an illustrated modification to show how video images currently being captured by sensors of a client device102would modify the captured data. Such data may simply be displayed on the screen and not stored in memory, or the content captured by the device sensors may be recorded and stored in memory with or without the modifications (or both). In some systems, a preview feature can show how different augmented reality content items will look within different windows in a display at the same time. This can, for example, enable multiple windows with different pseudorandom animations to be viewed on a display at the same time. Data and various systems using augmented reality content items or other such transform systems to modify content using this data can thus involve detection of objects (e.g., faces, hands, bodies, cats, dogs, surfaces, objects, etc.), tracking of such objects as they leave, enter, and move around the field of view in video frames, and the modification or transformation of such objects as they are tracked. In various examples, different methods for achieving such transformations may be used. Some examples may involve generating a three-dimensional mesh model of the object or objects, and using transformations and animated textures of the model within the video to achieve the transformation. 
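The idea of previewing several augmented reality content items with different pseudorandom movement models against the same stored clip can be illustrated with seeded jitter tracks. The amplitude, the seeds, and the function name below are assumptions for the example, not the described rendering pipeline.

```python
import random
from typing import List


def pseudorandom_offsets(seed: int, frame_count: int, amplitude: float = 5.0) -> List[float]:
    """One 'movement model' per seed: a deterministic per-frame jitter track
    that could be applied to the same stored clip for previewing."""
    rng = random.Random(seed)
    return [rng.uniform(-amplitude, amplitude) for _ in range(frame_count)]


if __name__ == "__main__":
    clip_frames = 5
    previews = {f"ar-item-{seed}": pseudorandom_offsets(seed, clip_frames)
                for seed in (1, 2, 3)}
    for name, track in previews.items():
        print(name, [round(x, 1) for x in track])
```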
In other examples, tracking of points on an object may be used to place an image or texture (which may be two dimensional or three dimensional) at the tracked position. In still further examples, neural network analysis of video frames may be used to place images, models, or textures in content (e.g., images or frames of video). Augmented reality content items thus refer both to the images, models, and textures used to create transformations in content, as well as to additional modeling and analysis information needed to achieve such transformations with object detection, tracking, and placement. Real-time video processing can be performed with any kind of video data (e.g., video streams, video files, etc.) saved in a memory of a computerized system of any kind. For example, a user can load video files and save them in a memory of a device or can generate a video stream using sensors of the device. Additionally, any objects can be processed using a computer animation model, such as a human's face and parts of a human body, animals, or non-living things such as chairs, cars, or other objects. In some examples, when a particular modification is selected along with content to be transformed, elements to be transformed are identified by the computing device, and then detected and tracked if they are present in the frames of the video. The elements of the object are modified according to the request for modification, thus transforming the frames of the video stream. Transformation of frames of a video stream can be performed by different methods for different kinds of transformation. For example, for transformations of frames that mostly refer to changing the forms of an object's elements, characteristic points for each element of the object are calculated (e.g., using an Active Shape Model (ASM) or other known methods). Then, a mesh based on the characteristic points is generated for each of the at least one element of the object. This mesh is used in the following stage of tracking the elements of the object in the video stream. In the process of tracking, the mentioned mesh for each element is aligned with a position of each element. Then, additional points are generated on the mesh. A first set of first points is generated for each element based on a request for modification, and a set of second points is generated for each element based on the set of first points and the request for modification. Then, the frames of the video stream can be transformed by modifying the elements of the object on the basis of the sets of first and second points and the mesh. In such a method, a background of the modified object can be changed or distorted as well by tracking and modifying the background. In some examples, transformations changing some areas of an object using its elements can be performed by calculating characteristic points for each element of an object and generating a mesh based on the calculated characteristic points. Points are generated on the mesh, and then various areas based on the points are generated. The elements of the object are then tracked by aligning the area for each element with a position for each of the at least one element, and properties of the areas can be modified based on the request for modification, thus transforming the frames of the video stream. Depending on the specific request for modification, properties of the mentioned areas can be transformed in different ways. 
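A highly simplified sketch of the mesh-based transformation pipeline described above follows: characteristic points are connected into a mesh, and the element's points are then modified according to a modification request reduced here to a uniform shift. Real trackers re-align the mesh per frame and derive richer first/second point sets; everything named below is illustrative.

```python
from typing import List, Tuple

Point = Tuple[float, float]
Edge = Tuple[int, int]


def make_mesh(points: List[Point]) -> List[Edge]:
    """Connect consecutive characteristic points into a simple edge list,
    standing in for the mesh generated for one element of the object."""
    return [(i, i + 1) for i in range(len(points) - 1)]


def transform_element(points: List[Point], dx: float, dy: float) -> List[Point]:
    """Apply a 'modification request', reduced here to a uniform shift."""
    return [(x + dx, y + dy) for x, y in points]


if __name__ == "__main__":
    # Characteristic points for one element (e.g., the corner of an eye).
    element = [(10.0, 20.0), (14.0, 21.0), (18.0, 20.5)]
    mesh = make_mesh(element)                      # re-aligned per frame when tracking
    moved = transform_element(element, 2.0, -1.0)  # element points after modification
    print(mesh)
    print(moved)
```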
Such modifications may involve changing color of areas; removing at least some part of areas from the frames of the video stream; including one or more new objects into areas which are based on a request for modification; and modifying or distorting the elements of an area or object. In various examples, any combination of such modifications or other similar modifications may be used. For certain models to be animated, some characteristic points can be selected as control points to be used in determining the entire state-space of options for the model animation. In some examples of a computer animation model to transform image data using face detection, the face is detected on an image with use of a specific face detection algorithm (e.g., Viola-Jones). Then, an Active Shape Model (ASM) algorithm is applied to the face region of an image to detect facial feature reference points. In other examples, other methods and algorithms suitable for face detection can be used. For example, in some examples, features are located using a landmark, which represents a distinguishable point present in most of the images under consideration. For facial landmarks, for example, the location of the left eye pupil may be used. If an initial landmark is not identifiable (e.g., if a person has an eyepatch), secondary landmarks may be used. Such landmark identification procedures may be used for any such objects. In some examples, a set of landmarks forms a shape. Shapes can be represented as vectors using the coordinates of the points in the shape. One shape is aligned to another with a similarity transform (allowing translation, scaling, and rotation) that minimizes the average Euclidean distance between shape points. The mean shape is the mean of the aligned training shapes. In some examples, a search for landmarks from the mean shape aligned to the position and size of the face determined by a global face detector is started. Such a search then repeats the steps of suggesting a tentative shape by adjusting the locations of shape points by template matching of the image texture around each point and then conforming the tentative shape to a global shape model until convergence occurs. In some systems, individual template matches are unreliable, and the shape model pools the results of the weak template matches to form a stronger overall classifier. The entire search is repeated at each level in an image pyramid, from coarse to fine resolution. A transformation system can capture an image or video stream on a client device (e.g., the client device102) and perform complex image manipulations locally on the client device102while maintaining a suitable user experience, computation time, and power consumption. The complex image manipulations may include size and shape changes, emotion transfers (e.g., changing a face from a frown to a smile), state transfers (e.g., aging a subject, reducing apparent age, changing gender), style transfers, graphical element application, and any other suitable image or video manipulation implemented by a convolutional neural network that has been configured to execute efficiently on the client device102. In some examples, a computer animation model to transform image data can be used by a system where a user may capture an image or video stream of the user (e.g., a selfie) using a client device102having a neural network operating as part of a messaging client application104operating on the client device102. 
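The shape alignment step described above, a similarity transform (translation, scaling, rotation) minimizing the average Euclidean distance between shape points, corresponds to a standard Procrustes fit. The sketch below assumes NumPy and 2-D landmark arrays; it is a generic implementation of that alignment, not the patent's specific ASM search.

```python
import numpy as np


def align_shape(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Align src landmarks to dst with a similarity transform.

    Both arrays are (n, 2) landmark coordinates; the result is src mapped
    onto dst by the least-squares translation, rotation, and scale.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # Rotation from the SVD of the covariance between the centred shapes.
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    rot = u @ vt
    if np.linalg.det(rot) < 0:          # guard against reflections
        u[:, -1] *= -1
        rot = u @ vt
    scale = np.trace((src_c @ rot).T @ dst_c) / np.trace(src_c.T @ src_c)
    return scale * src_c @ rot + dst.mean(axis=0)


if __name__ == "__main__":
    square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
    target = 2.0 * square + np.array([3.0, 5.0])  # scaled and shifted copy
    aligned = align_shape(square, target)
    print(np.allclose(aligned, target))           # True
```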
The transformation system operating within the messaging client104determines the presence of a face within the image or video stream and provides modification icons associated with a computer animation model to transform image data, or the computer animation model can be present as associated with an interface described herein. The modification icons include changes that may be the basis for modifying the user's face within the image or video stream as part of the modification operation. Once a modification icon is selected, the transform system initiates a process to convert the image of the user to reflect the selected modification icon (e.g., generate a smiling face on the user). A modified image or video stream may be presented in a graphical user interface displayed on the client device102as soon as the image or video stream is captured, and a specified modification is selected. The transformation system may implement a complex convolutional neural network on a portion of the image or video stream to generate and apply the selected modification. That is, the user may capture the image or video stream and be presented with a modified result in real-time or near real-time once a modification icon has been selected. Further, the modification may be persistent while the video stream is being captured, and the selected modification icon remains toggled. Machine taught neural networks may be used to enable such modifications. The graphical user interface, presenting the modification performed by the transform system, may supply the user with additional interaction options. Such options may be based on the interface used to initiate the content capture and selection of a particular computer animation model (e.g., initiation from a content creator user interface). In various examples, a modification may be persistent after an initial selection of a modification icon. The user may toggle the modification on or off by tapping or otherwise selecting the face being modified by the transformation system and store it for later viewing or browse to other areas of the imaging application. Where multiple faces are modified by the transformation system, the user may toggle the modification on or off globally by tapping or selecting a single face modified and displayed within a graphical user interface. In some examples, individual faces, among a group of multiple faces, may be individually modified, or such modifications may be individually toggled by tapping or selecting the individual face or a series of individual faces displayed within the graphical user interface. A story table312stores data regarding collections of messages and associated image, video, or audio data, which are compiled into a collection (e.g., a story or a gallery). The creation of a particular collection may be initiated by a particular user (e.g., each user for which a record is maintained in the entity table304). A user may create a “personal story” in the form of a collection of content that has been created and sent/broadcast by that user. To this end, the user interface of the messaging client104may include an icon that is user-selectable to enable a sending user to add specific content to his or her personal story. A collection may also constitute a “live story,” which is a collection of content from multiple users that is created manually, automatically, or using a combination of manual and automatic techniques. 
For example, a “live story” may constitute a curated stream of user-submitted content from various locations and events. Users whose client devices have location services enabled and are at a common location event at a particular time may, for example, be presented with an option, via a user interface of the messaging client104, to contribute content to a particular live story. The live story may be identified to the user by the messaging client104, based on his or her location. The end result is a “live story” told from a community perspective. A further type of content collection is known as a “location story,” which enables a user whose client device102is located within a specific geographic location (e.g., on a college or university campus) to contribute to a particular collection. In some examples, a contribution to a location story may require a second degree of authentication to verify that the end user belongs to a specific organization or other entity (e.g., is a student on the university campus). As mentioned above, the video table314stores video data that, in one example, is associated with messages for which records are maintained within the message table302. Similarly, the image table318stores image data associated with messages for which message data is stored in the entity table304. The entity table304may associate various augmentations from the augmentation table310with various images and videos stored in the image table318and the video table314. The database120can also store visual identifiers in the visual identifier table316. Data Communications Architecture FIG.4is a schematic diagram illustrating a structure of a message400, according to some examples, generated by a messaging client104for communication to a further messaging client104or the messaging server114. The content of a particular message400is used to populate the message table302stored within the database120, accessible by the messaging server114. Similarly, the content of a message400is stored in memory as “in-transit” or “in-flight” data of the client device102or the application servers212. A message400is shown to include the following example components:
message identifier402: a unique identifier that identifies the message400.
message text payload404: text, to be generated by a user via a user interface of the client device102, and that is included in the message400.
message image payload406: image data, captured by a camera component of a client device102or retrieved from a memory component of a client device102, and that is included in the message400. Image data for a sent or received message400may be stored in the image table318.
message video payload408: video data, captured by a camera component or retrieved from a memory component of the client device102, and that is included in the message400. Video data for a sent or received message400may be stored in the video table314.
message audio payload410: audio data, captured by a microphone or retrieved from a memory component of the client device102, and that is included in the message400.
message augmentation data412: augmentation data (e.g., filters, stickers, or other annotations or enhancements) that represents augmentations to be applied to message image payload406, message video payload408, or message audio payload410of the message400. 
Augmentation data for a sent or received message400may be stored in the augmentation table310.
message duration parameter414: parameter value indicating, in seconds, the amount of time for which content of the message (e.g., the message image payload406, message video payload408, message audio payload410) is to be presented or made accessible to a user via the messaging client104.
message geolocation parameter416: geolocation data (e.g., latitudinal and longitudinal coordinates) associated with the content payload of the message. Multiple message geolocation parameter416values may be included in the payload, each of these parameter values being associated with respective content items included in the content (e.g., a specific image within the message image payload406, or a specific video in the message video payload408).
message story identifier418: identifier values identifying one or more content collections (e.g., “stories” identified in the story table312) with which a particular content item in the message image payload406of the message400is associated. For example, multiple images within the message image payload406may each be associated with multiple content collections using identifier values.
message tag420: each message400may be tagged with multiple tags, each of which is indicative of the subject matter of content included in the message payload. For example, where a particular image included in the message image payload406depicts an animal (e.g., a lion), a tag value may be included within the message tag420that is indicative of the relevant animal. Tag values may be generated manually, based on user input, or may be automatically generated using, for example, image recognition.
message sender identifier422: an identifier (e.g., a messaging system identifier, email address, or device identifier) indicative of a user of the client device102on which the message400was generated and from which the message400was sent.
message receiver identifier424: an identifier (e.g., a messaging system identifier, email address, or device identifier) indicative of a user of the client device102to which the message400is addressed.
The contents (e.g., values) of the various components of message400may be pointers to locations in tables within which content data values are stored. For example, an image value in the message image payload406may be a pointer to (or address of) a location within an image table318. Similarly, values within the message video payload408may point to data stored within a video table314, values stored within the message augmentation data412may point to data stored in an augmentation table310, values stored within the message story identifier418may point to data stored in a story table312, and values stored within the message sender identifier422and the message receiver identifier424may point to user records stored within an entity table304. Time-Based Access Limitation Architecture FIG.5is a schematic diagram illustrating an access-limiting process500, in terms of which access to content (e.g., an ephemeral message502, and associated multimedia payload of data) or a content collection (e.g., an ephemeral message group504) may be time-limited (e.g., made ephemeral). An ephemeral message502is shown to be associated with a message duration parameter506, the value of which determines an amount of time that the ephemeral message502will be displayed to a receiving user of the ephemeral message502by the messaging client104. 
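The message components listed above map naturally onto a record type whose media fields are table keys rather than raw payloads. The sketch below is a reduced, illustrative Python rendering; the field names and types are assumptions, not the described message format.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class Message:
    """Reduced sketch of the message components enumerated above; payload
    fields hold pointers (table keys) rather than raw media."""
    message_id: str
    sender_id: str
    receiver_id: str
    text_payload: Optional[str] = None
    image_payload_key: Optional[str] = None   # key into an image table
    video_payload_key: Optional[str] = None   # key into a video table
    audio_payload_key: Optional[str] = None
    augmentation_keys: List[str] = field(default_factory=list)
    duration_seconds: Optional[int] = None
    geolocations: List[Tuple[float, float]] = field(default_factory=list)
    story_ids: List[str] = field(default_factory=list)
    tags: List[str] = field(default_factory=list)


if __name__ == "__main__":
    msg = Message("m-1", "user-a", "user-b",
                  image_payload_key="img-42", duration_seconds=10, tags=["lion"])
    print(msg.message_id, msg.tags)
```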
In one example, an ephemeral message502is viewable by a receiving user for up to a maximum of 10 seconds, depending on the amount of time that the sending user specifies using the message duration parameter506. The message duration parameter506and the message receiver identifier424are shown to be inputs to a message timer512, which is responsible for determining the amount of time that the ephemeral message502is shown to a particular receiving user identified by the message receiver identifier424. In particular, the ephemeral message502will only be shown to the relevant receiving user for a time period determined by the value of the message duration parameter506. The message timer512is shown to provide output to a more generalized ephemeral timer system202, which is responsible for the overall timing of display of content (e.g., an ephemeral message502) to a receiving user. The ephemeral message502is shown inFIG.5to be included within an ephemeral message group504(e.g., a collection of messages in a personal story, or an event story). The ephemeral message group504has an associated group duration parameter508, a value of which determines a time duration for which the ephemeral message group504is presented and accessible to users of the messaging system100. The group duration parameter508, for example, may be the duration of a music concert, where the ephemeral message group504is a collection of content pertaining to that concert. Alternatively, a user (either the owning user or a curator user) may specify the value for the group duration parameter508when performing the setup and creation of the ephemeral message group504. Additionally, each ephemeral message502within the ephemeral message group504has an associated group participation parameter510, a value of which determines the duration of time for which the ephemeral message502will be accessible within the context of the ephemeral message group504. Accordingly, a particular ephemeral message group504may “expire” and become inaccessible within the context of the ephemeral message group504, prior to the ephemeral message group504itself expiring in terms of the group duration parameter508. The group duration parameter508, group participation parameter510, and message receiver identifier424each provide input to a group timer514, which operationally determines, firstly, whether a particular ephemeral message502of the ephemeral message group504will be displayed to a particular receiving user and, if so, for how long. Note that the ephemeral message group504is also aware of the identity of the particular receiving user as a result of the message receiver identifier424. Accordingly, the group timer514operationally controls the overall lifespan of an associated ephemeral message group504, as well as an individual ephemeral message502included in the ephemeral message group504. In one example, each and every ephemeral message502within the ephemeral message group504remains viewable and accessible for a time period specified by the group duration parameter508. In a further example, a certain ephemeral message502may expire, within the context of ephemeral message group504, based on a group participation parameter510. Note that a message duration parameter506may still determine the duration of time for which a particular ephemeral message502is displayed to a receiving user, even within the context of the ephemeral message group504. 
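The interaction of the message duration, group participation, and group duration parameters can be sketched as two time-window checks. The sketch below simplifies the timers described above to absolute expiry times; the parameter packaging and function name are assumptions made for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class EphemeralMessage:
    posted_at: datetime
    duration_view: timedelta          # per-view display limit (message duration parameter)
    group_participation: timedelta    # lifetime inside the group (group participation parameter)


def visible_in_group(msg: EphemeralMessage, group_posted_at: datetime,
                     group_duration: timedelta, now: datetime) -> bool:
    """A message is accessible while neither its own participation window nor
    the group's duration window has elapsed."""
    message_alive = now <= msg.posted_at + msg.group_participation
    group_alive = now <= group_posted_at + group_duration
    return message_alive and group_alive


if __name__ == "__main__":
    t0 = datetime(2024, 7, 1, 12, 0)
    msg = EphemeralMessage(t0, timedelta(seconds=10), timedelta(hours=24))
    print(visible_in_group(msg, t0, timedelta(hours=48), t0 + timedelta(hours=12)))  # True
    print(visible_in_group(msg, t0, timedelta(hours=48), t0 + timedelta(hours=30)))  # False
```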
Accordingly, the message duration parameter506determines the duration of time that a particular ephemeral message502is displayed to a receiving user, regardless of whether the receiving user is viewing that ephemeral message502inside or outside the context of an ephemeral message group504. The ephemeral timer system202may furthermore operationally remove a particular ephemeral message502from the ephemeral message group504based on a determination that it has exceeded an associated group participation parameter510. For example, when a sending user has established a group participation parameter510of 24 hours from posting, the ephemeral timer system202will remove the relevant ephemeral message502from the ephemeral message group504after the specified 24 hours. The ephemeral timer system202also operates to remove an ephemeral message group504when either the group participation parameter510for each and every ephemeral message502within the ephemeral message group504has expired, or when the ephemeral message group504itself has expired in terms of the group duration parameter508. In certain use cases, a creator of a particular ephemeral message group504may specify an indefinite group duration parameter508. In this case, the expiration of the group participation parameter510for the last remaining ephemeral message502within the ephemeral message group504will determine when the ephemeral message group504itself expires. In this case, a new ephemeral message502, added to the ephemeral message group504, with a new group participation parameter510, effectively extends the life of an ephemeral message group504to equal the value of the group participation parameter510. Responsive to the ephemeral timer system202determining that an ephemeral message group504has expired (e.g., is no longer accessible), the ephemeral timer system202communicates with the messaging system100(and, for example, specifically the messaging client104) to cause an indicium (e.g., an icon) associated with the relevant ephemeral message group504to no longer be displayed within a user interface of the messaging client104. Similarly, when the ephemeral timer system202determines that the message duration parameter506for a particular ephemeral message502has expired, the ephemeral timer system202causes the messaging client104to no longer display an indicium (e.g., an icon or textual identification) associated with the ephemeral message502. FIG.6is a flowchart of an example method600for using a visual identifier to launch a computer application, according to some examples. Although the described flowcharts can show operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed. A process may correspond to a method, a procedure, an algorithm, etc. The operations of methods may be performed in whole or in part, may be performed in conjunction with some or all of the operations in other methods, and may be performed by any number of different systems, such as the systems described herein, or any portion thereof, such as a processor included in any of the systems. In operation602, the visual identifier system124accesses, using one or more processors, a visual identifier, the visual identifier comprising encoded data representing a computer application. In operation604, the visual identifier system124performs a visual search of the visual identifier. 
In operation606, in response to performing the visual search of the visual identifier, the visual identifier system124causes presentation of an application menu within a graphical user interface of a computing device. In operation608, the visual identifier system124receives a selection of a first user interface element within the application menu. In operation610, in response to receiving the selection, the visual identifier system124runs the computer application. FIG.7is a diagrammatic illustration of a visual identifier702according to some examples. The visual identifier702may be a QR code. The visual identifier702may include an icon704. The icon704may be a visual indication of the computer application that the visual identifier702represents. In some examples, the sizing, placement, and padding of the icon704within the visual identifier702are predefined. FIG.8is an illustration of a user interface800within a messaging application, according to some examples. For example, the computer application may be a game application. The visual identifier system124may display the user interface800to allow a user to provide user input to consent to providing user data stored by the messaging application (e.g., messaging client104) to the game application. FIG.9is an illustration of user interfaces900within a computer application, according to some examples. For example, the computer application may include restaurant information. The visual identifier system124may allow a user to share the computer application with other users via an ephemeral message in a messaging application. FIG.10is an illustration of user interfaces1000within a computer application, according to some examples. For example, the computer application may be a media overlay. A user may be able to customize the media overlay and share the customized media overlay (e.g., augmented reality content items) via an ephemeral message in a messaging application. Machine Architecture FIG.11is a diagrammatic representation of the machine1100within which instructions1108(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine1100to perform any one or more of the methodologies discussed herein may be executed. For example, the instructions1108may cause the machine1100to execute any one or more of the methods described herein. The instructions1108transform the general, non-programmed machine1100into a particular machine1100programmed to carry out the described and illustrated functions in the manner described. The machine1100may operate as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine1100may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine1100may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smartphone, a mobile device, a wearable device (e.g., a smartwatch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions1108, sequentially or otherwise, that specify actions to be taken by the machine1100. 
Further, while only a single machine1100is illustrated, the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions1108to perform any one or more of the methodologies discussed herein. The machine1100, for example, may comprise the client device102or any one of a number of server devices forming part of the messaging server system108. In some examples, the machine1100may also comprise both client and server systems, with certain operations of a particular method or algorithm being performed on the server-side and with certain operations of the particular method or algorithm being performed on the client-side. The machine1100may include processors1102, memory1104, and input/output (I/O) components1138, which may be configured to communicate with each other via a bus1140. In an example, the processors1102(e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) Processor, a Complex Instruction Set Computing (CISC) Processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor1106and a processor1110that execute the instructions1108. The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. AlthoughFIG.11shows multiple processors1102, the machine1100may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof. The memory1104includes a main memory1112, a static memory1114, and a storage unit1116, each accessible to the processors1102via the bus1140. The main memory1112, the static memory1114, and the storage unit1116store the instructions1108embodying any one or more of the methodologies or functions described herein. The instructions1108may also reside, completely or partially, within the main memory1112, within the static memory1114, within machine-readable medium1118within the storage unit1116, within at least one of the processors1102(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine1100. The I/O components1138may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components1138that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones may include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components1138may include many other components that are not shown inFIG.11. In various examples, the I/O components1138may include user output components1124and user input components1126. 
The user output components1124may include visual components (e.g., a display such as a plasma display panel (PDP), a light-emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The user input components1126may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In further examples, the I/O components1138may include biometric components1128, motion components1130, environmental components1132, or position components1134, among a wide array of other components. For example, the biometric components1128include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye-tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components1130include acceleration sensor components (e.g., accelerometer), gravitation sensor components, and rotation sensor components (e.g., gyroscope). The environmental components1132include, for example, one or more cameras (with still image/photograph and video capabilities), illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. With respect to cameras, the client device102may have a camera system comprising, for example, front cameras on a front surface of the client device102and rear cameras on a rear surface of the client device102. The front cameras may, for example, be used to capture still images and video of a user of the client device102(e.g., "selfies"), which may then be augmented with augmentation data (e.g., filters) described above. The rear cameras may, for example, be used to capture still images and videos in a more traditional camera mode, with these images similarly being augmented with augmentation data. In addition to front and rear cameras, the client device102may also include a 360° camera for capturing 360° photographs and videos. Further, the camera system of a client device102may include dual rear cameras (e.g., a primary camera as well as a depth-sensing camera), or even triple, quad or penta rear camera configurations on the front and rear sides of the client device102.
These multiple camera systems may include a wide camera, an ultra-wide camera, a telephoto camera, a macro camera and a depth sensor, for example. The position components1134include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication may be implemented using a wide variety of technologies. The I/O components1138further include communication components1136operable to couple the machine1100to a network1120or devices1122via respective coupling or connections. For example, the communication components1136may include a network interface component or another suitable device to interface with the network1120. In further examples, the communication components1136may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices1122may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB). Moreover, the communication components1136may detect identifiers or include components operable to detect identifiers. For example, the communication components1136may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components1136, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth. The various memories (e.g., main memory1112, static memory1114, and memory of the processors1102) and storage unit1116may store one or more sets of instructions and data structures (e.g., software) embodying or used by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions1108), when executed by processors1102, cause various operations to implement the disclosed examples. The instructions1108may be transmitted or received over the network1120, using a transmission medium, via a network interface device (e.g., a network interface component included in the communication components1136) and using any one of several well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions1108may be transmitted or received using a transmission medium via a coupling (e.g., a peer-to-peer coupling) to the devices1122. Software Architecture FIG.12is a block diagram1200illustrating a software architecture1204, which can be installed on any one or more of the devices described herein. The software architecture1204is supported by hardware such as a machine1202that includes processors1220, memory1226, and I/O components1238.
In this example, the software architecture1204can be conceptualized as a stack of layers, where each layer provides a particular functionality. The software architecture1204includes layers such as an operating system1212, libraries1210, frameworks1208, and applications1206. Operationally, the applications1206invoke API calls1250through the software stack and receive messages1252in response to the API calls1250. The operating system1212manages hardware resources and provides common services. The operating system1212includes, for example, a kernel1214, services1216, and drivers1222. The kernel1214acts as an abstraction layer between the hardware and the other software layers. For example, the kernel1214provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services1216can provide other common services for the other software layers. The drivers1222are responsible for controlling or interfacing with the underlying hardware. For instance, the drivers1222can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., USB drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth. The libraries1210provide a common low-level infrastructure used by the applications1206. The libraries1210can include system libraries1218(e.g., C standard library) that provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. In addition, the libraries1210can include API libraries1224such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render in two dimensions (2D) and three dimensions (3D) in a graphic content on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries1210can also include a wide variety of other libraries1228to provide many other APIs to the applications1206. The frameworks1208provide a common high-level infrastructure that is used by the applications1206. For example, the frameworks1208provide various graphical user interface (GUI) functions, high-level resource management, and high-level location services. The frameworks1208can provide a broad spectrum of other APIs that can be used by the applications1206, some of which may be specific to a particular operating system or platform. In an example, the applications1206may include a home application1236, a contacts application1230, a browser application1232, a book reader application1234, a location application1242, a media application1244, a messaging application1246, a game application1248, and a broad assortment of other applications such as a third-party application1240. The applications1206are programs that execute functions defined in the programs. 
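Before turning to the programming languages used to build these applications, the layered organization described above (operating system, libraries, frameworks, and applications exchanging API calls and messages) can be sketched roughly in Python. The layer names echo FIG.12, but the classes and methods below are illustrative assumptions, not the actual software architecture1204.

# Rough sketch of an application invoking an API call through layered software.
class OperatingSystem:
    def handle(self, call: str) -> str:
        # Kernel-level service at the bottom of the stack answers the call.
        return f"os-result({call})"


class Libraries:
    def __init__(self, os_layer: OperatingSystem) -> None:
        self._os = os_layer

    def handle(self, call: str) -> str:
        # Low-level infrastructure (e.g., media or database helpers) delegates downward.
        return self._os.handle(f"lib:{call}")


class Frameworks:
    def __init__(self, libraries: Libraries) -> None:
        self._libraries = libraries

    def api_call(self, call: str) -> str:
        # High-level infrastructure exposed to applications.
        return self._libraries.handle(f"framework:{call}")


class Application:
    def __init__(self, frameworks: Frameworks) -> None:
        self._frameworks = frameworks

    def run(self) -> None:
        # The application invokes an API call and receives a message in response.
        message = self._frameworks.api_call("render_gui")
        print("application received:", message)


if __name__ == "__main__":
    Application(Frameworks(Libraries(OperatingSystem()))).run()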
Various programming languages can be employed to create one or more of the applications1206, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). In a specific example, the third-party application1240(e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application1240can invoke the API calls1250provided by the operating system1212to facilitate functionality described herein. Glossary “Carrier signal” refers to any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such instructions. Instructions may be transmitted or received over a network using a transmission medium via a network interface device. “Client device” refers to any machine that interfaces to a communications network to obtain resources from one or more server systems or other client devices. A client device may be, but is not limited to, a mobile phone, desktop computer, laptop, portable digital assistants (PDAs), smartphones, tablets, ultrabooks, netbooks, laptops, multi-processor systems, microprocessor-based or programmable consumer electronics, game consoles, set-top boxes, or any other communication device that a user may use to access a network. “Communication network” refers to one or more portions of a network that may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, a network or a portion of a network may include a wireless or cellular network and the coupling may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or other types of cellular or wireless coupling. In this example, the coupling may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long-range protocols, or other data transfer technology. “Component” refers to a device, physical entity, or logic having boundaries defined by function or subroutine calls, branch points, APIs, or other technologies that provide for the partitioning or modularization of particular processing or control functions. 
Components may be combined via their interfaces with other components to carry out a machine process. A component may be a packaged functional hardware unit designed for use with other components and a part of a program that usually performs a particular function of related functions. Components may constitute either software components (e.g., code embodied on a machine-readable medium) or hardware components. A “hardware component” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner. In various examples, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware components of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware component that operates to perform certain operations as described herein. A hardware component may also be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware component may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware component may be a special-purpose processor, such as a field-programmable gate array (FPGA) or an application specific integrated circuit (ASIC). A hardware component may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware component may include software executed by a general-purpose processor or other programmable processor. Once configured by such software, hardware components become specific machines (or specific components of a machine) uniquely tailored to perform the configured functions and are no longer general-purpose processors. It will be appreciated that the decision to implement a hardware component mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software), may be driven by cost and time considerations. Accordingly, the phrase “hardware component” (or “hardware-implemented component”) should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering examples in which hardware components are temporarily configured (e.g., programmed), each of the hardware components need not be configured or instantiated at any one instance in time. For example, where a hardware component comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware components) at different times. Software accordingly configures a particular processor or processors, for example, to constitute a particular hardware component at one instance of time and to constitute a different hardware component at a different instance of time. Hardware components can provide information to, and receive information from, other hardware components. Accordingly, the described hardware components may be regarded as being communicatively coupled. 
Where multiple hardware components exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware components. In examples in which multiple hardware components are configured or instantiated at different times, communications between such hardware components may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware components have access. For example, one hardware component may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware component may then, at a later time, access the memory device to retrieve and process the stored output. Hardware components may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented components that operate to perform one or more operations or functions described herein. As used herein, "processor-implemented component" refers to a hardware component implemented using one or more processors. Similarly, the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors1102or processor-implemented components. Moreover, the one or more processors may also operate to support performance of the relevant operations in a "cloud computing" environment or as a "software as a service" (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an API). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some examples, the processors or processor-implemented components may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other examples, the processors or processor-implemented components may be distributed across a number of geographic locations. "Computer-readable storage medium" refers to both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals. The terms "machine-readable medium," "computer-readable medium" and "device-readable medium" mean the same thing and may be used interchangeably in this disclosure. "Ephemeral message" refers to a message that is accessible for a time-limited duration. An ephemeral message may be a text, an image, a video and the like. The access time for the ephemeral message may be set by the message sender. Alternatively, the access time may be a default setting or a setting specified by the recipient. Regardless of the setting technique, the message is transitory. 
"Machine storage medium" refers to a single or multiple storage devices and media (e.g., a centralized or distributed database, and associated caches and servers) that store executable instructions, routines and data. The term shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors. Specific examples of machine-storage media, computer-storage media and device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), FPGA, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms "machine-storage medium," "device-storage medium," and "computer-storage medium" mean the same thing and may be used interchangeably in this disclosure. The terms "machine-storage media," "computer-storage media," and "device-storage media" specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term "signal medium." "Non-transitory computer-readable storage medium" refers to a tangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine. "Signal medium" refers to any intangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine and includes digital or analog communications signals or other intangible media to facilitate communication of software or data. The term "signal medium" shall be taken to include any form of a modulated data signal, carrier wave, and so forth. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The terms "transmission medium" and "signal medium" mean the same thing and may be used interchangeably in this disclosure.
78,124
11857880
DETAILED DESCRIPTION OF THE DRAWINGS FIG.1is an illustration of a first arrangement of a sound sequence generated by the present audio system using a source clip selection system and a timeline renderer system, as follows: A number of different audio source clips10A,10B,10C . . .10N are first inputted into an audio master system20. Next, a transfer function35is applied to the plurality of audio source clips10A,10B,10C . . .10N to select audio segments of the plurality of audio source clips10A,10B,10C . . .10N. For example, first segment10A1may be selected from audio source clip10A and a second segment10N1may be selected from audio source clip10N. Both of these selected segments (10A1and10N1) can be operated on by transfer function35. Next, the timeline renderer system45applies a timeline rendering function to arrange the order of the selected audio segments10A1,10N1, etc. At this time, the selected audio segments are cross-faded as seen in Audio Timeline output50such that the transition from one selected segment to another (e.g.: segment A to segment B or segment B to segment C) is seamless and cannot be heard by the listener. The end result is that the present method of mixing audio segments from audio clips generates a unique stream of non-repeating sound which is then played back for the listener. (As illustrated, Segment A may correspond to audio source clip10N1, Segment B may correspond to audio source clip10A1, etc.) As can be appreciated, from a finite set of audio clips of finite length (i.e.:10A,10B, etc.), an infinite stream of non-repeating sound can be created (in Audio Timeline Output50). Although individual sounds can appear multiple times in the output, there will be no discernible repeating pattern over time in Audio Timeline Output50. As can be seen, the individual sound segments (10A1,10N1, a.k.a. Segment A, Segment B, Segment C, etc.) are taken from selected audio clips (10A to10N), and specifically from selected locations within the audio clips. In addition, the duration of the selected audio clips is preferably also selected by transfer function35. In various examples, the transfer function35selects audio segments of unequal lengths. In various examples, the transfer function system35randomly selects the audio segments, and/or randomly selects the lengths of the audio segments. In optional embodiments, the transfer function35may use a weighted function to select the audio segments. Alternatively, the transfer function35may use a heuristic function to select the audio segments. In preferred aspects, the transfer function35chooses the segments to achieve a desired level of uniqueness and consistency in sound playback. In optional embodiments, the duration of the cross-fades51and52between the audio clips is unequal. The duration of the cross-fades51and52between the audio clips can even be random. In various preferred aspects, the audio source clips are audio files or Internet URLs. In preferred aspects, the transfer function system35continues to select audio segments and the timeline renderer45continues to arrange the order of the selected audio segments as the audio playback clip is played. Stated another way, a unique audio stream50can be continuously generated at the same time that it is played back for the listener. As a result, the unique audio stream50need not “end”. Rather, new audio segments can be continuously added in new combinations to the playback sequence audio stream50while the user listens. As such, the playback length can be infinite. 
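A minimal sketch of this segment selection and cross-faded timeline assembly appears below, assuming the audio source clips are represented as one-dimensional NumPy arrays of samples. The function names (select_segments, render_timeline), the uniform random selection, and the linear cross-fade are illustrative assumptions rather than the disclosed transfer function35or timeline renderer45.

# Sketch of random segment selection and cross-faded timeline assembly.
import numpy as np


def select_segments(clips, n_segments, min_len, max_len, rng):
    """Randomly pick segments of unequal length from random locations in the clips."""
    segments = []
    for _ in range(n_segments):
        clip = clips[rng.integers(len(clips))]
        length = min(int(rng.integers(min_len, max_len + 1)), len(clip))
        start = int(rng.integers(0, len(clip) - length + 1))
        segments.append(clip[start:start + length])
    return segments


def render_timeline(segments, fade_len):
    """Concatenate segments with linear cross-fades so transitions are seamless."""
    timeline = segments[0].astype(float)
    for seg in segments[1:]:
        seg = seg.astype(float)
        n = min(fade_len, len(timeline), len(seg))
        fade_out = timeline[-n:] * np.linspace(1.0, 0.0, n)
        fade_in = seg[:n] * np.linspace(0.0, 1.0, n)
        timeline = np.concatenate([timeline[:-n], fade_out + fade_in, seg[n:]])
    return timeline


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clips = [rng.standard_normal(48000) for _ in range(3)]   # three 1-second clips at 48 kHz
    segments = select_segments(clips, n_segments=5, min_len=8000, max_len=24000, rng=rng)
    stream = render_timeline(segments, fade_len=2400)        # 50 ms cross-fades
    print(f"assembled {len(stream)} samples from {len(segments)} segments")

Because new segments can keep being selected and appended while earlier ones play back, the same loop can run indefinitely, approximating the non-repeating stream described above.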
The present system has specific benefits in relaxation and meditation since the human brain is very adept at recognizing repeating sound patterns. When a static audio loop is played repetitiously, it becomes familiar and is recognized by the conscious mind. This disrupts relaxation, meditation, or even game play. In contrast, the audio of the present system can play endlessly without repeating patterns, which allows the mind to relax and become immersed in the sound. Therefore, an advantage of the present system is that these large sound experiences can be produced from a much smaller number of audio clips and segments, thereby saving huge amounts of data storage space. With existing systems, very long sequences of audio must be captured without interruption. In contrast, with the present system, multiple, shorter audio clips can be used instead as input. This makes it much easier to capture sounds under non-ideal conditions. Since the present audio playback stream is formed from endless combinations of shorter audio segments played randomly or in various sequences, the present unique audio stream will have a length greater than the duration of the audio source clips. In fact, the present unique audio playback clip may well have infinite length. FIG.2is an illustration of a second arrangement of a sound sequence generated by the present audio system using a source clip scheduling system and an audio track rendering system. In this embodiment, a plurality of audio master streams50A,50B,50C . . .50N is again inputted into a sound experience system25(i.e.: "sound experience (input)"). Next, a scheduling function65is applied to the plurality of audio master streams to select playback times for the plurality of audio master streams50A,50B,50C . . .50N. Next, a track renderer75is applied to generate a plurality of audio playback clip tracks80A,80B,80C,80D, etc. Together, tracks80A to80N contain various combinations of scheduled discrete, semi-continuous, and continuous sounds that make up a "sonic experience" such as forest sounds (in this example two hawks, wind that comes and goes, and a continuously flowing creek). As such, audio master streams50A to50N are scheduled into a more layered experience of multiple sounds that occur over time, sometimes discretely (hawk cry) or continuously (creek), or a combination of both (wind that comes and goes). Scheduling function system65and track renderer75selectively fade tracks80A to80N in and out at different times. Accordingly, the listener hears a unique sound stream. In addition, experience parameters30determine various aspects of the scheduled output, including how many tracks80are outputted, and which tracks are outputted. In addition, experience parameters30determine how often discrete sounds are scheduled to play (for example, how often the Hawks cry from the example inFIG.2,80A and80B), the relative volume of each sound, and other aspects. The experience parameter system25determines how often discrete sounds play, how often semi-discrete sounds fade out, for how long they are faded out, and for how long they play. In many ways, the system ofFIG.2builds upon the previously discussed system ofFIG.1. For example, the sound segments (variously labelled A, B, C, D) that make up the individual tracks80A,80B,80C and80D are composed of the selections made by the Transfer Function35and Timeline Renderer45from the system ofFIG.1. Optionally, in the aspect of the invention illustrated inFIG.2, a user input system100can also be included.
The user input system100controls the scheduling function system65such that a user can vary or modify the selection frequency of any of the audio master streams50A,50B . . .50N. For example, Master Audio stream50B can be a "Hawk Cry". Should the listener not wish to hear the sound of a hawk cry during the sound playback, the user can use the input control system to simply turn off or suspend the sound of the hawk cry (or make it occur less frequently), as desired. In this example, the user's control over the sound selection frequency forms part of the user's experience. The user is, in essence, building their own soundscape or listening environment. The very act of the user controlling the sounds can itself form part of a meditative or relaxation technique. As such, the user input system100optionally modifies or overrides the experience parameters system30that govern scheduling function65and track renderer75. As illustrated inFIG.2, the listener hears an audio track80that combines two Hawks (80A and80B), the Wind (80C) and the sound of a Creek (80D). As can be seen, the sound of the Creek is continuous in audio track80D (with cross-fades93,94and95) between its various shorter sound segments A, B, C and D. The sound of the Wind (audio track80C) is semi-continuous (as it would be in nature). The sounds of the hawk(s) (audio track80B) are much more intermittent or discrete and may be sound segments that are faded in and out. In the semi-continuous or continuous mode, each potentially infinite audio master clip preferably plays continuously or semi-continuously. In optional aspects, the scheduling function65randomly or heuristically selects playback times for the plurality of audio master streams50A,50B . . . etc. The tracks are assembled in time to produce the unique audio stream. Similar to the system inFIG.1, the scheduling function system65continues to select playback times for the plurality of audio master streams50A,50B . . .50N and the track renderer75continues to generate a plurality of audio playback clip tracks (80A,80B,80C and80D) as the audio playback clip track80is played. As such, the audio playback clip track80has the unique audio stream that may be of infinite length. FIG.3is a third embodiment of the present system, as follows: In this embodiment, a plurality of audio playback tracks80A,80B,80C . . .80N are inputted into an audio experiences system28(i.e.: "sound experiences (input)"). Next, a mixing function110is applied to the plurality of audio tracks80A,80B,80C . . .80N to select playback conditions for the plurality of audio tracks. A mixing renderer120is then applied to generate an audio playback clip130corresponding to the selected playback conditions. Similar to the systems inFIGS.1and2, the selected audio segments130A,130B and130C (up to130N) can be cross-faded. The final result is an audio playback clip track130having a unique sound stream that corresponds to the selected playback conditions which is then played back. A plurality of Experiences (tracks80A to80N) are used as the input to the Mixing Function110and Mixing Renderer120to create "atmospheric ambience" that changes randomly, heuristically, or by an optional External Input control system115. In the example ofFIG.3, the External Input115comes from the actions in a video game where the player is wandering through a Forest Experience, then into a Swamp Experience, and finally ends up at the Beach Experience. Specifically, when the player is initially in a forest, they will hear forest sounds.
As the player moves out of the forest and through a swamp, they will hear fewer forest sounds and more swamp sounds. Finally, as the player leaves the swamp and emerges at a beach, the swamp sounds fade away and the sounds of the waves and wind at the beach become louder. In this example, the atmospheric ambience changes as the user wanders, matching the user's location within the game world and seamlessly blending between the experiences as the user wanders. In this example, the audio playback clip track comprises audio segments with sounds that correspond to the position of the game player in the virtual world. The optional external input115could just as easily be driven by the time of day, the user's own heartbeat, or other metrics that change the ambience in a way that is intended to induce an atmosphere, feeling, relaxation, excitement, etc. It is to be understood that the input into external input115is not limited to a game. The present system can also be used to prepare and export Foley tracks for use in games and films, and the present system logic may also be incorporated into games and other software packages to generate unique sound atmospheres, to respond to live dynamic input by creating ambient effects that correspond to real or simulated events, or to create entirely artistic renditions.
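The location-driven blending in this example can be sketched as a mixing step that weights each sound experience by the listener's proximity to it. The data structure and inverse-distance weighting below are illustrative assumptions, not the disclosed mixing function110or mixing renderer120; they only show how an external input such as player position could drive per-experience gains.

# Sketch of blending "experiences" (forest, swamp, beach) by player position.
from dataclasses import dataclass


@dataclass
class Experience:
    name: str
    position: float      # location of the experience along a 1-D path in the game world


def mix_weights(experiences, player_position, falloff=1.0):
    """Return per-experience gains in [0, 1] that sum to 1, based on proximity."""
    raw = [1.0 / (abs(exp.position - player_position) + falloff) for exp in experiences]
    total = sum(raw)
    return {exp.name: w / total for exp, w in zip(experiences, raw)}


if __name__ == "__main__":
    world = [Experience("forest", 0.0), Experience("swamp", 50.0), Experience("beach", 100.0)]
    for player_position in (0.0, 40.0, 95.0):
        gains = mix_weights(world, player_position)
        mix = ", ".join(f"{name}={gain:.2f}" for name, gain in gains.items())
        print(f"player at {player_position:5.1f}: {mix}")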
11,961
11857881
DETAILED DESCRIPTION As utilized herein the terms "circuits" and "circuitry" refer to physical electronic components (i.e. hardware) and any software and/or firmware ("code") which may configure the hardware, be executed by the hardware, and/or otherwise be associated with the hardware. As used herein, for example, a particular processor and memory may comprise a first "circuit" when executing a first one or more lines of code and may comprise a second "circuit" when executing a second one or more lines of code. As utilized herein, "and/or" means any one or more of the items in the list joined by "and/or". As an example, "x and/or y" means any element of the three-element set {(x), (y), (x, y)}. As another example, "x, y, and/or z" means any element of the seven-element set {(x), (y), (z), (x, y), (x, z), (y, z), (x, y, z)}. As utilized herein, the terms "e.g.," and "for example" set off lists of one or more non-limiting examples, instances, or illustrations. As utilized herein, circuitry is "operable" to perform a function whenever the circuitry comprises the necessary hardware and code (if any is necessary) to perform the function, regardless of whether performance of the function is disabled, or not enabled, by some user-configurable setting. Referring toFIG.1, there is shown game console176which may be, for example, a Windows computing device, a UNIX computing device, a Linux computing device, an Apple OSX computing device, an Apple iOS computing device, an Android computing device, a Microsoft Xbox, a Sony Playstation, a Nintendo Wii, or the like. The example game console176comprises a video interface124, radio126, data interface128, network interface130, video interface132, audio interface134, southbridge150, main system on chip (SoC)148, memory162, optical drive172, and storage device174. The SoC148comprises central processing unit (CPU)154, graphics processing unit (GPU)156, audio processing unit (APU)158, cache memory164, and memory management unit (MMU)166. The various components of the game console176are communicatively coupled through various busses/links112,138,140,142,144,146,152,136,160,168, and170. The southbridge150comprises circuitry that supports one or more data bus protocols such as High-Definition Multimedia Interface (HDMI), Universal Serial Bus (USB), Serial Advanced Technology Attachment 2 (SATA 2), embedded multimedia card interface (e.MMC), Peripheral Component Interconnect Express (PCIe), or the like. The southbridge150may receive audio and/or video from an external source via link112(e.g., HDMI), from the optical drive (e.g., Blu-Ray)172via link168(e.g., SATA 2), and/or from storage174(e.g., hard drive, FLASH memory, or the like) via link170(e.g., SATA 2 and/or e.MMC). Digital audio and/or video is output to the SoC148via link136(e.g., CEA-861-E compliant video and IEC61937compliant audio). The southbridge150exchanges data with radio126via link138(e.g., USB), with external devices via link140(e.g., USB), with the storage174via the link170, and with the SoC148via the link152(e.g., PCIe). The radio126comprises circuitry operable to communicate in accordance with one or more wireless standards such as the IEEE 802.11 family of standards, the Bluetooth family of standards, and/or the like. The network interface130may comprise circuitry operable to communicate in accordance with one or more wired standards and to convert between wired standards.
For example, the network interface130may communicate with the SoC148via link142using a first standard (e.g., PCIe) and may communicate with the network106using a second standard (e.g., gigabit Ethernet). The video interface132may comprise circuitry operable to communicate video in accordance with one or more wired or wireless video transmission standards. For example, the video interface132may receive CEA-861-E compliant video data via link144and encapsulate/format/etc., the video data in accordance with an HDMI standard for output to the monitor108via an HDMI link120. The audio interface134may comprise circuitry operable to communicate audio in accordance with one or more wired or wireless audio transmission standards. For example, the audio interface134may receive IEC61937compliant audio data and encapsulate/format/etc. the audio data in accordance with an audio transmission standard for output to the audio subsystem110via link122. The central processing unit (CPU)154may comprise circuitry operable to execute instructions for controlling/coordinating the overall operation of the game console176. Such instructions may be part of an operating system of the console and/or part of one or more software applications running on the console. The graphics processing unit (GPU)156may comprise circuitry operable to perform graphics processing functions such as compression, decompression, encoding, decoding, 3D rendering, and/or the like. The audio processing unit (APU)158may comprise circuitry operable to perform audio processing functions such as volume/gain control, compression, decompression, encoding, decoding, surround-sound processing, and/or the like to output single channel or multi-channel (e.g., 2 channels for stereo or 5, 7, or more channels for surround sound) audio signals. The APU158comprises memory (e.g., volatile and/or non-volatile memory)159which stores parameter settings that affect processing of audio by the APU158. For example, the parameter settings may include a first audio gain/volume setting that determines, at least in part, a volume of game audio output by the console176and a second audio gain/volume setting that determines, at least in part, a volume of chat audio output by the console176. The parameter settings may be modified via a graphical user interface (GUI) of the console and/or via an application programming interface (API) provided by the console176. The cache memory164comprises high-speed memory (typically DRAM) for use by the CPU154, GPU156, and/or APU158. The memory162may comprise additional memory for use by the CPU154, GPU156, and/or APU158. The memory162, typically DRAM, may operate at a slower speed than the cache memory164but may also be less expensive than cache memory as well as operate at a higher speed than the memory of the storage device174. The MMU166controls accesses by the CPU154, GPU156, and/or APU158to the memory162, the cache164, and/or the storage device174. InFIG.1A, the example game console176is communicatively coupled to a user interface device102, a user interface device104, a network106, a monitor108, and an audio subsystem110. Each of the user interface devices102and104may comprise, for example, a game controller, a keyboard, a motion sensor/position tracker, or the like. The user interface device102communicates with the game console176wirelessly via link114(e.g., Wi-Fi Direct, Bluetooth, and/or the like). The user interface device104communicates with the game console176via the wired link140(e.g., USB or the like).
The network106comprises a local area network and/or a wide area network. The game console176communicates with the network106via wired link118(e.g., Gigabit Ethernet). The monitor108may be, for example, an LCD, OLED, or PLASMA screen. The game console176sends video to the monitor108via link120(e.g., HDMI). The audio subsystem110may be, for example, a headset, a combination of headset and audio basestation, or a set of speakers and accompanying audio processing circuitry. The game console176sends audio to the subsystem110via link(s)122(e.g., S/PDIF for digital audio or "line out" for analog audio). Additional details of an example audio subsystem110are described below. FIG.1Bdepicts an example gaming audio subsystem comprising a headset and an audio basestation. Shown are a headset200and an audio basestation300. The headset200communicates with the basestation300via a link180and the basestation300communicates with the console176via a link122. The link122may be as described above. In an example implementation, the link180may be a proprietary wireless link operating in an unlicensed frequency band. The headset200may be as described below with reference toFIGS.2A-2C. The basestation300may be as described below with reference toFIGS.3A-3B. Referring toFIG.1C, again shown is the console176connected to a plurality of peripheral devices and a network106. The example peripheral devices shown include a monitor108, a user interface device102, a headset200, an audio basestation300, and a multi-purpose device192. The monitor108and user interface device102are as described above. An example implementation of the headset200is described below with reference toFIGS.2A-2C. An example implementation of the audio basestation is described below with reference toFIGS.3A-3B. The multi-purpose device192may be, for example, a tablet computer, a smartphone, a laptop computer, or the like that runs an operating system such as Android, Linux, Windows, iOS, OSX, or the like. An example implementation of the multi-purpose device192is described below with reference toFIG.4. Hardware (e.g., a network adaptor) and software (i.e., the operating system and one or more applications loaded onto the device192) may configure the device192for operating as part of the GPN190. For example, an application running on the device192may cause display of a graphical user interface via which a user can access gaming-related data, commands, functions, parameter settings, etc. and via which the user can interact with the console176and the other devices of the GPN190to enhance his/her gaming experience. The peripheral devices102,108,192,200,300are in communication with one another via a plurality of wired and/or wireless links (represented visually by the placement of the devices in the cloud of GPN190). Each of the peripheral devices in the gaming peripheral network (GPN)190may communicate with one or more others of the peripheral devices in the GPN190in a single-hop or multi-hop fashion. For example, the headset200may communicate with the basestation300in a single hop (e.g., over a proprietary RF link) and with the device192in a single hop (e.g., over a Bluetooth or Wi-Fi direct link), while the tablet may communicate with the basestation300in two hops via the headset200.
As another example, the user interface device102may communicate with the headset200in a single hop (e.g., over a Bluetooth or Wi-Fi direct link) and with the device192in a single hop (e.g., over a Bluetooth or Wi-Fi direct link), while the device192may communicate with the headset200in two hops via the user interface device102. These example interconnections among the peripheral devices of the GPN190are merely examples; any number and/or types of links among the devices of the GPN190are possible. The GPN190may communicate with the console176via any one or more of the connections114,140,122, and120described above. The GPN190may communicate with a network106via one or more links194each of which may be, for example, Wi-Fi, wired Ethernet, and/or the like. A database182which stores gaming audio data is accessible via the network106. The gaming audio data may comprise, for example, signatures of particular audio clips (e.g., individual sounds or collections or sequences of sounds) that are part of the game audio of particular games, of particular levels/scenarios of particular games, particular characters of particular games, etc. In an example implementation, the database182may comprise a plurality of records183, where each record183comprises an audio clip (or signature of the clip)184, a description of the clip184(e.g., the game it is from, when it occurs in the game, etc.), one or more gaming commands186associated with the clip, one or more parameter settings187associated with the clip, and/or other data associated with the audio clip. Records183of the database182may be downloadable to, or accessed in real-time by, one or more devices of the GPN190. Referring toFIGS.2A and2B, there are shown two views of an example headset200that may present audio output by a gaming console such as the console176. The headset200comprises a headband202, a microphone boom206with microphone204, ear cups208aand208bwhich surround speakers216aand216b, connector210, connector214, and user controls212. The connector210may be, for example, a 3.5 mm headphone socket for receiving analog audio signals (e.g., receiving chat audio via an Xbox "talkback" cable). The microphone204converts acoustic waves (e.g., the voice of the person wearing the headset) to electric signals for processing by circuitry of the headset and/or for output to a device (e.g., console176, basestation300, a smartphone, and/or the like) that is in communication with the headset. The speakers216aand216bconvert electrical signals to soundwaves. The user controls212may comprise dedicated and/or programmable buttons, switches, sliders, wheels, etc. for performing various functions. Example functions which the controls212may be configured to perform include: power the headset200on/off, mute/unmute the microphone204, control gain/volume of, and/or effects applied to, chat audio by the audio processing circuitry of the headset200, control gain/volume of, and/or effects applied to, game audio by the audio processing circuitry of the headset200, enable/disable/initiate pairing (e.g., via Bluetooth, Wi-Fi direct, or the like) with another computing device, and/or the like. The connector214may be, for example, a USB port. The connector214may be used for downloading data to the headset200from another computing device and/or uploading data from the headset200to another computing device. Such data may include, for example, parameter settings (described below).
Additionally, or alternatively, the connector214may be used for communicating with another computing device such as a smartphone, tablet computer, laptop computer, or the like. FIG.2Cdepicts a block diagram of the example headset200. In addition to the connector210, user controls212, connector214, microphone204, and speakers216aand216balready discussed, shown are a radio220, a CPU222, a storage device224, a memory226, an audio processing circuit230, and a headset preset management component240. The radio220may comprise circuitry operable to communicate in accordance with one or more standardized (such as, for example, the IEEE 802.11 family of standards, the Bluetooth family of standards, and/or the like) and/or proprietary wireless protocol(s) (e.g., a proprietary protocol for receiving audio from an audio basestation such as the basestation300). The CPU222may comprise circuitry operable to execute instructions for controlling/coordinating the overall operation of the headset200. Such instructions may be part of an operating system or state machine of the headset200and/or part of one or more software applications running on the headset200. In some implementations, the CPU222may be, for example, a programmable interrupt controller, a state machine, or the like. The storage device224may comprise, for example, FLASH or other nonvolatile memory for storing data which may be used by the CPU222and/or the audio processing circuitry230. Such data may include, for example, parameter settings that affect processing of audio signals in the headset200and parameter settings that affect functions performed by the user controls212. For example, one or more parameter settings may determine, at least in part, a gain of one or more gain elements of the audio processing circuitry230. As another example, one or more parameter settings may determine, at least in part, a frequency response of one or more filters that operate on audio signals in the audio processing circuitry230. As another example, one or more parameter settings may determine, at least in part, whether and which sound effects are added to audio signals in the audio processing circuitry230(e.g., which effects to add to microphone audio to morph the user's voice). Example parameter settings which affect audio processing are described in the co-pending U.S. patent application Ser. No. 13/040,144 titled "Gaming Headset with Programmable Audio" and published as US2012/0014553, the entirety of which is hereby incorporated herein by reference. Particular parameter settings may be selected autonomously by the headset200in accordance with one or more algorithms, based on user input (e.g., via controls212), and/or based on input received via one or more of the connectors210and214. In some instances, sets of various parameter settings may be predefined for use in configuring headsets and/or controlling operations thereof. Such sets of various parameter settings are referenced in this application as "headset presets." The memory226may comprise volatile memory used by the CPU222and/or audio processing circuit230as program memory, for storing runtime data, etc. The audio processing circuit230may comprise circuitry operable to perform audio processing functions such as volume/gain control, compression, decompression, encoding, decoding, introduction of audio effects (e.g., echo, phasing, virtual surround effect, etc.), and/or the like.
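To make the role of such parameter settings concrete, the following is a rough Python sketch of a settings-driven gain, filter, and effects chain. The setting names, the one-pole low-pass filter, and the echo effect are assumptions made only for illustration and do not describe the actual parameter settings or audio processing circuit230of the headset200.

# Illustrative parameter-driven audio chain (not the actual headset circuitry).
import numpy as np

# Hypothetical stored parameter settings, e.g., as might be kept in nonvolatile storage.
parameter_settings = {
    "game_gain": 0.8,        # volume applied to game audio
    "lowpass_alpha": 0.2,    # smoothing factor of a one-pole low-pass filter
    "echo_enabled": True,    # whether an echo effect is added
    "echo_delay": 4800,      # echo delay in samples (100 ms at 48 kHz)
    "echo_level": 0.3,       # relative level of the echo
}


def one_pole_lowpass(signal, alpha):
    out = np.empty_like(signal)
    acc = 0.0
    for i, x in enumerate(signal):
        acc = alpha * x + (1.0 - alpha) * acc
        out[i] = acc
    return out


def process_game_audio(samples, settings):
    """Apply gain, filtering, and optional effects as selected by the parameter settings."""
    out = samples * settings["game_gain"]
    out = one_pole_lowpass(out, settings["lowpass_alpha"])
    if settings["echo_enabled"]:
        delayed = np.concatenate([np.zeros(settings["echo_delay"]), out])[: len(out)]
        out = out + settings["echo_level"] * delayed
    return out


if __name__ == "__main__":
    game_audio = np.random.default_rng(1).standard_normal(48000)
    processed = process_game_audio(game_audio, parameter_settings)
    print("processed", len(processed), "samples")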
As described above, the processing performed by the audio processing circuit230may be determined, at least in part, by which parameter settings have been selected. The processing may be performed on game, chat, and/or microphone audio that is subsequently output to speaker216aand216b. Additionally, or alternatively, the processing may be performed on chat audio that is subsequently output to the connector210and/or radio220. In an example implementation, the headset200may be configured as a networked gaming headset—i.e., to support network access and use thereof in conjunction with operation of the headset. For example, the headset200may be configurable to utilize network accessibility to store, share, and/or obtain information relating to use or operation of the headset200, particularly during multi-player online gaming. In one particular embodiment, configuring the headset200as a networked gaming headset may be done in conjunction with use of headset presets. In this regard, a 'headset preset' may comprise a set of values corresponding to one or more configurable parameter settings that are used by or applied to various components of the headset, such as components used in conjunction with audio processing (e.g., to enable adjusting audio characteristics) and/or components pertinent to operation of the headset200. For example, different headset presets may comprise values applicable to particular configurable parameter settings to produce different audio effects in audio inputs—e.g., audio corresponding to inputs via the microphone204of the headset200. The different configurable input-related parameter settings may be used to, for example, adaptively control how a player's voice—that is, the voice of the user of the headset200—may sound (e.g., using voice morphing techniques) to listeners of the audio, such as other player(s) in a multi-player online game. Different headset presets may also comprise values applicable to particular configurable parameter settings to produce different audio effects in audio outputs—e.g., outputs via the speakers216aand216bof the headset200. For example, the different configurable output-related parameter settings may be used to variably control equalizer settings, to control how voices of other players may sound. Also, different headset presets may comprise different headset operation-related parameter settings. For example, different headset presets may comprise values applicable to particular configurable parameter settings that may provide for different functionality of re-definable inputs (e.g., buttons or switches) on the headset200. To support configuring and/or operation of the headset200as a networked gaming headset, dedicated components may be used and/or incorporated into the headset200and/or existing components may be modified or adjusted. The headset200may incorporate, for example, the headset preset management component240, which may comprise suitable circuitry for managing headset presets and use thereof in the headset200. For example, the headset preset management component240may be configured to manage generation, storage, sharing, and/or obtaining of headset presets in the headset200. In another example implementation, the functions of the preset management circuitry240may be integrated into the other components (e.g., CPU222) of the headset200. In some instances, headset presets may be stored in and/or obtained from remote, dedicated resources. For example, a centralized headset preset depository may be utilized.
Headset presets may be uploaded to or downloaded from the depository to enable sharing or exchanging (i.e., for value, as further explained below) of headset presets among players. Such a use scenario is described in more detail with respect to, for example,FIG.5. FIG.3Adepicts two views of an example embodiment of the audio basestation300. The basestation300comprises status indicators302, user controls310, power port325, and audio connectors314,316,318, and320. The audio connectors314and316may comprise digital audio in and digital audio out (e.g., S/PDIF) connectors, respectively. The audio connectors318and320may comprise a left "line in" and a right "line in" connector, respectively. The controls310may comprise, for example, a power button, a button for enabling/disabling virtual surround sound, a button for adjusting the perceived angles of the speakers when the virtual surround sound is enabled, and a dial for controlling a volume/gain of the audio received via the "line in" connectors318and320. The status indicators302may indicate, for example, whether the audio basestation300is powered on, whether audio data is being received by the basestation300via connectors314, and/or what type of audio data (e.g., Dolby Digital) is being received by the basestation300. FIG.3Bdepicts a block diagram of the audio basestation300. In addition to the user controls310, indicators302, and connectors314,316,318, and320described above, the block diagram additionally shows a CPU322, a storage device324, a memory326, a radio319, an audio processing circuit330, and a radio332. The radio319comprises circuitry operable to communicate in accordance with one or more standardized (such as the IEEE 802.11 family of standards, the Bluetooth family of standards, and/or the like) and/or proprietary (e.g., a proprietary protocol for receiving audio from a console such as the console176) wireless protocols. The radio332comprises circuitry operable to communicate in accordance with one or more standardized (such as, for example, the IEEE 802.11 family of standards, the Bluetooth family of standards, and/or the like) and/or proprietary wireless protocol(s) (e.g., a proprietary protocol for transmitting audio to headphones200). The CPU322comprises circuitry operable to execute instructions for controlling/coordinating the overall operation of the audio basestation300. Such instructions may be part of an operating system or state machine of the audio basestation300and/or part of one or more software applications running on the audio basestation300. In some implementations, the CPU322may be, for example, a programmable interrupt controller, a state machine, or the like. The storage324may comprise, for example, FLASH or other nonvolatile memory for storing data which may be used by the CPU322and/or the audio processing circuitry330. Such data may include, for example, parameter settings that affect processing of audio signals in the basestation300. For example, one or more parameter settings may determine, at least in part, a gain of one or more gain elements of the audio processing circuitry330. As another example, one or more parameter settings may determine, at least in part, a frequency response of one or more filters that operate on audio signals in the audio processing circuitry330.
As another example, one or more parameter settings may determine, at least in part, whether and which sound effects are added to audio signals in the audio processing circuitry330(e.g., which effects to add to microphone audio to morph the user's voice). Example parameter settings which affect audio processing are described in the co-pending U.S. patent application Ser. No. 13/040,144 titled “Gaming Headset with Programmable Audio” and published as US2012/0014553, the entirety of which is hereby incorporated herein by reference. Particular parameter settings may be selected autonomously by the basestation300in accordance with one or more algorithms, based on user input (e.g., via controls310), and/or based on input received via one or more of the connectors314,316,318, and320. The memory326may comprise volatile memory used by the CPU322and/or audio processing circuit330as program memory, for storing runtime data, etc. The audio processing circuit330may comprise circuitry operable to perform audio processing functions such as volume/gain control, compression, decompression, encoding, decoding, introduction of audio effects (e.g., echo, phasing, virtual surround effect, etc.), and/or the like. As described above, the processing performed by the audio processing circuit330may be determined, at least in part, by which parameter settings have been selected. The processing may be performed on game and/or chat audio signals that are subsequently output to a device (e.g., headset200) in communication with the basestation300. Additionally, or alternatively, the processing may be performed on a microphone audio signal that is subsequently output to a device (e.g., console176) in communication with the basestation300. FIG.4depicts a block diagram of an example multi-purpose device192. The example multi-purpose device192comprises an application processor402, memory subsystem404, a cellular/GPS networking subsystem406, sensors408, power management subsystem410, LAN subsystem412, bus adaptor414, user interface subsystem416, and audio processor418. The application processor402comprises circuitry operable to execute instructions for controlling/coordinating the overall operation of the multi-purpose device192as well as graphics processing functions of the multi-purpose device192. Such instructions may be part of an operating system of the multi-purpose device192and/or part of one or more software applications running on the multi-purpose device192. The memory subsystem404comprises volatile memory for storing runtime data, nonvolatile memory for mass storage and long-term storage, and/or a memory controller which controls reads and writes to memory. The cellular/GPS networking subsystem406comprises circuitry operable to perform baseband processing and analog/RF processing for transmission and reception of cellular and GPS signals. The sensors408comprise, for example, a camera, a gyroscope, an accelerometer, a biometric sensor, and/or the like. The power management subsystem410comprises circuitry operable to manage distribution of power among the various components of the multi-purpose device192. The LAN subsystem412comprises circuitry operable to perform baseband processing and analog/RF processing for transmission and reception of wired, optical, and/or wireless signals (e.g., in accordance with Wi-Fi, Wi-Fi Direct, Bluetooth, Ethernet, and/or other standards).
The bus adaptor414comprises circuitry for interfacing one or more internal data busses of the multi-purpose device with an external bus (e.g., a Universal Serial Bus) for transferring data to/from the multi-purpose device via a wired connection. The user interface subsystem416comprises circuitry operable to control and relay signals to/from a touchscreen, hard buttons, and/or other input devices of the multi-purpose device192. The audio processor418comprises circuitry to process (e.g., digital to analog conversion, analog-to-digital conversion, compression, decompression, encryption, decryption, resampling, etc.) audio signals. The audio processor418may be operable to receive and/or output signals via a connector such as a 3.5 mm stereo and microphone connector. FIG.5depicts a block diagram illustrating use of networked gaming headsets, such as to generate, store, and/or obtain headset presets. Referring toFIG.5, there is shown headsets5001and5002, hosts5201and5202, social service530, and web service540. Each of the headsets5001and5002may be similar to the headset200, for example. In this regard, the headsets5001and5002may be utilized by users5101and5102, respectively, to facilitate outputting audio (e.g., via speakers of the headsets) to the users5101and5102(including performing necessary audio processing related thereto). Furthermore, the headsets5001and5002may also be utilized in capturing audio (e.g., via microphones) from the users5101and5102, and processing the audio input, and (in some instances) communicating the audio (e.g., to other users, such as during online gaming). The headsets5001and5002may be coupled to the hosts5201and5202, respectively (e.g., via connections5021and5022). In this regard, each of the hosts5201and5202may comprise suitable circuitry for supporting operation of headsets (e.g., the headsets5001and5002). For example, the hosts5201and5202may be configured for providing or supporting such functions as processing (audio and/or non-audio), storage, networking, and the like, which may be needed during operation of the headsets5001and5002. In an example embodiment, each of the hosts5201and5202may correspond to (at least a portion of) a combination of a game console (e.g., similar to the game console176) and a basestation (e.g., similar to the basestation300), with the connections5021and5022comprising wireless links (e.g., similar to the link180). The disclosure is not so limited, however, and in some instances a host520imay correspond to any suitable electronic device or system which may be configured to perform any of the operations or functions described with respect to the hosts5201and5202. In operation, the combinations of headset5001/host5201and headset5002/host5202may be operable to support processing and/or communication of audio associated with the two different users (players)5101and5102, such as during multi-player online gaming, for example. In some instances, the headsets5001and5002may be configured as networked gaming headsets. Networked gaming headsets may use their network connections to share and/or exchange (e.g., for value), directly and/or indirectly (e.g., via intermediary media or systems), data during (and/or relating to) online gaming. In the particular example use scenario shown inFIG.5, the headsets5001and5002may support generation and sharing of headset presets. 
As described in more detail with respect toFIG.2C, headset presets may comprise information relating to configuring one or more parameter settings that are used during processing or operation of headsets. Each combination of headset500i/host520i(where i is an integer) may be used, for example, to generate headset presets. For example, the combination of headset5001/host5201may be utilized in generating a headset preset514, which may define values applicable to one or more configurable parameter settings used in the headset5001(e.g., parameter settings relating to input audio, output audio, and/or control of the audio or headset). In some instances, each headset preset may be assigned an identifier (e.g., token) which may be set so as to enable uniquely identifying the headset preset. The headset preset514may be stored, at least initially, locally—e.g., within the host5201and/or the headset5001. In networked gaming use scenarios, however, headset presets may be stored remotely and/or shared (or exchanged for value) among players. For example, a web-based service540may be utilized to provide an online, centralized depository of data, including headset presets. The web-based service540may be configured on a plurality of hardware resources (e.g., storage elements, processing elements, routing elements, etc.), using suitable software (and firmware) solutions, such as for managing operations of the web-based service540, and/or for controlling or supporting applications or functions associated with the web-based service540. The web-based service540may be configured to support such functions, for example, as remote storage of headset presets, and sharing or exchanging of these headset presets. For example, the web-based service540may comprise a headset preset database (DB)550, which may be used to store a plurality of preset entries552. Accordingly, headset presets may be uploaded by players into the web-based service540, and stored therein. For example, once the headset preset514is generated, the headset preset514may be associated with an identifier (e.g., based on player/user5101command or selection) for remote storage. Accordingly, the host5201may establish connection5421to the web-based service540, and may utilize that connection to upload the headset preset514and/or its identifier into the headset preset DB550. The players may then share or exchange headset presets, using the remotely stored copies thereof (in the web-based service540). The sharing or exchanging of the headset presets may be done by sharing or exchanging information that enables retrieving them from the online remote depository. For example, since each headset preset may be assigned a unique token, tokens may be shared between the players, with the token then being used for downloading the corresponding headset presets from the online central depository. The sharing of the tokens (or other information that may be used to facilitate accessing and retrieving of the headset presets) may be done over a direct peer-to-peer connection. For example, hosts5201and5202may establish a direct peer-to-peer connection522, which player5101may then use to send player5102the token identifying the headset preset514. Alternatively, tokens (or similar information) may be shared using an indirect connection.
For example, existing web-based social services (an example of which, web-based social service530, is shown), which may inherently offer measures for ensuring confidential and validated user-to-user connections, may be used. As shown inFIG.5, the hosts5201and5202may establish, respectively, connections5321and5322to the web-based social service530. Additionally, or alternatively, tokens that identify headset presets may be shared via email, SMS, MMS, or the like. The shared token (and/or other necessary information) may be provided to the web-based service540, which may then download the corresponding headset preset to the headset of the other player. For example, once the first player5101shares the token (and/or other necessary information) with the second player5102(directly, or via the web-based social service530), the second player5102may provide the token to the web-based service540. In this regard, the host5202may establish a connection5422to the web-based service540, which may be used in communicating the token to the web-based service540. The web-based service540may then download the corresponding headset preset to the headset5002of the second player5102. The web-based service540may be configured to validate players before allowing download of headset presets by the players. The validation of the players may simply be based on providing the correct tokens—i.e., possession or knowledge of tokens associated with headset presets may be interpreted as an indication that the player is permitted to download the headset presets. In some instances, additional validation information may be required (e.g., user name, identifier, password, etc.). In some instances, headset presets (or information related thereto—e.g., tokens) are exchanged conditionally—e.g., for value. For example, headset presets may be traded among players in exchange for things of value in the online games, such as lives, points, tools, skills, virtual money and/or the like. In some instances, headset presets may even be exchanged for monetary compensation (e.g., pay or credit). Accordingly, sharing the headset presets (or information related thereto) may entail negotiating values for the offered headset preset(s). FIG.6Ais a flowchart illustrating an example process for generating and uploading headset presets in networked gaming headsets. Referring toFIG.6A, there is shown a flow chart600, comprising a plurality of example steps. In starting step602, a headset (e.g., the headset5001) may be powered on and/or set to an initial operating state, whereby the headset may be ready for outputting of audio (e.g., microphone audio) and/or handling of input audio (e.g., game and/or chat audio). In step604, a new headset preset may be generated, or an existing headset preset may be modified (e.g., based on user input or selections). In this regard, modifying existing headset presets may comprise retrieving previously stored copies thereof, for example, from a centralized preset depository (which may be managed via web service, such as the web-based service540). In step606, a token (or similar type of unique identifier) may be assigned to the headset preset (if not previously assigned—e.g., if the headset preset was not an existing one that was simply modified). In some instances, additional information may also be assigned to the headset preset (e.g., for use in validating users accessing them). In step608, the headset preset may be uploaded (in lieu of or in addition to storing it locally) to the centralized preset depository.
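As a rough, self-contained illustration of the upload flow just described and the token-based download flow that follows, the sketch below models the centralized preset depository as a simple in-memory class. The class, its method names, and the token scheme are assumptions made for illustration; an actual depository would be a web-accessed service such as the web-based service540, and the validation step it performs is described next.

```python
# A minimal, hypothetical sketch of the upload/share/download flow: not the
# patent's implementation, and not a real web service API.
import secrets

class PresetDepository:
    def __init__(self):
        self._presets = {}  # token -> headset preset (e.g., a dict of settings)

    def upload(self, preset: dict) -> str:
        """Store a preset and return the unique token that identifies it."""
        token = secrets.token_urlsafe(8)
        self._presets[token] = preset
        return token

    def download(self, token: str) -> dict:
        """Validate the request by token and return the matching preset."""
        if token not in self._presets:
            raise PermissionError("validation failed: unknown token")
        return self._presets[token]

# The first player uploads a preset and receives a token to share.
depository = PresetDepository()
token = depository.upload({"eq": {"2kHz": 4.0}, "voice_morph": "deep"})

# The second player, having obtained the token (peer-to-peer, social service,
# SMS, etc.), downloads the preset and applies it to their own headset.
shared_preset = depository.download(token)
```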
In some instances, the uploading may comprise performing user validation before storing new headset presets or overwriting existing copies thereof—e.g., to validate that the user has permission to access and/or use the centralized preset depository and/or the web-based service managing it. FIG.6Bis a flowchart illustrating an example process for obtaining and using headset presets in networked gaming headsets. Referring toFIG.6B, there is shown a flow chart630, comprising a plurality of example steps. In starting step602, a headset (e.g., the headset5002) may be powered on and/or set to an initial operating state, whereby the headset may be ready for outputting of audio (e.g., microphone audio) and/or handling of input audio (e.g., game and/or chat audio). In step634, the first player may request a headset preset from a centralized preset depository (which may be managed via web service, such as the web-based service540). The request may comprise, for example, providing a ‘token’ associated with the requested headset preset, which a second player—the one to whom the requested headset preset belongs—may have shared with the first player. In this regard, tokens (or information facilitating access to headset presets) may be shared or exchanged directly (using a peer-to-peer connection, such as the connection522), or indirectly—e.g., via a commonly accessed service, such as the web-based social service530. In step636, a request validation may be performed (e.g., in the centralized preset depository, where the headset preset is stored). In an example implementation, the request validation may, at a minimum, comprise validating that the user provided the particular token (or other identifying information) associated with the requested headset preset. Further, the validation may also entail validating the requesting player. In instances where the request validation fails, the process may proceed to an exit state638. In this regard, the exit state638may comprise generating and communicating notifications of rejected attempts to obtain the headset preset (e.g., notifications sent to the requesting player and/or the player to whom the headset preset belongs). Returning to step636, in instances where the request validation is successful, the process may proceed to step640. In step640, the player may download the headset preset from the centralized preset depository (e.g., into the headset used thereby). In step642, the headset used by the player may be configured based on the downloaded headset preset. Various embodiments of the invention may comprise a system and a method for networked gaming headsets. For example, in an audio setup (e.g., combination of headset5001/host5201) comprising at least one audio headset (e.g., the headset5001) which may be configurable to process audio for a first player (e.g., player5101) when participating in an online multiplayer game, a headset preset (e.g., headset preset514) may be configured. In this regard, a headset preset may comprise values for one or more configurable parameter settings relating to operation or functions of the headset500i. A token (or similar identifier) may be assigned to the headset preset. The headset preset may then be uploaded into a central headset preset depository (e.g., the preset DB550) which may be accessible by a plurality of players.
The first player may then share the token, via a network connection, with a second player (e.g., player5102), who may be utilizing a second audio setup (e.g., combination of headset5002/host5202) comprising at least one other audio headset (e.g., the headset5002), which may be configurable to process audio for the second player. Access to the central headset preset depository may be managed via a web-accessed service (e.g., the web-based service540), which may support web-based user interactions for uploading and/or downloading headset presets. The second player may download the headset preset from the central headset preset depository into the other audio headset. The downloaded headset preset may then be utilized in configuring the other audio headset (used by the second player), for processing audio for the second player participating in the online multiplayer game. The downloading of the headset preset by the second player may only be allowed based on validation of the second player. The second player may be validated based on, for example, the token associated with the headset preset. The token may be shared by the first player with the second player using direct peer-to-peer connection, or via a web-accessed social service (e.g., the web-based social service530). The headset preset may be shared with the second player based on a negotiation for compensation for sharing of the headset preset. The compensation may be something of value in an online game, for example. The operation or function of the headset may control the sound of the first player's voice in an online chat. The operation or function of the headset may also be a multi-band equalizer. The headset preset may be associated with a particular video game such that the multi-band equalizer may be configured to enhance particular sounds of said particular game. The present method and/or system may be realized in hardware, software, or a combination of hardware and software. The present methods and/or systems may be realized in a centralized fashion in at least one computing system, or in a distributed fashion where different elements are spread across several interconnected computing systems. Any kind of computing system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software may be a general-purpose computing system with a program or other code that, when being loaded and executed, controls the computing system such that it carries out the methods described herein. Another typical implementation may comprise an application specific integrated circuit or chip. Some implementations may comprise a non-transitory machine-readable (e.g., computer readable) medium (e.g., FLASH drive, optical disk, magnetic storage disk, or the like) having stored thereon one or more lines of code executable by a machine, thereby causing the machine to perform processes as described herein. While the present method and/or system has been described with reference to certain implementations, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the present method and/or system. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present disclosure without departing from its scope. 
Therefore, it is intended that the present method and/or system not be limited to the particular implementations disclosed, but that the present method and/or system will include all implementations falling within the scope of the appended claims.
44,731
11857882
DESCRIPTION OF SPECIFIC EMBODIMENTS OF THE INVENTION The present invention, in some embodiments thereof, relates to altering a display of a computer game and, more specifically, but not exclusively, to altering a display of a computer game to remove matching separable ends of selectable objects in response to user input indicative of correct matches. According to some embodiments of the present invention, there are provided methods, systems and computer program products for altering a display of a computer game, in particular a matching computer game. A plurality of selectable objects, items and/or assets (collectively designated objects) of the computer game, for example, a tile (e.g., domino tile), a card, and/or the like may each have a plurality of separable ends which may be detached and removed from the respective selectable object. Each of the separable ends may be marked with one of a plurality of patterns. Users (players) playing the matching game using client devices (e.g., computers, tablets, Smartphones, etc.) are challenged to match between separable ends having similar and/or identical patterns. When the player indicates a correct match between similar patterns marked on the separable ends of two or more selectable objects, the display of the computer game may be altered to separate the matched separable ends, break them away from their (parent) selectable objects and remove them. Moreover, the selectable objects of the computer game may be stacked in a plurality of stacks each comprising multiple selectable objects stacked one on top of the other such that only the patterns marked on the separable ends of a top most selectable object in each stack may be visible while the top most selectable object conceals the patterns marked on the separable ends of lower layer selectable objects of the stack such that these patterns are invisible to the player. As such, the player may match between visible patterns which are marked on the separable ends of the top most selectable objects in the stacks. However, when the display is altered to remove correctly matched separable ends, which are necessarily those of the top most selectable objects, the next lower layer selectable objects become the top most selectable objects and the patterns marked on their separable ends may be revealed and visible to the player, who may now select the newly revealed separable ends for matching. Altering the display of the computer game to remove correctly matched separable ends may present significant benefits and advantages. Even for computer games which are initially highly attractive to players because they offer substantial fun and/or challenge, the appeal of the computer game may gradually diminish as players become familiar with its features, elements, details and/or the like, which may eventually lead to loss of interest of the players in the computer game. Therefore, altering the display of the computer game to remove correctly matched separable ends may significantly improve the technology of computer games by providing a dynamic game scene which may add challenge to the computer game thus increasing interest, attraction, and/or enthusiasm of the player in the computer game which may also increase player retention.
Furthermore, altering the display of the computer game to remove the correctly matched separable ends may further improve the technology of computer games since the display may be altered to remove the matched separable ends in a very graceful and appealing manner which may significantly improve user experience of the player. Arranging the selectable objects in stacks such that removal of correctly matched separable ends of the top most selectable objects in each stack reveals the patterns marked on the next lower layer selectable objects of the stack may make the computer game even more dynamic and challenging which in turn may further increase interest, attraction, and/or enthusiasm of the player in the computer game. According to some embodiments of the present invention, the display of the computer game may be altered to lock one or more of the separable ends of one or more of the selectable objects, thus prohibiting matching of the locked separable ends and limiting the matching options for the player. To this end, the display may be altered to associate one or more selected separable ends of one or more selected selectable objects of the computer game with a lock mark which prevents the user from selecting these separable ends for matching with separable ends of other selectable objects marked with similar and/or identical patterns. Moreover, the display may be periodically altered, for example, every certain time period, every certain number of moves (turns) of the player and/or the like, to reselect separable ends of the same and/or other selectable objects that are associated with lock marks and thus prohibited from matching. The lock marks associated with the locked separable ends may be transparent such that the pattern of the locked separable ends may be visible to the player. However, the lock marks may be configured to conceal the pattern of the locked separable ends such that they are not visible to the player. Altering the display of the computer game to lock one or more of the separable ends and thus prevent the user from matching them may significantly improve the technology of computer games by providing a dynamic lock feature which may significantly increase the challenge offered by the computer game thus increasing interest, attraction, and/or enthusiasm of the player in the computer game which may also increase player retention. According to some embodiments of the present invention, the display of the computer game may be altered to present a prize pattern, for example, a puzzle, and/or the like which may be associated with and indicative of one or more prizes, for example, an extra selectable object, a game token, a coin, a skill, a key to a next level, and/or the like. The prize pattern may be indicative of a plurality of prize pieces which when populated in the prize pattern may jointly complete the pattern. The plurality of prize pieces may be initially distributed over at least some of the selectable objects according to a distribution computed according to one or more parameters, attributes, and/or conditions, for example, user attributes, game attributes, and/or the like, such that each of the at least some of the selectable objects is associated with a respective prize piece. In response to a correct match made by the user between one or more of the separable ends of one of the selectable objects associated with a respective prize piece, the respective prize piece may be released and populated in the prize pattern.
After all prize pieces are released, due to correct matching of their associated selectable objects, and populated in the prize pattern thus completing it, the prize(s) associated with the prize pattern may be allocated (awarded) to the user. Altering the display of the computer game to present one or more prize patterns and corresponding prize pieces which may be collected by the user to complete the pattern and win the associated prize(s) may significantly improve technology of computer games in terms of user experience by providing additional appeal, incentive and/or motivation for players (users) to engage with the computer game thus further increasing interest, attraction, and/or enthusiasm of the player (user) with the computer game which may also increase player retention. Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the Examples. The invention is capable of other embodiments or of being practiced or carried out in various ways. As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Any combination of one or more computer readable medium(s) may be utilized. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. 
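Returning to the lock-mark and prize-pattern mechanisms described above, the following sketch illustrates one plausible way periodic locking of separable ends and release of prize pieces on correct matches could be implemented. The data layout, function names, and random lock selection are assumptions made for illustration and are not taken from the disclosure.

```python
# Hypothetical sketch of the lock-mark and prize-piece mechanisms; not the
# patent's implementation.
import random

def relock_ends(objects, num_locks):
    """Periodically re-select which separable ends carry a lock mark."""
    all_ends = [end for obj in objects for end in obj["ends"]]
    for end in all_ends:
        end["locked"] = False
    for end in random.sample(all_ends, min(num_locks, len(all_ends))):
        end["locked"] = True  # locked ends are prohibited from matching

def release_prize_piece(obj, prize_pattern):
    """On a correct match involving obj, move its prize piece into the prize pattern."""
    piece = obj.pop("prize_piece", None)
    if piece is not None:
        prize_pattern["collected"].add(piece)
    # Once every piece is collected the pattern is complete and the prize is awarded.
    return prize_pattern["collected"] == prize_pattern["pieces"]

# Example: two tiles, one carrying prize piece "corner", in a two-piece puzzle.
tiles = [
    {"ends": [{"pattern": 3, "locked": False}, {"pattern": 5, "locked": False}],
     "prize_piece": "corner"},
    {"ends": [{"pattern": 3, "locked": False}, {"pattern": 1, "locked": False}]},
]
puzzle = {"pieces": {"corner", "center"}, "collected": set()}
relock_ends(tiles, num_locks=1)
complete = release_prize_piece(tiles[0], puzzle)  # False until both pieces are collected
```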
Computer program code comprising computer readable program instructions embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wire line, optical fiber cable, RF, etc., or any suitable combination of the foregoing. The computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. The computer readable program instructions for carrying out operations of the present invention may be written in any combination of one or more programming languages, such as, for example, assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. 
In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. Referring now to the drawings,FIG.1is a flowchart of an exemplary process of altering a display of a computer game to remove matching separable ends of selectable objects, according to some embodiments of the present invention. An exemplary process100may be executed to alter a display of a computer game displayed by a client device which displays, to a user associated with the client device, a plurality of selectable objects (items), for example, a tile (e.g., domino tile), a card, and/or the like each having a plurality of separable ends which may be detached and removed from the respective selectable object. Each of the separable ends of each of the selectable objects may be marked with one of a plurality of patterns such that the user may interact with the client device to indicate matches between separable ends of different selectable objects having similar patterns. In response to user input indicating one or more such matches, the display may be altered to remove the matched separable ends in case the match is correct. In particular, the display may be altered to break away the matched separable ends from their respective selectable objects. Reference is also made toFIG.2, which is a schematic illustration of an exemplary client device configured for altering a display of a computer game, according to some embodiments of the present invention. One or more exemplary client devices200, for example, a server, a desktop computer, a laptop computer, a Smartphone, a tablet, a proprietary client device and/or the like may be used by one or more associated users202to play one or more computer games. Each client device200may comprise a user interface210for interacting with the associated user202, a processor(s)212, and a storage214for storing data and/or code (program store). The user interface210may include one or more Human-Machine Interfaces (HMI) for interacting with the user202, for example, a keyboard, a pointing device (e.g., a mouse, a touchpad, a trackball, etc.), a screen, a touchscreen, a digital pen, a speaker, an earphone, a microphone and/or the like. The user may therefore operate one or more of the HMI interface of the user interface210to interact with the client device200, for example, play one or more of the computer games. The processor(s)212, homogenous or heterogeneous, may include one or more processing nodes and/or cores arranged for parallel processing, as clusters and/or as one or more multi core processor(s). 
The storage214may include one or more non-transitory persistent storage devices, for example, a Read Only Memory (ROM), a Flash array, a Solid State Drive (SSD), a hard drive (HDD) and/or the like. The storage214may also include one or more volatile devices, for example, a Random Access Memory (RAM) component, a cache and/or the like. The processor(s)212may execute one or more software modules, for example, a process, a script, an application, an agent, a utility, a tool, an Operating System (OS), a service, a plug-in, an add-on and/or the like each comprising a plurality of program instructions stored in a non-transitory medium (program store) such as the storage214and executed by one or more processors such as the processor(s)212. Optionally, the processor(s)212includes, utilizes and/or applies one or more hardware elements available in the client device200, for example, a circuit, a component, an Integrated Circuit (IC), a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), a Digital Signals Processor (DSP), a Graphic Processing Unit (GPU), and/or the like. The processor(s)212may therefore execute one or more functional modules utilized by one or more software modules, one or more of the hardware elements and/or a combination thereof. For example, the processor(s)212may execute a game engine220configured to execute the process100and/or part thereof for altering a display of a computer game to remove separable ends of selectable objects in response to correct match indications received from the user202. Optionally, one or more of the client devices200may further include a network interface216comprising one or more network adapters for connecting to a network204comprising one or more wired and/or wireless networks, for example, a Local Area Network (LAN), a Wireless LAN (WLAN, e.g. Wi-Fi), a Wide Area Network (WAN), a Metropolitan Area Network (MAN), a cellular network, the internet and/or the like. Via the network interface216, the client device200may connect to the network204and communicate with one or more remote network resources, such as, for example, one or more game servers, computing nodes, clusters of computing nodes, platforms, systems, services, and/or the like collectively designated game server206that is configured to provide gaming services to one or more of the client devices200.
Optionally, the game server206may utilize one or more cloud computing services, platforms and/or infrastructures such as, for example, Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS) and/or the like provided by one or more vendors, for example, Google Cloud, Microsoft Azure, Amazon Web Services (AWS) and Elastic Compute Cloud (EC2), IBM Cloud, and/or the like. The game engine220executed by each of the client devices200may therefore be configured to execute the process100and/or part thereof depending on the deployment, architecture and/or implementations of the games played by the users202using their client devices200, specifically in terms of execution of the game at the client device200and/or at the remote game server206. For example, the game engine220locally executed by the client device200may execute one or more stand-alone games such that the game logic, engine, and/or the like as well as the graphical and/or user interfaces are all controlled by the local game engine220. The local game engine220may optionally communicate with one or more remote network resources, for example, the game server206to receive, retrieve, collect and/or otherwise obtain supplemental data relating to the game(s) which may be used to enrich the game but is not essential to the execution of the game. In another example, the client device200may communicate with the game server206which may execute the core, logic, engine, and/or the like of one or more games, for example, a web game, and/or the like. While the game engine itself may be executed remotely by the game server206, the game server206may instruct the client device200to control its graphical and/or user interfaces accordingly. In such implementations, the game engine220executed locally by the client device200, for example, a web browser, a mobile application, and/or the like may serve only as a local agent adapted to support interaction with the user202playing the game(s) (e.g., display, play sound, receive input, etc.) while the game itself is remotely executed by the game server206. In another example, execution of one or more games may be distributed between the client device200and the remote game server206such that each may execute part of the game(s). The graphical and/or user interfaces of the client device200, however, may be naturally controlled by the local game engine220. For brevity, regardless of the exact deployment, architecture and/or implementation of the games, the game engine220is described hereinafter as controlling the entire game, including its engine, logic, and plot as well as the graphical and/or user interfaces. This, however, should not be construed as limiting since, as may become apparent to a person skilled in the art, the previously described deployment embodiments as well as other deployments may be applied to serve the game to the users202. Moreover, while the process100is described for a single game played by a single user202using a respective client device, it should not be construed as limiting since the same process may be expanded and scaled for a plurality of users202using a plurality of client devices200to play a plurality of games. As shown at102, the process100starts with the game engine220presenting a game to a user202via the user interface210of the client device200. In particular, the game engine220may display objects, items and/or assets of the game by instructing, operating, altering and/or otherwise controlling one or more screens of the client device200.
Moreover, the user202may play the game by interacting with the game engine220via one or more input interfaces supported by the user interface210of the client device200. For example, the user202may operate a pointing device, a keyboard and/or the like of the client device200to select (for example, click, point, choose, adjust, write, and/or the like) one or more of the game's objects, items and/or assets which are thus collectively designated selectable objects. The game may be a matching game, for example, a domino game, a memory game, and/or the like in which each of the plurality of selectable objects has a plurality of ends, for example, 2 ends, 3 ends, 4 ends, and/or the like, each marked with one of a plurality of patterns, where the user202has to match between object ends marked with similar and/or identical patterns. In particular, each of the plurality of selectable objects may initially have a plurality of separable ends, i.e., sections, segments and/or parts of the object which may be separated, broken away and removed from the object. Reference is now made toFIG.3AandFIG.3B, which are schematic illustrations of exemplary selectable objects of a computer game having multiple separable ends, according to some embodiments of the present invention. As seen inFIG.3A, a plurality of exemplary selectable objects300of a game may comprise a plurality of separable ends302each marked with one of a plurality of patterns. The number of separable ends302and the number, type and/or style of the patterns marked on the separable ends302may be defined, selected and/or set according to one or more goals, objectives, parameters, levels and/or the like of the game. The patterns may be marked on the selectable objects using one or more methods, techniques and/or technologies, for example, painted, printed, imprinted, engraved, embossed, debossed and/or the like. Moreover, the patterns may also include blank spaces, i.e., no marking on one or more of the separable ends302of one or more of the selectable objects300. Optionally, the patterns may include one or more wild card patterns which may be matched with a plurality of other patterns, and possibly all other patterns. For example, blank separable ends302may be used as wild cards, meaning they may be matched with a plurality of other patterns and optionally with any of the patterns. In another example, the separable ends302of one or more selectable objects300may be marked with another wild card pattern, for example, a joker, which may serve as a wild card and be matched with a plurality of other patterns, optionally with any other pattern. For example, an exemplary selectable object300A may be configured as a tile, for example, a domino tile having two separable ends each marked with one of a plurality of patterns, for example, 6 patterns illustrating one dot, two dots, three dots, four dots, five dots, and six dots. For example, a first separable end302A1of the selectable object300A may be marked with a three dots pattern and a second separable end302A2of the selectable object300A may be marked with a five dots pattern. In another example, an exemplary selectable object300B may be configured as a cross having four separable ends each marked with one of a plurality of patterns, for example, geometric shapes.
For example, a first separable end302B1of the selectable object300B may be marked with a hexagon shape, a second separable end302B2of the selectable object300B may be marked with a circle shape, a third separable end302B3of the selectable object300B may be marked with a triangle shape, and a fourth separable end302B4of the selectable object300B may be marked with a square shape. In another example, an exemplary selectable object300C may be configured to have three separable ends each marked with one of a plurality of patterns, for example, numbers. For example, a first separable end302C1of the selectable object300C may be marked with the number 3, a second separable end302C2of the selectable object300C may be marked with the number 11, and a third separable end302C3of the selectable object300C may be marked with the number 8. In another example, an exemplary selectable object300D may be configured to have six separable ends each marked with one of a plurality of patterns, for example, combinations of one or more geometric shapes. For example, a first separable end302D1of the selectable object300D may be marked with two circles, a second separable end302D2of the selectable object300D may be marked with four peripheral squares and a circle in the middle, a third separable end302D3of the selectable object300D may be marked with the two triangles and an ellipse, a fourth separable end302D4of the selectable object300D may be marked with a pentagon, a fifth separable end302D5of the selectable object300D may be marked with three hexagons, and a sixth separable end302D6of the selectable object300D may be marked with a circle and a triangle. As seen inFIG.3B, a display of the game, specifically a display (image) of the selectable objects300may be altered such that each of the separable ends302of each of the exemplary selectable objects300may be separated, i.e., broken away and removed from its selectable object300. For example, the separable end302A2may be separated from the selectable object300A. In another example, the separable end302B1may be separated from the selectable object300B. In another example, the separable end302C2may be separated from the selectable object300C. In another example, the separable end302D6may be separated from the selectable object300D. Moreover, the display (image) of the selectable objects300may be altered to display a separation line, a break line, and/or the like marking the separation of one or more of the separated and removed separable ends302. For example, as seen inFIG.3Bfor selectable objects300A,300B and300D, a zigzag line, a crooked line, a jagged line, and/or the like may be marked to indicate that separable ends302were separated and removed from the selectable objects300. Reference is made once again toFIG.1. As shown at104, the game engine220may receive user input, from the user operating the user interface210, comprising a match indication of one or more matches between separable ends302of two or more of the plurality of selectable objects300marked with similar and/or identical patterns. Specifically, the match indication received from the user202may indicate of one or more matches made by the user202playing the game between the separable ends302of the two or more of the plurality of selectable objects300which the user202determines, estimates, guesses, thinks and/or believes are marked with common patterns of the plurality of patterns employed in the game. 
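As a rough illustration of how a match indication of this kind could be checked against the patterns marked on the indicated separable ends, including the wild card behaviour described above, the following sketch uses an assumed dictionary representation of a separable end. The names and data layout are hypothetical and are not taken from the disclosure; the actual analysis performed by the game engine220is described next.

```python
# A minimal sketch (assumed representation, not the patent's implementation) of
# verifying a match indication between two separable ends, including wild cards.
WILD = "wild"  # e.g., a blank end or a joker marking

def ends_match(pattern_a, pattern_b) -> bool:
    """Return True if the two indicated separable ends carry matching patterns."""
    if WILD in (pattern_a, pattern_b):
        return True  # wild cards may match any other pattern
    return pattern_a == pattern_b

def remove_if_correct(end_a: dict, end_b: dict) -> bool:
    """If the user's match indication is correct, mark both ends as removed."""
    if not ends_match(end_a["pattern"], end_b["pattern"]):
        return False  # incorrect match: leave the display unchanged
    end_a["removed"] = end_b["removed"] = True  # display altered to break the ends away
    return True

# Example: a three-dot end matched against another three-dot end succeeds.
first = {"pattern": "three_dots", "removed": False}
second = {"pattern": "three_dots", "removed": False}
assert remove_if_correct(first, second)
```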
As shown at106, the game engine220may analyze the match indication extracted from the user input to check whether the match indicated by the user202is correct or not. Specifically, the game engine220may identify a state of the plurality of selectable objects300of the computer game displayed by the client device200and may analyze the patterns marked on the separable ends302of the selectable objects300indicated by the user202as matching to determine whether the patterns marked on the indicated separable ends302are indeed similar and/or identical or not. As shown at108, which is a conditional step, in case the game engine220determines that the match indication made by the user202is correct, the process100may branch to110. In case the game engine220determines that the match indication made by the user202is incorrect, the process100may end or optionally return to step104to receive additional user input from the user202. As shown at110, since the match indication made by the user202is correct, the game engine220may alter the display of the game, specifically the display of the selectable objects300indicated by the user202to have matching separable ends302, by separating, breaking away and removing the matched separable ends302. Following the display alteration, the game engine220may optionally branch back to step104to receive additional user input from the user202. Reference is now made toFIG.4A,FIG.4BandFIG.4C, which are schematic illustrations of an exemplary display of a computer game altered according to an exemplary match between separable ends of selectable objects, according to some embodiments of the present invention. As seen inFIG.4A, a game engine such as the game engine220may display a plurality of selectable objects such as the selectable objects300of a game, for example, a domino-like game played by a user such as the user202as described in step102of the process100. The displayed selectable objects300, for example, domino tiles may each have multiple separable ends such as the separable ends302, for example, two separable ends302each marked with one of a plurality of patterns, for example, domino dot patterns. For example, the game engine220may display a first selectable object300A1and a second selectable object300A2. The first selectable object300A1may have two separable ends, a first separable end302A11marked with a three dots pattern and a second separable end302A12marked with a five dots pattern. The second selectable object300A2may also have two separable ends, a first separable end302A21marked with a four dots pattern and a second separable end302A22marked with a three dots pattern. Assume the game engine220receives user input from the user202, as described in step104of the process100, which comprises a match indication of a match between the separable end302A11of the selectable object300A1and the separable end302A22of the selectable object300A2. In such case, the game engine220may analyze the match indication and determine that the match is correct, as described in steps106and108of the process100, since both separable end302A11and separable end302A22are marked with the same pattern, namely the three dots pattern. As seen inFIG.4B, since the match is correct, the game engine220may alter the display (image) of the selectable objects300A1and300A2to separate and break away the matched separable ends302A11and302A22.
The game engine220may further alter the display of the game, specifically the display of the selectable objects300A1and300A2, to remove the matched separable ends302A11and302A22as seen inFIG.4Csuch that the selectable object300A1remains with only separable end302A12and the selectable object300A2remains with only separable end302A21. According to some embodiments of the present invention, the plurality of selectable objects300are arranged in a plurality of stacks such that each of the stacks may initially comprise multiple selectable objects300, specifically two or more selectable objects300which are stacked (layered) one on top of the other. As they are stacked (layered) on each other, a top most selectable object300in a stack may conceal the selectable object(s)300of lower layer(s) of the stack. In particular, each separable end302of a higher layer of the stack may conceal a corresponding separable end302of all lower layer(s) of the stack. Corresponding separable ends302refer to separable ends302of two or more different selectable objects300which are stacked one on top of the other such that the position, location, and/or orientation of the corresponding separable ends302with respect to their (parent) selectable objects300is the same. Moreover, during the game, after the game engine220alters the display of a top most selectable object300in a stack to remove a matched separable end302, the corresponding separable end302of the selectable object300in the next lower layer is revealed such that the pattern marked on the corresponding lower layer separable end302becomes visible to the user202. Revealing patterns marked on separable ends302of lower layers' selectable objects300may enable additional match options for the user202to advance in the game. Reference is now made toFIG.5, which is a schematic illustration of stacked selectable objects of a computer game, according to some embodiments of the present invention. A game engine such as the game engine220may display a plurality of selectable objects such as the selectable objects300arranged in a plurality of stacks500each initially comprising multiple selectable objects300, for example, three selectable objects300stacked one on top of the other. For example, a stack500A may comprise a top layer selectable object300A1, a middle layer selectable object300A2and a bottom layer selectable object300A3, a stack500B may comprise a top layer selectable object300B1, a middle layer selectable object300B2and a bottom layer selectable object300B3, a stack500C may comprise a top layer selectable object300C1, a middle layer selectable object300C2and a bottom layer selectable object300C3, and a stack500D may comprise a top layer selectable object300D1, a middle layer selectable object300D2and a bottom layer selectable object300D3. Since the selectable objects300are stacked (layered) one on top of the other, only the patterns marked on separable ends such as the separable ends302of the selectable object300at the top most layer may be visible while the patterns marked on the separable ends302of selectable objects300at lower layers of the stack(s) may be invisible. For example, only the patterns marked on separable ends302of the top most selectable objects300of each of the stacks500A,500B,500C and500D may be visible.
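A minimal sketch of this layering behaviour follows: each stack is held as a list of selectable objects with the top most layer last, a pattern is visible at a given end position only for the highest not-yet-removed end at that position, and removing that end reveals the corresponding end of the next lower layer. The list-of-dictionaries representation and function names are assumptions made for illustration rather than the patent's implementation.

```python
# Sketch (assumed data layout) of stacked selectable objects and per-end reveal.
def visible_pattern(stack, end_index):
    """Return the pattern currently visible to the player at one end position."""
    for obj in reversed(stack):            # scan from the top most layer downward
        end = obj["ends"][end_index]
        if not end["removed"]:
            return end["pattern"]
    return None                            # this position is fully cleared

def remove_matched_end(stack, end_index):
    """Remove the visible (top most not-removed) end at the given position,
    revealing the corresponding end of the next lower layer."""
    for obj in reversed(stack):
        end = obj["ends"][end_index]
        if not end["removed"]:
            end["removed"] = True
            return

# Example: a two-layer stack of two-ended tiles (bottom layer listed first).
stack_a = [
    {"ends": [{"pattern": "six_dots", "removed": False},
              {"pattern": "two_dots", "removed": False}]},
    {"ends": [{"pattern": "three_dots", "removed": False},
              {"pattern": "five_dots", "removed": False}]},
]
remove_matched_end(stack_a, 0)          # the matched three-dot end is removed
print(visible_pattern(stack_a, 0))      # 'six_dots' - the lower layer is now revealed
```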
Reference is also made toFIG.6A,FIG.6BandFIG.6C, which are schematic illustrations of an exemplary display of a computer game altered according to an exemplary match between separable ends of stacked selectable objects, according to some embodiments of the present invention. As seen inFIG.6A, the top layer selectable object300A1of the stack500A may have two separable ends302, a first separable end302A11marked with a three dots pattern and a second separable end302A12marked with a five dots pattern. The top layer selectable object300C1of the stack500C may also have two separable ends302, a first separable end302C11marked with a four dots pattern and a second separable end302C12marked with a three dots pattern. Assume that during the game, the game engine220receives user input indicating a match between the separable end302A11and the separable end302C12both marked with the three dots pattern. In such case, after determining that the match indication is correct and the patterns of the separable ends302A11and302C12are identical, the game engine220may alter the display of the selectable object300A1and the selectable object300C1, as seen inFIG.6B, to break away the matched separable ends302A11and302C12. As seen inFIG.6C, after the game engine220alters the display of the selectable object300A1and the selectable object300C1to remove the matched separable ends302A11and302C12, the patterns marked on the separable ends302of the next lower layer of the stacks500A and500C are revealed. For example, a six dots pattern marked on a separable end302A21of the selectable object300A2which is the next lower layer of the stack500A is revealed. In another example, a one dot pattern marked on a separable end302C22of the selectable object300C2which is the next lower layer of the stack500C is revealed. When a stack500becomes empty, i.e., all separable ends302of all its selectable objects300are matched and removed, the game engine220may alter the display of the computer game and the selectable objects300to remove the empty stack. This means that following a successful match of a final (last) separable end302of a bottom most selectable object300of one or more of the stacks500, the game engine220may alter the display to remove the matched last separable end302and thus remove the empty stack from the display. It should be noted that in the case of two ended tiles such as, for example, the selectable objects300A, when the game engine alters the display to remove the final separable end302of a bottom most selectable object300of a stack500, the stack500may in practice also be removed from the display. However, there may be cases, for example, for games using selectable objects such as, for example, the selectable objects300B,300C and/or300D, in which at least part of the bottom most selectable object300may still remain after altering the display to remove the final separable end302of a bottom most selectable object300of a stack500. In such case, the game engine220may further alter the display to remove the remaining part(s) of the bottom most selectable object300of an empty stack500. Optionally, the game engine220may display each of the plurality of stacks500in association with a respective tray, for example, each stack500may be fixed on a respective tray. 
As such, following a match of a final (last) separable end302of a bottom most selectable object300of one or more of the stacks500, such that the respective stack500is now empty, the game engine220may alter the display to display an empty tray and may further alter the display to remove the empty tray. Reference is now made toFIG.7A,FIG.7B,FIG.7CandFIG.7D, which are schematic illustrations of an exemplary display of a computer game altered to remove empty stacks of selectable objects, according to some embodiments of the present invention. FIG.7Aillustrates a display of an exemplary initial state of a plurality of selectable objects such as the selectable objects300of a matching game displayed by a client device such as the client device200. Each of the selectable objects300has a plurality of separable ends such as the separable ends302marked with patterns which may be matched to patterns marked on other separable ends302of other selectable objects300. As seen, the selectable objects300are stacked in a plurality of stacks such as the stacks500, for example, stacks500A,500B,500C and500D such that only the patterns marked on the selectable objects300at the top most layer of each stack500may be visible and available for matching (matchable). FIG.7Billustrates a display of an exemplary later state of the selectable objects300at a later stage in the game after a plurality of successful matches were verified by a game engine such as the game engine220. Following each successful match, the game engine220may alter the display to remove the matched separable ends302. As seen, while only a single successful match was made for the stack500D, the separable ends302of all selectable objects300of the stack500B were separated, broken away and removed, meaning that the stack500B is empty. The game engine220may therefore alter the display to remove the empty stack500B. As seen inFIG.7C, each of the stacks500may be associated with a respective tray700, for example, fixed, overlaid, contained, and/or the like. For example, the stack500A may be fixed on a tray700A, the stack500B may be fixed on a tray700B, the stack500C may be fixed on a tray700C, and the stack500D may be fixed on a tray700D. As seen inFIG.7D, after the game engine220alters the display to remove the final separable end302of the bottom most selectable object300of the stack500B, the stack500B is empty such that there are no selectable objects300in the tray700B. In such case, the game engine220may further alter the display to remove the empty tray700B. According to some embodiments of the present invention, the game engine220may alter the display of the game to present one or more new stacks500of selectable objects300. Specifically, the game engine220may alter the display of the game such that the new stack(s)500appear following the removal of one or more empty stacks500. The game engine220may alter the display to display one or more new stacks500in the same location in which one or more empty stacks500were located before the display was altered to remove them. Alternatively, the game engine220may alter the display to display one or more new stacks500in one or more new locations. Moreover, the game engine220may alter the display to add one or more new stacks500such that the new stack(s)500appear to form a new level in the computer game. 
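One way to realize the removal of empty stacks (and their trays) together with the introduction of new stacks, as just described, is sketched below. The board representation (a list of stacks, each a list of layers of end dictionaries) and the make_new_stack callback are assumptions made for illustration; they are not the disclosed implementation.

def stack_is_empty(stack):
    # A stack is empty once every separable end of every selectable object
    # in it has been matched and removed.
    return all(end["removed"] for obj in stack for end in obj)

def refresh_board(stacks, make_new_stack):
    # Remove empty stacks (and, by implication, their trays) and append one
    # new stack per removed stack, so the remaining stacks shift over and a
    # new portion of the game appears in the cleared area.
    remaining = [stack for stack in stacks if not stack_is_empty(stack)]
    removed_count = len(stacks) - len(remaining)
    for _ in range(removed_count):
        remaining.append(make_new_stack())
    return remaining

# Example: the left-most stack has been fully cleared and is replaced.
empty = [[{"pattern": 3, "removed": True}, {"pattern": 5, "removed": True}]]
full = [[{"pattern": 1, "removed": False}, {"pattern": 2, "removed": False}]]
board = [empty, full]
board = refresh_board(board, lambda: [[{"pattern": 4, "removed": False},
                                       {"pattern": 6, "removed": False}]])
assert len(board) == 2 and not stack_is_empty(board[0])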
For example, the game engine220may alter the display to shift the (images) stacks500displayed by the client device to one or more directions, for example, left-right, up-down, and/or a combination thereof, and add one or more new stacks500in areas of the display (screen) which are now clear (empty) thus delivering and/or inducing a sense and feeling of movement and advancement through levels of the computer game. Reference is now made toFIG.8AandFIG.8B, which are schematic illustrations of an exemplary display of a computer game altered to reveal new stacks of selectable objects following removal of empty stacks, according to some embodiments of the present invention. FIG.8Aillustrates a display of an exemplary first (earlier) state of a plurality of selectable objects such as the selectable objects300of a matching game displayed by a client device such as the client device200. The selectable objects300may be arranged in a plurality of stacks such as the stacks500, for example, stacks500A,500B,500C,500D,500E and500F each comprising multiple selectable objects300, for example, three. Each of the selectable objects300has two separable ends such as the separable ends302marked with matchable patterns. FIG.8Billustrates a display of an exemplary second (later) state of the plurality of selectable objects300displayed by the client device200after multiple successful matches were made between separable ends302of some of the selectable objects300and a game engine such as the game engine220may alter the display to remove the matched separable ends302. As described herein before, the game engine220may further alter the display to remove empty stacks500in which all the selectable objects300were correctly matched and their separable ends302were removed. The game engine220may also alter the display to show one or more new stacks500. For example, the game engine220may alter the display of the computer game to show one or more new stacks500after removing one or more empty stacks500. Moreover, the game engine220may alter the display such that the new stack(s)500may appear to form new levels in the computer game. For example, as seen inFIG.8AandFIG.8B, between the earlier state and the later state the left most stack500A becomes empty, and the game engine220may therefore alter the display to remove it. As seen inFIG.8B, the game engine220may further alter the display to shift the remaining stacks500B,500C,500D,500E and500F to the left and add (introduce) a new stack500G in the right most location of the screen thus inducing a sense of movement and/or advancement to the right to unveil and/or enter a new level of the computer game. According to some embodiments of the present invention the game engine220may alter the display of the computer game, specifically the display (image) of one or more of the plurality of selectable objects300to lock one or more of their separable ends302thus prohibiting the locked separable ends302for matching and limiting the match options for the user202playing the matching game. Reference is now made toFIG.9, which is a flowchart of an exemplary process of altering a display of a computer game to limit match options of selectable objects having separable ends, according to some embodiments of the present invention. 
An exemplary process900may be executed by a game engine such as the game engine220to alter a display of a computer game, in particular a matching game, displayed by a client device such as the client device200to a user such as the user202using the client device200to play the game, to limit matching options for the user202. The matching game may challenge the user202to match between patterns marked on separable ends such as the separable ends302of a plurality of selectable objects (items) of the computer game, for example, a domino tile, a card, and/or the like each having a plurality of separable ends302which may be detached and removed from their (parent) respective selectable objects300. As shown at902, the process900starts with the game engine220presenting a game to a user202via a user interface such as the user interface210of the client device200. In particular, the game engine220may display objects, items and/or assets of the game by instructing, operating, altering and/or otherwise controlling one or more screens of the client device200. Moreover, the user202may play the game by interacting with the game engine220via one or more input interfaces supported by the user interface210of the client device200. For example, the user202may operate a pointing device, a keyboard and/or the like of the client device200to select, for example, click, point, choose, adjust, write, and/or the like one or more of the game's objects, items and/or assets which are thus collectively designated selectable objects. The game may be a matching game, for example, a domino game, a memory game, and/or the like in which each of the plurality of selectable objects has a plurality of ends, for example, 2 ends, 3 ends, 4, ends, and/or the like each marked with one of a plurality of patterns where the user202has to match between object ends marked with similar and/or identical patterns. As shown at904, the game engine220may select one or more of the plurality of selectable objects300displayed by the client device200. The game engine220may apply one or more methods for selecting the selectable objects300. For example, the game engine220may select one or more of the selectable objects300arbitrarily and/or randomly, for example, using one or more random number generators which may be operated to generate one or more random numbers that may be mapped to identifiers of one or more of the selectable objects300. In another example, the game engine220may select one or more of the selectable objects300according to one or more predefined selection patterns and/or methodologies, for example, a predefined location, a predefined order, and/or the like. As shown at906, the game engine220may select one or more of the separable ends302of the selected selectable object(s)300. As described in step904, the game engine220may apply one or more methods for selecting the separable end(s)302, for example, arbitrary selection, random selection, using predefined selection patterns and/or the like. As shown at908, the game engine220may alter the display of the selected selectable object(s)300by associating a lock mark with the selected separable end(s)302of the selected selectable object(s)300. The lock mark may indicate to the user202that the associated separable end(s)302is prohibited for matching with the separable end(s)302of one or more of the other selectable objects300displayed to the user202on the screen of the client device200. 
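Steps 904 through 908 can be illustrated with a few lines of code. The random selection used below is only one of the selection methods mentioned (a predefined location or order could be used instead), and the locked field added to the end dictionaries, like the other names in this sketch, is a hypothetical assumption introduced purely for illustration.

import random

def lock_random_end(selectable_objects, rng=random):
    # Step 904: choose a selectable object; step 906: choose one of its ends;
    # step 908: associate a lock mark with it so it is prohibited for matching.
    # Here both choices are made at random over all ends still present and unlocked.
    candidates = [end
                  for obj in selectable_objects
                  for end in obj
                  if not end["removed"] and not end.get("locked", False)]
    if not candidates:
        return None
    chosen = rng.choice(candidates)
    chosen["locked"] = True   # the display is then altered to show the lock mark
    return chosen

# Example: lock one end among two displayed two-ended tiles.
tiles = [
    [{"pattern": 3, "removed": False}, {"pattern": 5, "removed": False}],
    [{"pattern": 4, "removed": False}, {"pattern": 3, "removed": False}],
]
locked_end = lock_random_end(tiles)
assert locked_end["locked"] is True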
The game engine220may apply one or more techniques and/or visualization modes for configuring, shaping, and/or illustrating the lock marks. For example, the game engine220may alter the display to add one or more lock marks covering the selected separable end(s)302. In another example, the game engine220may alter the display to add one or more lock marks attached to one or more of the selected separable end(s)302. Moreover, the game engine220may alter the display such that the lock mark(s) associated with one or more of the selected separable end(s)302does not conceal the pattern marked on the respective separable end302of the respective selectable object300thus making the pattern visible to the user202. However, the game engine220may optionally alter the display such that the lock mark(s) associated with the selected separable end(s)302conceals the pattern marked on the respective separable end302of the respective selectable object300thus making the pattern invisible to the user202. Reference is now made toFIG.10, which is a schematic illustration of an exemplary display of a computer game showing exemplary lock marks in association with separable ends of selectable objects of a computer game, according to some embodiments of the present invention. As seen in1002, an exemplary selectable object300A such as the selectable objects300of a computer game, for example, a domino tile of a domino based matching game, may be displayed by a client device such as the client device200to a user such as the user202using the client device200to play the matching game. As described herein before, the selectable object300A may have a plurality of separable ends such as the separable ends302, for example, two separable ends302A1and302A2marked with three dots and five dots patterns respectively. A game engine such as the game engine220may alter the display of the computer game, specifically the display of the selectable object300A to associate a lock mark with one or more of its separable ends302A1and302A2thus prohibiting the user202from matching the locked separable ends302A1and/or302A2. As seen in1004, an exemplary lock mark1000A may be displayed in association with the separable end302A2in solid mode such that the pattern marked on the separable end302A2is invisible. As seen in1006, an exemplary lock mark1000B may be displayed in association with the separable end302A2in at least partially transparent mode such that the pattern marked on the separable end302A2is visible through the lock mark1000B. As seen in1006, an exemplary lock mark1000C may be displayed in association with the separable end302A2such that it does not cover the pattern marked on the separable end302A2and the pattern is visible. Reference is made once again toFIG.9. As shown at910, the process900may branch back to repeat steps904-908such that the game engine220may repeat selection of one or more of the selectable objects300of the computer game and their separable ends302and alter the display of the selected selectable object(s)300accordingly by associating a lock mark with the selected separable end(s)302. The game engine220may apply one or more modes, methods and/or techniques for repeating selection of one or more separable ends302of one or more selectable objects300which are associated with lock marks and thus prohibited for matching. 
For example, the game engine220may periodically select one or more separable ends302of one or more of the plurality of selectable objects300and alter the display accordingly to associate the selected separable end(s)302with a lock mark. For example, the game engine220may select one or more separable ends302of one or more of the plurality of selectable objects300and alter the display accordingly every predefined and/or randomly selected time period, for example, every minute, every two minutes, every five minutes, and/or the like. In another example, the game engine220may select one or more separable ends302of one or more of the plurality of selectable objects300and alter the display accordingly every predefined and/or randomly selected number of moves (turns) of the user202playing the computer game, for example, every move, every two moves, every five moves, and/or the like. The game engine220may apply one or more selection rules, methods and/or modes for reselecting selectable object(s)300and their separable end(s)302to associate them with lock mark(s) thus prohibiting them for matching by the user202and alter the display accordingly. For example, the game engine220may select one or more separable ends302which are not currently locked, i.e., not associated with a lock mark. In another example, the game engine220may select one or more separable ends302among all of the selectable objects300regardless of whether they are currently locked or not. In another example, the game engine220may select one or more separable ends302of one or more selectable objects300which have other separable end(s)302that are currently locked. In other words, the game engine220may switch between separable ends302of the same selectable object(s)300and associate another one or more separable ends302of the same selectable object(s)300with the lock mark. Reference is now made toFIG.11, which is a schematic illustration of an exemplary display of a computer game altered to periodically move exemplary lock marks between separable ends of selectable objects of a computer game, according to some embodiments of the present invention. As seen in1102, an exemplary selectable object300A such as the selectable objects300of a computer game, for example, a domino tile of a domino based matching game, may be displayed by a client device such as the client device200to a user such as the user202using the client device200to play the matching game. As described herein before, the selectable object300A may have a plurality of separable ends such as the separable ends302, for example, two separable ends302A1and302A2marked with three dots and five dots patterns respectively. As seen in1104, a game engine such as the game engine220may alter the display of the computer game, specifically the display of the selectable object300A to associate a lock mark with one or more of its separable ends302, for example, the second separable end302A2thus prohibiting the user202from matching the locked separable end302A2. As seen in1106, the game engine220may periodically alter the display of the selectable object300A, for example, after every move made by the user202to associate a lock mark with another one of its separable ends302, for example, the first separable end302A1thus prohibiting the user202from matching the locked separable end302A1. 
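The periodic re-selection just described, for example moving the lock mark to the other end of the same tile after every move, might look like the following sketch. The move counter, the every_n_moves parameter and the locked field are illustrative assumptions and not part of the disclosure.

def rotate_locks(selectable_objects, move_count, every_n_moves=1):
    # After every N moves (a time period could be used instead), switch the
    # lock mark of each object that has a locked end to its other end(s).
    if move_count % every_n_moves != 0:
        return
    for obj in selectable_objects:
        currently_locked = [i for i, end in enumerate(obj)
                            if end.get("locked", False)]
        if currently_locked:
            for i, end in enumerate(obj):
                # Ends that were locked become unlocked, and vice versa.
                end["locked"] = i not in currently_locked

# Example following FIG. 11: the lock starts on the five-dot end and moves to
# the three-dot end after the user's next move.
tile_300a = [{"pattern": 3, "removed": False, "locked": False},
             {"pattern": 5, "removed": False, "locked": True}]
rotate_locks([tile_300a], move_count=1)
assert tile_300a[0]["locked"] and not tile_300a[1]["locked"]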
As seen in1108, when periodically altering the display of the selectable object300A, the game engine220may further alter the display of the selectable object300A to associate lock marks with both the separable ends302A1and302A2of the selectable object300A. According to some embodiments of the present invention, the game engine220may alter the display of the computer game to present one or more prize patterns each associated with one or more prizes which may be allocated (rewarded) to the user202playing the computer game. Alteration of objects of the computer game as used herein refers to alteration of the image of the computer game objects displayed by the client device200, for example, on a screen (display), projection, and/or the like. Each prize pattern may be indicative of a plurality of prize pieces which jointly construct the prize pattern such that when all the prize pieces are populated they form a complete prize pattern. The prize pieces may be distributed over one or more of the selectable objects which are presented to the user for matching their separable ends as described herein before. When the separable end of a certain selectable object which is associated with one of the prize pieces is successfully matched by the user202with a separable end of another selectable object (marked with a similar pattern), the respective prize piece is released and may be populated in its respective prize pattern. After a prize pattern is populated with all its prize pieces such that the prize pattern is complete, the user202may be awarded one or more prizes associated with the complete prize pattern. Reference is now made toFIG.12, which is a flowchart of an exemplary process of altering a display of a computer game to distribute prize pieces over selectable objects of a computer game and populating released prize pieces in a prize pattern, according to some embodiments of the present invention. An exemplary process1200may be executed by a game engine such as the game engine220to alter a display of a computer game, in particular a matching game, displayed by a client device such as the client device200to a user such as the user202using the client device200to play the game. In particular, the game engine220may execute the process1200to associate selectable objects with prize pieces which when released may populate a prize pattern associated with one or more game prizes the user202may gain (earn) such that when the prize pattern is complete the prize(s) may be allocated (rewarded) to the user202. The prize(s) which may be associated with each prize pattern may include, for example, one or more extra selectable objects300which may expand the options of the user202to identify potential matches. In another example, the prize(s) may include one or more extra wildcard selectable objects300which may further expand the match options for the user202. In another example, the prize(s) may include one or more keys for entering one or more higher and/or next levels of the computer game. In another example, the prize(s) may include one or more game tokens of the computer game, for example, a coin, a feature, a gadget, a skill, and/or the like which may be used by the user202to advance in the computer game or even trade the tokens for real-world assets. 
As described herein before, the matching game may challenge the user202to match between patterns marked on separable ends such as the separable ends302of a plurality of selectable objects (items) such as the selectable objects300of the computer game, for example, a domino tile, a card, and/or the like each having a plurality of separable ends302which may be detached and removed from their (parent) respective selectable objects300. As shown at1202, the process1200starts with the game engine220presenting a computer game to a user202via a user interface such as the user interface210of the client device200. In particular, the game engine220may display objects, items and/or assets of the game by instructing, operating, altering and/or otherwise controlling one or more screens of the client device200. The user202may play the game by interacting with the game engine220via one or more input interfaces supported by the user interface210of the client device200. For example, the user202may operate a pointing device, a touch screen, a keyboard and/or the like of the client device200to select, for example, click, point, choose, adjust, write, and/or the like one or more of the game's objects, items and/or assets, for example, the selectable objects300. As shown at1204, the game engine220may cause the client device200to alter the display (screen) of the computer game to display (present) one or more prize patterns each associated with one or more prizes. In particular, each prize pattern may be indicative of a plurality of prize pieces which jointly compose the prize pattern, specifically visually indicative of the plurality of prize pieces. For example, the prize pattern may outline contour lines of the plurality of prize pieces. In another example, the prize pattern may indicate a number of prize pieces which joined together may compose the complete prize pattern. Moreover, while the plurality of prize pieces may be identical such that each prize piece may be counted or populated in any position (location, place, etc.) of the prize pattern, the prize pattern may optionally form a puzzle where each of the plurality of prize pieces is a puzzle piece which matches a respective corresponding position (location) outlined accordingly in the puzzle prize pattern. Optionally, one or more of the plurality of prize pieces of a respective prize pattern, for example, a puzzle, may be marked with a respective one of a plurality of partial images of one or more of the prizes associated with the respective prize pattern. As such, when populated in their corresponding positions in the prize pattern, for example, in the puzzle, the plurality of partial images form a complete image, for example, an image of the prize(s) associated with the respective prize pattern. The game engine220may define, select and/or set the complexity of one or more prize patterns, for example, a number of the plurality of prize pieces, a contour line complexity, and/or the like according to one or more parameters, attributes, and/or rules. For example, the game engine220may set the complexity of the prize pattern according to one or more game attributes of the computer game, for example, a current level of the game, and/or the like. 
For example, in a low level of the game, the game engine220may create a simple prize pattern, for example, a puzzle pattern outlining two prize pieces, while in a higher, more advanced level of the game, the game engine220may create a more complex prize pattern, for example, a puzzle pattern outlining six or eight prize pieces. In another example, the game engine220may set the complexity of the prize pattern according to one or more user attributes of the user202, for example, a proficiency, a skill, an experience of the user202in the computer game, and/or the like. For example, the game engine220may create a simple prize pattern, for example, a puzzle pattern outlining two prize pieces for a novice user202while creating a significantly more complex prize pattern, for example, a puzzle pattern outlining six or eight prize pieces for an advanced and/or professional user202. As shown at1206, the game engine220may compute a distribution for distributing the plurality of prize pieces over at least some of the plurality of selectable objects300. The game engine220may apply one or more methods and/or techniques for computing the distribution of the prize pieces over the selectable objects300, i.e., for associating each of the plurality of prize pieces with a respective one of at least some of the selectable objects300presented and hence available to the user202for matching. For example, the game engine220may distribute the plurality of prize pieces according to a random distribution such that selectable objects300available to the user202for matching may be selected randomly and each associated with a respective prize piece. To compute the random distribution, the game engine220may use one or more random and/or pseudorandom components (e.g., random number generator, etc.), mechanisms (e.g., pseudorandom software agent and/or application, etc.), services (e.g., online service, etc.) and/or the like available at the client device200. In another example, the game engine220may compute the distribution according to one or more game attributes of the computer game, for example, a current level of the game, a number of selectable objects300available to the user202for matching, a layout of the selectable objects300in the current game level, and/or the like. For example, in a low level of the computer game, the game engine220may distribute the prize pieces over selectable objects300which may be easily accessible to the user202, i.e., accessed in a low number of game plays such that the user202may obtain the prize pieces in a significantly early stage when playing the current game level. In contrast, in a higher level of the game, the game engine220may distribute the prize pieces over selectable objects300which may be less accessible to the user202. The user202may therefore obtain the prize pieces in a significantly later and/or advanced stage of the current game level. In another example, assume there is a limited (low) number of selectable objects300presented and available to the user202for matching. In such case, the game engine220may assign a common (equal) weight to each selectable object300such that all selectable objects300have the same selection probability and distribute the prize pieces over the selectable objects300according to the common weights. In another example, assume there is a large number of selectable objects300presented and available to the user202for matching. 
In such case, the game engine220may assign different weights to the selectable objects300, for example, higher weights to easily accessible selectable objects and lower weights to less accessible selectable objects300. The game engine220may then compute distribution of the prize pieces over the selectable objects300according to the selected weights assignment. In another example, the game engine220may compute the distribution according to one or more user attributes of the user202, for example, skill, experience, gained assets (prizes, tokens, etc.), time of engagement with the computer game, and/or the like. For example, for a novice user202, the game engine220may distribute the prize pieces over selectable objects300which may be easily accessible to the user202such that the user202may obtain the prize pieces in a significantly early stage when playing the current game level. However, for a skilled user202, the game engine220may distribute the prize pieces over selectable objects300which may be less accessible to the user202such that the user202may need to be more proficient in order to obtain the prize pieces. As shown at1208, the game engine220may cause the client device to alter the display (image) of each of the at least some selectable objects300which are associated with the prize pieces according to the computed distribution, i.e., the selectable objects300over which the prize pieces were selected to be distributed. In particular, the game engine220may cause alteration of the display (image) of the selectable objects300associated with prize pieces to further display an image of the prize pieces such that each prize piece may be at least partially visible to the user202. As such, one or more visual attributes of each prize piece, for example, an outline (contour), shape, textures, and/or the like may be visible. Moreover, in case one or more prize pieces are marked with respective partial images, the game engine220may cause alteration of the display to display each prize piece such that at least part of the partial image marked on the respective prize piece is visible to the user202. Reference is now made toFIG.13, which is an exemplary screenshot of an exemplary computer game which is altered to display an exemplary prize pattern indicative of a plurality of prize pieces distributed over a plurality of selectable objects of the computer game, according to some embodiments of the present invention. An exemplary screenshot1300of a computer game controlled by a game engine such as the game engine220executed by a client device such as the client device200may present (display) a plurality of exemplary selectable objects300E such as the selectable objects300(e.g., tiles) to a user202for matching between separable ends such as the separable ends of two or more selectable objects300E marked with similar patterns. The game engine220may further cause the client device200to alter the display (screen) of the computer game, as described in step1204, to display (present) one or more prize patterns, for example, a prize pattern1302associated with one or more prizes. The presented prize pattern1302may outline contour lines of a plurality of prize pieces, for example, four prize pieces. As seen, the plurality of prize pieces may be identical such that each of the prize pieces may be positioned (placed), i.e., populated, in any of the positions (locations, slots) of the exemplary prize pattern. 
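The distribution computation of step1206, for example a weighted random draw favouring easily accessible objects, can be sketched as follows. The weights, the manual sampling loop and all names below are illustrative assumptions; the disclosure deliberately leaves the concrete distribution method open (uniform, weighted by accessibility, by game level, by user attributes, and so on).

import random

def distribute_prize_pieces(objects, weights, num_pieces, rng=random):
    # Associate each of num_pieces prize pieces with a distinct selectable
    # object, drawing objects with probability proportional to their weight
    # (e.g. equal weights when few objects are displayed, higher weights for
    # easily accessible objects otherwise). Returns the chosen objects.
    pool = list(zip(objects, weights))
    chosen = []
    for _ in range(min(num_pieces, len(pool))):
        total = sum(w for _, w in pool)
        r = rng.uniform(0, total)
        cumulative = 0.0
        for i, (obj, w) in enumerate(pool):
            cumulative += w
            if r <= cumulative:
                chosen.append(obj)
                pool.pop(i)
                break
        else:
            # Guard against floating-point round-off at the upper boundary.
            chosen.append(pool.pop()[0])
    return chosen

# Example: four prize pieces spread over six tiles, favouring accessible ones.
tiles = ["tile_%d" % i for i in range(6)]
weights = [3, 3, 2, 2, 1, 1]          # hypothetical accessibility weights
holders = distribute_prize_pieces(tiles, weights, num_pieces=4)
assert len(set(holders)) == 4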
However, as described herein before, in some embodiments, each of the prize pieces may have a unique shape, contour, and/or the like such that each prize piece may be populated only in a designated specifically matching position (location) in the prize pattern. The game engine220may cause the client device200to alter the display (screen) of the computer game, as described in step1204of the process1200, to display (present) a plurality of prize pieces1304each associated with a respective one of at least some of the selectable objects300E selected according to a distribution computed as described in step1206of the process1200. For example, a first prize piece1304(1) may be associated with the selectable object300E1, a second prize piece1304(2) may be associated with the selectable object300E2, a third prize piece1304(3) may be associated with the selectable object300E3, and a fourth prize piece1304(4) may be associated with the selectable object300E4. As seen in screenshot1300, each of the four prize pieces1304(1),1304(2),1304(3), and1304(4) is at least partially visible to the user202such that one or more visual attributes of the respective prize piece1304, for example, contour, shape, texture, partial image, and/or the like are at least partly visible to the user202. Reference is made once again toFIG.12. As shown at1210, the game engine220may cause the client device200to alter the (image of) one or more of the prize pattern(s), responsive to each correct match between one or more separable ends302of one or more of the at least some selectable objects300each associated with a respective prize piece and one or more separable ends of one or more other selectable objects300. Specifically, as described herein before, the match is made and evaluated accordingly (correct or incorrect) between the patterns marked on the separable ends302of different selectable objects300. In particular, responsive to a match between the separable end(s)302of one of the at least some selectable objects300, the prize piece associated with this selectable object300may be released. For each match resulting in release of a respective one of the prize pieces, the game engine220may cause the client device200to alter the (image of the) prize pattern(s) relating to the released prize piece by populating the respective prize piece associated with the matched selectable object300, i.e., the released prize piece, in the prize pattern. As shown at1212, responsive to populating all of the prize pieces in a respective prize pattern, the prize associated with the respective prize pattern may be allocated (awarded) to the user202. This means that after the user202matches all the selectable objects300associated with prize pieces of a certain prize pattern and thus releases all prize pieces, the prize pattern may be fully populated and the prize may be allocated to the user202as a reward. The game engine220may optionally cause the client device200to alter the computer game display to reflect the prize(s) allocated to the user202for releasing all prize pieces and populating the entire prize pattern. Optionally, one or more of the prize patterns may conceal at least part of one or more of the selectable objects300. In such case, responsive to populating all of the prize pieces in a respective prize pattern, the respective prize pattern may be removed and one or more at least partially concealed selectable objects300may be revealed (become visible) to the user202. 
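Steps1210and1212, releasing a prize piece when its selectable object is correctly matched and awarding the prize once the pattern is fully populated, can be sketched as follows. The prize_state dictionary, the piece-to-object mapping and the award callback are hypothetical names introduced only for this illustration.

def release_piece(prize_state, matched_object_id, award):
    # prize_state: {"pieces": {object_id: populated_flag}, "prize": ...}
    # When a correct match involves an object that carries a prize piece,
    # populate that piece in the prize pattern (step 1210); once every piece
    # is populated, allocate the associated prize to the user (step 1212).
    pieces = prize_state["pieces"]
    if matched_object_id in pieces and not pieces[matched_object_id]:
        pieces[matched_object_id] = True
        if all(pieces.values()):
            award(prize_state["prize"])

# Example: a four-piece prize pattern whose pieces sit on four tiles.
awarded = []
pattern = {"pieces": {"tile_0": False, "tile_1": False,
                      "tile_2": False, "tile_3": False},
           "prize": "extra wildcard tile"}
for tile in ("tile_0", "tile_1", "tile_2", "tile_3"):
    release_piece(pattern, tile, awarded.append)
assert awarded == ["extra wildcard tile"]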
Reference is now made toFIG.14A,FIG.14B, andFIG.14C, which are exemplary screenshots of an exemplary computer game which are altered to populate a released prize piece in a prize pattern, according to some embodiments of the present invention. An exemplary screenshot1400(FIG.14A) of a computer game controlled by a game engine such as the game engine220executed by a client device such as the client device200may present (display) a plurality of exemplary selectable objects300E such as the selectable objects300(e.g., tiles) to a user202for matching between separable ends such as the separable ends of two or more selectable objects300E marked with similar patterns. The game engine220may further cause the client device200to alter the display (screen) of the computer game to display (present) one or more prize patterns, for example, a prize pattern1302A associated with one or more prizes which may outline contour lines of a plurality of prize pieces, for example, four prize pieces. As seen, a first prize piece1304(1) may be associated with a selectable object300E1having a first separable end302E11marked with a certain pattern, for example, one dot. A pile of selectable objects1410comprising a plurality of selectable objects300E of the user202may include a selectable object300E2having a first separable end302E21also marked with a one dot pattern. As seen in exemplary screenshot1402(FIG.14B), the user202may indicate a match between the first separable end302E11of the selectable object300E1and the first separable end302E21of the selectable object300E2included in the pile1410. In response, since these two separable ends302are marked with the same pattern, namely one dot, the game engine220may determine that the match is correct. In response to the correct match, the first prize piece1304(1) may be released and populated in the prize pattern1302A as seen in exemplary screenshot1404(FIG.14C). The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. It is expected that during the life of a patent maturing from this application many relevant systems, methods and computer programs will be developed and the scope of the terms matching game and game engine architecture is intended to include all such new technologies a priori. As used herein the term “about” refers to ±10%. The terms “comprises”, “comprising”, “includes”, “including”, “having” and their conjugates mean “including but not limited to”. This term encompasses the terms “consisting of” and “consisting essentially of”. The phrase “consisting essentially of” means that the composition or method may include additional ingredients and/or steps, but only if the additional ingredients and/or steps do not materially alter the basic and novel characteristics of the claimed composition or method. As used herein, the singular form “a”, “an” and “the” include plural references unless the context clearly dictates otherwise. 
For example, the term “a compound” or “at least one compound” may include a plurality of compounds, including mixtures thereof. The word “exemplary” is used herein to mean “serving as an example, an instance or an illustration”. Any embodiment described as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments and/or to exclude the incorporation of features from other embodiments. The word “optionally” is used herein to mean “is provided in some embodiments and not provided in other embodiments”. Any particular embodiment of the invention may include a plurality of “optional” features unless such features conflict. Throughout this application, various embodiments of this invention may be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range. Whenever a numerical range is indicated herein, it is meant to include any cited numeral (fractional or integral) within the indicated range. The phrases “ranging/ranges between” a first indicate number and a second indicate number and “ranging/ranges from” a first indicate number “to” a second indicate number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numerals there between. It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination or as suitable in any other described embodiment of the invention. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements. Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims. It is the intent of the applicant(s) that all publications, patents and patent applications referred to in this specification are to be incorporated in their entirety by reference into the specification, as if each individual publication, patent or patent application was specifically and individually noted when referenced that it is to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. 
To the extent that section headings are used, they should not be construed as necessarily limiting. In addition, any priority document(s) of this application is/are hereby incorporated herein by reference in its/their entirety.
81,349
11857883
DETAILED DESCRIPTION Although the following detailed description contains many specific details for the purposes of illustration, anyone of ordinary skill in the art will appreciate that many variations and alterations to the following details are within the scope of the present disclosure. Accordingly, the aspects of the present disclosure described below are set forth without any loss of generality to, and without imposing limitations upon, the claims that follow this description. Generally speaking, the various embodiments of the present disclosure describe systems and methods providing real-time assistance during game play of a player playing a gaming application by connecting the player to an expert over a communication session. For example, when a player gets stuck on a part of a gaming application, the player can request help, such as via a gaming console or mobile application (e.g., executed on a mobile device) through a user interface. The player is then connected to a more experienced player (e.g., the expert) over a communication session, wherein the expert provides gaming assistance (e.g., the expert can help the player get unstuck). Experts can mark themselves “available” at any time, for any game they have played. An expert is generally a player who has registered and/or qualified as an expert. When a player requests help for that game, the request is sent to the available experts who are most likely to be able to help. The expert is matched to the player and can be connected via a live help session, or via a prior recording of a help session. The expert is selected based on their ability that relates to the context of the game that the player is having difficulty with. In one implementation, the first expert to accept the help request starts a help session with the player, wherein the matching of a live expert to the player is like a ride hailing Uber® model that is configured for providing live help sessions. In order to connect the player to an expert who can help, critical data about the player's current session is captured, such as quest, level, loadout, location, skills, etc. The player is then paired with an expert who ideally has already beaten that part of the game (e.g., which the player is currently playing and needs assistance), and ideally who did it with a similar configuration. During the help session, the expert can provide guidance via text, voice, video, and/or embedded video from a web, mobile, or console interface. In another implementation, the player is connected to an expert via a recorded help session. The recorded help session may provide the best assistance for the given query and/or game context, and as such instead of connecting the player to an expert via a live help session, the player is connected to an expert via a recorded help session. In one embodiment, the recorded help sessions for a given query and/or game context are ranked based on user/player feedback, and are selected based on the rankings. With the above general understanding of the various embodiments, example details of the embodiments will now be described with reference to the various drawings. Throughout the specification, the reference to “gaming application” is meant to represent any type of interactive application that is directed through execution of input commands. For illustration purposes only, an interactive application includes applications for gaming, word processing, video processing, video game processing, etc. 
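The request-routing flow outlined above, in which a help request is sent to the available experts most likely to be able to help and the first expert to accept is connected to the player, might look roughly like the following Python sketch. The data shapes (dictionaries with availability and progress fields), the accepts callback and the ranking by closeness of progress are all assumptions made for illustration and are not the platform's actual API.

def dispatch_help_request(request, experts):
    # Notify only experts who are currently available and have played the
    # game the request concerns; order them by how closely their recorded
    # progress matches the requesting player's current session.
    game = request["game"]
    candidates = [e for e in experts
                  if e["available"] and game in e["games_played"]]
    candidates.sort(key=lambda e: abs(e["progress"][game] - request["progress"]))
    return candidates

def first_to_accept(candidates, accepts):
    # The first notified expert who accepts the request is connected to the
    # player over a live help session.
    for expert in candidates:
        if accepts(expert):
            return expert
    return None   # fall back, e.g., to a recorded help session

# Example: two experts; the closer one declines, so the second is connected.
experts = [
    {"name": "alice", "available": True, "games_played": {"GameX"},
     "progress": {"GameX": 12}},
    {"name": "bob", "available": True, "games_played": {"GameX"},
     "progress": {"GameX": 15}},
]
request = {"game": "GameX", "progress": 11}
ranked = dispatch_help_request(request, experts)
chosen = first_to_accept(ranked, lambda e: e["name"] == "bob")
assert chosen["name"] == "bob"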
Further, the terms video game and gaming application are interchangeable. FIG.1Aillustrates a system10used for providing real-time assistance during game play of a player playing a gaming application by connecting the player to an expert over a communication session, in accordance with one embodiment of the present disclosure. For example, the assistance may be provided through a user interface configured to support the game play. The gaming application can be executing on a local computing device or over a cloud game network, in accordance with one embodiment of the present disclosure. As shown inFIG.1A, the gaming application may be executing locally at a client device100of the user5, or may be executing at a back-end game executing engine211operating at a back-end game server205of a cloud game network or game cloud system. The game executing engine211may be operating within one of many game processors201of game server205. In either case, the cloud game network is configured to provide real-time assistance to player by connecting the player to an expert over a communication session in a help session. The help session is conducted during the game play of the player playing the gaming application. The user interface110at client device100may support the help session, such that the player is able to request help through the user interface110, and interact with (e.g., view, hear, present, etc.) the help in the help session provided in the user interface110. Further, the gaming application may be executing in a single-player mode, or multi-player mode, wherein embodiments of the present invention provide for multi-player enhancements (e.g., assistance, communication, etc.) to both modes of operation. In some embodiments, the cloud game network may include a plurality of virtual machines (VMs) running on a hypervisor of a host machine, with one or more virtual machines configured to execute a game processor module201utilizing the hardware resources available to the hypervisor of the host in support of single player or multi-player video games. In other embodiments, the cloud game network is configured to support a plurality of local computing devices supporting a plurality of users, wherein each local computing device may be executing an instance of a video game, such as in a single-player or multi-player video game. For example, in a multi-player mode, while the video game is executing locally, the cloud game network concurrently receives information (e.g., game state data) from each local computing device and distributes that information accordingly throughout one or more of the local computing devices so that each user is able to interact with other users (e.g., through corresponding characters in the video game) in the gaming environment of the multi-player video game. In that manner, the cloud game network coordinates and combines the game plays for each of the users within the multi-player gaming environment. As shown, system10includes a game server205executing the game processor module201that provides access to a plurality of interactive gaming applications. Game server205may be any type of server computing device available in the cloud, and may be configured as one or more virtual machines executing on one or more hosts, as previously described. For example, game server205may manage a virtual machine supporting the game processor201. Game server205is also configured to provide additional services and/or content to user5. 
For example, the game server is configurable to connect a player playing a gaming application to an expert over a communication session to provide real-time assistance, wherein the game server is configured to receive a request for assistance, match the player with an appropriate expert, and establish the help session that connects the player to the expert in real-time during the game play of the player, as will be further described below. Client device100is configured for requesting access to a gaming application over a network150, such as the internet, and for rendering instances of video games or gaming applications executed by the game server205and delivered to the display device12and/or head mounted display (HMD)102associated with a user5. For example, user5may be interacting through client device100with an instance of a gaming application executing on game processor201. Client device100may also include a game executing engine111configured for local execution of the gaming application, as previously described. The client device100may receive input from various types of input devices, such as game controllers6, tablet computers11, keyboards, and gestures captured by video cameras, mice, touch pads, etc. Client device100can be any type of computing device having at least a memory and a processor module that is capable of connecting to the game server205over network150. Some examples of client device100include a personal computer (PC), a game console, a home theater device, a general purpose computer, a mobile computing device, a tablet, a phone, or any other types of computing devices that can interact with the game server205to execute an instance of a video game. In embodiments, the HMD102can be configured to perform the functions of the client device100. Client device100is configured for receiving rendered images, and for displaying the rendered images on display12and/or HMD102. For example, over a network150the rendered images may be delivered by an instance of a gaming application executing on game executing engine211of game server205in association with user5. In another example, through local game processing, the rendered images may be delivered by the local game executing engine111. In either case, client device100is configured to interact with the executing engine211or111in association with the game play of user5, such as through input commands that are used to drive game play. Further, client device100is configured to interact with the game server205to capture and store one or more game contexts of the game play of user5when playing a gaming application, wherein each game context includes information (e.g., game state, user information, etc.) related to the game play. More particularly, game processor201of game server205is configured to generate and/or receive game context of the game play of user5when playing the gaming application. In another implementation, game contexts may be generated by the local game execution engine111on client device100, outputted and delivered over network150to game processor201. In addition, game contexts may be generated by game executing engine211within the game processor201at the cloud network, such as through the game context generator122. Game contexts may be locally stored on client device100and/or stored at the context profiles database142of the game server205. Each game context includes metadata and/or information related to the game play. 
Game contexts may be captured at various points in the progression of playing the gaming application, such as in the middle of a level. For illustration, game contexts may help determine where the player (e.g., character of the player) has been within the gaming application, where the player is in the gaming application, what the player has done, what assets and skills the player or the character has accumulated, what quests or tasks are presented to the player, and where the player will be going within the gaming application. Further, the metadata and information in each game context may provide and/or be analyzed to provide support related to the game play of the user, such as when matching a player requesting help during his or her game play to an expert, wherein the game play has a particular context related to the request for help, and the selected expert is best suited to providing help for that context. Specifically, based on the game contexts, client device100is configured to interact with game server205to display a user interface that is able to connect a player playing a gaming application to an expert through a communication session to provide real-time assistance during game play of the player. More particularly, game context also includes game state data that defines the state of the game at that point. For example, game state data may include game characters, game objects, game object attributes, game attributes, game object state, graphic overlays, location of a character within a gaming world of the game play of the user5, the scene or gaming environment of the game play, the level of the gaming application, the assets of the character (e.g., weapons, tools, bombs, etc.), the type or race of the character (e.g., wizard, soldier, etc.), the current quest and/or task presented to the player, loadout, skills set of the character, etc. In that manner, game state data allows for the generation of the gaming environment that existed at the corresponding point in the video game. Game state data may also include the state of every device used for rendering the game play, such as states of CPU, GPU, memory, register values, program counter value, programmable DMA state, buffered data for the DMA, audio chip state, CD-ROM state, etc. The game state data is stored in game state database145. Also, game context may include user and/or player information related to the player. Generally, user/player saved data includes information that personalizes the video game for the corresponding player. This includes information associated with the player's character, so that the video game is rendered with a character that may be unique to that player (e.g., shape, race, look, clothing, weaponry, etc.). In that manner, the user/player saved data enables generation of a character for the game play of a corresponding player, wherein the character has a state that corresponds to the point in the gaming application associated with the game context. For example, user/player saved data may include the skill or ability of the player, the overall readiness that the player seeks help, recency of playing the gaming application by the player, game difficulty selected by the user5when playing the game, game level, character attributes, character location, number of lives left, the total possible number of lives available, armor, trophy, time counter values, and other asset information, etc. User/player saved data may also include user profile data that identifies player5, for example. 
User/player saved data is stored in database141. In one implementation, the game context is related to snapshot information that provides information enabling execution of an instance of the video game beginning from a point in the video game associated with a corresponding snapshot. Access to a particular snapshot that is captured during game play of a player, and that is stored allows another instance of the gaming application to be executed using information in the snapshot, such as game state and possibly user information relating to the previously described game context. For example, another user is able to jump into a parallel version of the game play associated with the snapshot. A full discussion on the creation and use of snapshots is provided within U.S. application Ser. No. 15/411,421, entitled “Method And System For Saving A Snapshot of Game Play And Used To Begin Later Execution Of The Game Play By Any User As Executed On A Game Cloud System,” which is incorporated by reference in its entirety. In one embodiment, the snapshot includes a snapshot image of the scene that is rendered at that point. The snapshot image is stored in snapshot image database146. The snapshot image may be presented in the form of a thumbnail with respect to a timeline, wherein the snapshots provide various views into the game play of a user at corresponding points in the progression by the user through a video game as indicated by the timeline. The timeline can be used to replay a certain portion (e.g., last 2 minutes) of the player's game play to provide situational awareness to the expert when providing assistance. The replay portion may be sped up. After the replay portion is shown, live game play is then shown to the expert. In addition, a player profile that includes information related to the corresponding player may be generated and stored in profile database143. Profile information may include name, age, residence, account information, user related information from game context (e.g., user saved data stored in database141), etc. The player/expert gaming profile generator121is configured to create and manage the player profile. Game processor201includes help session controller120to facilitate the establishing and managing of a help session that provides real-time assistance during game play of a player playing a gaming application, such as by connecting the player to an expert over a communication session. The help session controller120may control one or more components to establish and manage the help session, including for example the expert matching engine123, pre-help session matching engine124, share screen controller126, share play controller, and others. For example, when a player requests help, such as through a query (e.g., “How do I beat Boss-A?” or “I need help-NOW!”), the help session controller120is configured to connect that player with an expert over a communication session supporting the help session so that the expert can provide assistance. In particular, game processor201includes expert matching engine123that in cooperation with the help session controller120is configured for matching the player to the expert based on game contexts for the player and the selected expert. That is, in order to connect the player to an expert who can help, critical data about the player's current session is captured, such as quest, level, loadout, location, skills, etc., which can be defined as game criteria, which includes game contexts previously described. 
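A hedged sketch of how such game criteria might be distilled from a captured game context follows; the GameCriteria record and the extract_game_criteria helper are illustrative names that build on the GameContext sketch above, not part of the described system.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GameCriteria:
    """Comparable summary of a player's current session (illustrative fields)."""
    game_title: str
    quest: str
    level: str
    loadout: List[str]
    location: tuple
    skill_rating: float

def extract_game_criteria(ctx: "GameContext") -> GameCriteria:
    # Pull out only the fields relevant for matching the player to an expert.
    return GameCriteria(
        game_title=ctx.game_title,
        quest=ctx.state.current_quest,
        level=ctx.state.level,
        loadout=list(ctx.state.assets),
        location=ctx.state.character_location,
        skill_rating=ctx.player.skill_rating,
    )
```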
Specifically, the matching process focuses on game criteria and/or thresholds when selecting the expert. Game criteria can be game context information, including game state and user/player saved data previously described, particular standards set by the player (e.g., only wants the best experts—5 star expert), expert availability, etc. For example, the game criteria is used to pair the player with an expert who has similar experiences with the gaming application based on the game criteria (e.g., weighting particular pieces of information). Game criteria may include threshold information to filter the pool of experts to a manageable set. For example, the threshold may be a minimum quality standard (e.g., expert rating, valuation, etc.), or recency of playing the gaming application so that the expert can provide the freshest assistance that is not encumbered with lack of immediate recall. Ideally, based on the game criteria, the expert has already beaten that part of the game (e.g., which the player is currently playing and needs assistance), and ideally who did it with a similar configuration. An expert is generally a player who has registered and/or qualified as an expert. In one implementation, any player can register as an expert after at least playing a portion of the corresponding gaming application. In another implementation, a player can only register as an expert after reaching a qualification standard. For example, the qualification may be given to a player that is an expert of other games, or when a player has played the subject gaming application with high skill, or when a player achieves a certain task or quest identified as being a qualification standard (e.g., qualification boss, intermediate boss, end boss, etc.). Other qualification methods are supported. The player/expert gaming profile generator121is configured to create and manage the expert profile. Expert registration and profile information may be stored in database147. In one embodiment, when a player requests help for that game, the request is sent to the available experts who are most likely to be able to help. That is, the pool of experts are filtered to determine a set of experts that have similar game contexts as the player. In one implementation, the first expert from the filtered set to accept the help request is selected as the expert providing assistance. In that case, a help session is established between the player and the selected expert. In another implementation, the selected expert is the one who has the highest match based on the game criteria including game context, thresholds, ratings, etc. During the help session, the expert can provide guidance via text or voice, from a web, mobile, or console interface. In one embodiment, to better help the player, the expert can request to spectate the player's screen—such as through a Share Screen (or ShareScreen) functionality. The expert can then watch a stream of the player's game (e.g., video), providing guidance during the game play. The Share Screen functionality is managed through the share screen controller126in cooperation with the help session controller120. If the player is unable to complete a given objective (e.g., task, quest, etc.) himself with or without expert assistance, the expert can ask the player to share his controller, such as in a Share Play or SharePlay configuration that is configured to transfer control of the gaming application to the expert, in one implementation. 
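The Share Screen hand-off mentioned above (the expert requests to spectate, the player authorizes, and the player's game video is then streamed to the expert) could be coordinated roughly as in the following sketch; the coordinator class and injected callables are hypothetical and do not represent the actual interface of share screen controller126.

```python
class ShareScreenCoordinator:
    """Illustrative request -> notify -> authorize -> stream flow for spectating."""

    def __init__(self, notify, start_stream):
        self.notify = notify                # callable(recipient_id, message)
        self.start_stream = start_stream    # callable(source_id, viewer_id)
        self.pending = {}                   # session_id -> (expert_id, player_id)

    def request_spectate(self, session_id, expert_id, player_id):
        # The expert asks to watch the player's screen; the player must approve first.
        self.pending[session_id] = (expert_id, player_id)
        self.notify(player_id, f"{expert_id} requests to view your game play")

    def authorize(self, session_id, approved):
        request = self.pending.pop(session_id, None)
        if request is None:
            return
        expert_id, player_id = request
        if approved:
            # Stream the player's game video to the expert's device.
            self.start_stream(player_id, expert_id)
        else:
            self.notify(expert_id, "Share Screen request was declined")

# Example with print-based stand-ins for the notification and streaming services.
coordinator = ShareScreenCoordinator(
    notify=lambda who, msg: print(f"notify {who}: {msg}"),
    start_stream=lambda source, viewer: print(f"streaming {source} -> {viewer}"))
coordinator.request_spectate("help-1", "expert190", "player5")
coordinator.authorize("help-1", approved=True)
```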
In another case, the player may actively request the expert to take over control of the game play. In either case, the expert can then control the player's game (e.g., the game play) remotely via SharePlay or any similar functionality. As such, the expert is able to complete the objective for the player. At any point, control can be passed back to the player. For instance, the player may have a master position (e.g., kill switch) that when activated by the player switches control back to the player. As an example, the player may decide that the expert is going beyond what is agreed upon (e.g., expert playing beyond the objective), or may decide that he or she would like another go at the objective. Also, at any point (during or afterwards) the expert can always pass control back to the player. The SharePlay functionality is managed through the share play controller127in cooperation with the help session controller120. Game processor201includes a ratings manager151that is configured to store ratings and/or rankings of experts. For example, at the end of a help session, the player can rate the quality of the expert's help along a variety of metrics (helpful, friendly, knowledgeable, etc.). These ratings can be fed back into the system, for purposes of connecting players to the highest-quality help available, as per the ratings. For example, the ratings may be specific to a particular gaming application. In one implementation, experts that are rated with the highest quality (e.g., “5-star help”) is only made available to players who have a subscription to a gaming service, such as SONY PlayStation Plus membership that provides access to digital games (free or by fee), cloud storage, discounts, online multi-player gaming, etc. Subscription access to qualified experts and management of membership and benefits are managed by the subscription help session manager152. As another feature of the help session manager152, highly-rated experts may be eligible to have their own “professional help” video channels through the help service. In that manner, those highly-rated experts can monetize their help (e.g., through subscription or fee services) during a help session. Further, each help session may be recorded, and stored in the help session database149. For example, the help session controller is configured for recording and storing a corresponding help session. As such, instead of connecting a player to an expert for a live help session, the player may be connected with a recorded help session that is directed to the specific query presented by the player. In some cases, the recorded help session has a higher rating over any available live help session. For example, when players in the future seek help for a previously encountered and similar situation, recorded help sessions providing assistance for those situations can be returned. This will make help available even when live experts aren't available. Also, recorded help sessions that provide the best assistance may be preferred over live help sessions, as described below. FIG.7illustrates a graph700showing the availability of live help sessions and the availability of recorded help sessions throughout the life of the gaming application, in accordance with one embodiment of the present disclosure. In particular, the y-axis shows unit volume, such as the number of experts or recorded session available at any point in time during the life of a gaming application. 
For instance, the x-axis shows a time period, such as from a release of a gaming application out to beyond 14 months from the release date. Line720shows the availability of live help sessions as provided by experts. Line710shows the availability of recorded help sessions, wherein recorded help sessions provide assistance for the particular gaming application that is of interest to the player's query or request for help. As shown, during the early life (0-7 months) of a gaming application, live help sessions are readily available, as the gaming application is relatively new and interest in the gaming application is high amongst gamers. However, after 7 months, interest in the gaming application steadily wanes, as players and/or experts move on to play other gaming applications. On the other hand, recorded help sessions as shown by line710may plateau around the 7 month period. That is, lines710and720may closely track each other in the first 6 months showing that the availability of live help and recorded help is approximately equal. As live help diminishes, recorded help may be provided to players making requests for assistance. Lines710and720inFIG.7show exemplary patterns of availability. For instance, the availability of recorded help sessions shown in line710may purposefully follow the availability of live help sessions shown in line720through the early life (e.g., 0-6 months) of a gaming application. In particular, the number of recorded help sessions shown in line710may be controlled by the help session database filter156. Recorded help sessions may be periodically purged from database149based on various criteria. For example, as the number of help sessions grow, better quality help sessions may be recorded based on ratings. As such, the lesser quality help sessions may be purged. In addition, after a period of time, the filter156may retain only a specific number of help sessions. For example, for a particular query, filter156may decide to store only one or two recorded help sessions directed to that query. Other help sessions directed to that query may be purged. In that manner, the database149of recorded help sessions may be managed to store a limited amount of help sessions that are of high quality (e.g., in terms of providing assistance). This may result in a decline in the number of recorded help sessions over time, as shown beyond ten months inFIG.7. In addition, because the number of recorded help sessions is managed, searching of the database149is more efficient. For example, searches conducted by the recorded help session matching engine124match a query or request for help to a recorded help session before a live help session is matched or presented. In some cases, a recorded help session provides the highest quality assistance for a particular query, and a live help session is unnecessary. In that case, connection to the recorded help is more efficient. These recorded help sessions may be tagged using the help session tagger154with information related to a specific query. In that manner, a recorded help session may be tagged so that a match between a corresponding query and the help session can be determined. For example, when a query is presented by a player, a recorded help session that may provide direct assistance for that query may be found by searching for an appropriate tag (e.g., identifying a related query) in the help session database149. Game processor201includes an expert incentive engine153that is configured to attract players to register as experts. 
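A retention policy of the kind attributed to the help session database filter156and help session tagger154above (retain only the highest-rated recordings for each query tag and purge the rest) might be sketched as follows; the function name and the default of keeping two recordings per query are illustrative assumptions.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

# Each recorded session: (session_id, query_tag, rating)
RecordedSession = Tuple[str, str, float]

def purge_recorded_sessions(sessions: List[RecordedSession],
                            keep_per_query: int = 2) -> List[RecordedSession]:
    """Keep only the top-rated recordings for each query tag; purge the rest."""
    by_query: Dict[str, List[RecordedSession]] = defaultdict(list)
    for session in sessions:
        by_query[session[1]].append(session)

    retained: List[RecordedSession] = []
    for query_tag, group in by_query.items():
        group.sort(key=lambda s: s[2], reverse=True)   # best-rated first
        retained.extend(group[:keep_per_query])
    return retained

# Example: two recordings survive for "beat boss-a"; the lowest-rated one is purged.
sessions = [("s1", "beat boss-a", 4.8), ("s2", "beat boss-a", 3.1),
            ("s3", "beat boss-a", 4.9), ("s4", "find hidden key", 4.0)]
print(purge_recorded_sessions(sessions))
```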
Experts may need some incentive to participate in help sessions. Incentives may be different than qualifications previously described. A qualification standard may be set so that only qualified players may register as experts. However, once a player qualifies, there is no guarantee that the player will register. An incentive may provide encouragement to a player to register as an expert. For instance, rewards may be given to registered experts. These rewards may come in various forms. In one case, the reward may be the release of a particular part of the gaming application made only available to experts. The release may be an object, or region of the game, or specific task, or specific quest that are made available only to registered experts. For instance, the release may come in the form of downloadable content (DLC). In addition, the reward may come in the form of a trophy or expert points, both of which may be used as a comparison to other experts. For example, a competition may exist between two friends to see who has more expert points, or more trophies. Game processor201includes a spoiler alert controller150. During a live help session, there is a danger that the expert may reveal too much when providing assistance. That is, the expert may reveal information that spoils a game for the player. Typically, the player is unaware of the pertinent information qualifying as spoiling information. Examples of spoiling may include a name of a boss that occurs at the end of the level, but the player is only midway through the level; an object that is the ultimate goal of the level or the entire gaming application; the name of a place in the gaming environment; name of a quest; name of an object, or character that has not been encountered yet, etc. The spoiler alert controller150may manage a toggle feature that when “ON” notifies the expert that the player is sensitive to spoiling information, and when “OFF” notifies the expert that the player is less sensitive and probably does not mind if spoiling information is released. Spoiler alert controller150may be configured to automatically detect spoiling information, such as through key word identification. The key words may be stored in database148. Upon identification of the key word, that information may be masked before presentation to the player (e.g., masking text, or muting the pertinent audio, etc.). A slight time delay may be introduced to allow for masking. In addition, the spoiler alert controller150may notify the player that the expert is about to reveal spoiling information, such as in the form of a spoiler alert. The player may then give additional instructions, such as providing authorization to reveal the information, or to deny the revealing of the information. Game processor201includes a help session highlight generator155that is configured to generate a highlight reel of a recorded help session. Highlights may be identified through active motion of one or more objects (e.g., character) in the game play as presented in the recorded help session. Periods of inactivity may indicate that no significant assistance is being provided. Both the highlight reel and the full version of the recorded help session may be stored in database149. When that recorded help session is selected in response to a query made by a player in the future, the highlight reel of the recorded help session may be first presented to the requesting player. If requested, the full version may also be presented. 
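A minimal version of the activity-based selection performed by the help session highlight generator155might look like the sketch below, assuming a per-second activity score has already been derived from the recorded session; the threshold and minimum segment length are illustrative tuning values.

```python
from typing import List, Tuple

def find_highlights(activity: List[float],
                    threshold: float = 0.5,
                    min_len: int = 3) -> List[Tuple[int, int]]:
    """Return (start, end) second ranges where activity stays above a threshold."""
    segments, start = [], None
    for t, score in enumerate(activity):
        if score >= threshold and start is None:
            start = t                                  # entering an active stretch
        elif score < threshold and start is not None:
            if t - start >= min_len:
                segments.append((start, t))
            start = None                               # back to an idle stretch
    if start is not None and len(activity) - start >= min_len:
        segments.append((start, len(activity)))
    return segments

# Example: one highlight spanning seconds 2-7; the quiet tail is dropped.
print(find_highlights([0.1, 0.2, 0.9, 0.8, 0.7, 0.9, 0.6, 0.1, 0.0, 0.1]))
```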
In one implementation, the full version is downloaded while the highlight reel is being played in anticipation of being requested. In that manner, the full version may immediately be played upon request. In another implementation, the full version is first presented with the option of presenting the highlight reel. For example, the full version may be preceded with a notification that the most pertinent section (e.g., where the assistance is given) begins at 2 minutes into the 7 minute help session. The requesting player may be presented with an option to play the highlight reel at that time. In one embodiment, the help session may be delivered to a device11(e.g., tablet) for display and interaction, wherein device11may be separate from client device100that is configured to execute and/or support execution of the gaming application for user5interaction. For instance, a first communication channel may be established between the game server205and client device100, and a separate, second communication channel may be established between game server205and device11to deliver the help session. FIG.1Billustrates a system106B providing real-time assistance during game play of a player playing a gaming application through a help session, such as connecting the player to the expert over a communication session or by providing a recorded help session, wherein the gaming application is executing locally to the corresponding player, and wherein back-end server support (e.g., accessible through game server205) may implement the establishing and managing of a help session. In one embodiment, system106B works in conjunction with system10ofFIG.1Aand system200ofFIG.2to provide real-time assistance to a player through a live or recorded help session through the help session controller120at the game-cloud system210, as previously described inFIG.1A. Referring now to the drawings, like referenced numerals designate identical or corresponding parts. As shown inFIG.1B, a plurality of players115(e.g., player5A, player5B . . . player5N) is playing a plurality of gaming applications, wherein each of the gaming applications is executed locally on a corresponding client device100(e.g., game console) of a corresponding user. At least one of the plurality of players115is an expert190. The system106B supports game play by the plurality of players115at one or more moments in time, such as over a period of time. In addition, each of the plurality of players115has access to a device11, previously introduced, configured to receive information providing real-time assistance during game play of a player playing a gaming application through a help session, as previously described. Each of the client devices100may be configured similarly in that local execution of a corresponding gaming application is performed. For example, player5A may be playing a first gaming application on a corresponding client device100, wherein an instance of the first gaming application is executed by a corresponding game title execution engine111. Game logic126A (e.g., executable code) implementing the first gaming application is stored on the corresponding client device100, and is used to execute the first gaming application. For purposes of illustration, game logic may be delivered to the corresponding client device100through a portable medium (e.g., flash drive, compact disk, etc.) or through a network (e.g., downloaded through the internet150from a gaming provider). 
In addition, player5B is playing a second gaming application on a corresponding client device100, wherein an instance of the second gaming application is executed by a corresponding game title execution engine111. The second gaming application may be identical to the first gaming application executing for player5A or a different gaming application. Game logic126B (e.g., executable code) implementing the second gaming application is stored on the corresponding client device100as previously described, and is used to execute the second gaming application. Further, player5N is playing an Nth gaming application on a corresponding client device100, wherein an instance of the Nth gaming application is executed by a corresponding game title execution engine111. The Nth gaming application may be identical to the first or second gaming application, or may be a completely different gaming application. Game logic126N (e.g., executable code) implementing the Nth gaming application is stored on the corresponding client device100as previously described, and is used to execute the Nth gaming application. In addition, expert190at some point may have been playing at least one gaming application in system106B and has registered as an expert. For example, the expert may be playing a corresponding gaming application with cooperation of the client device100having game logic126X and a game title execution engine111, as previously described. In that manner, a player currently playing a gaming application in system106B may submit a request for help through the help session controller120, as previously described, and be connected with the expert190that, when selected, may provide assistance for the game play of the requesting player. When providing assistance, expert190need not be supported by client device100, and may participate in the corresponding help session using any device, such as device11(e.g., smartphone) or HMD102. As previously described, client device100may receive input from various types of input devices, such as game controllers, tablet computers, keyboards, gestures captured by video cameras, mice, touch pads, etc. Client device100can be any type of computing device having at least a memory and a processor module that is capable of connecting to the game server205over network150. Also, client device100of a corresponding player is configured for generating rendered images executed by the game title execution engine111executing locally or remotely, and for displaying the rendered images on a display. For example, the rendered images may be associated with an instance of the first gaming application executing on client device100of player5A. For example, a corresponding client device100is configured to interact with an instance of a corresponding gaming application as executed locally or remotely to implement a game play of a corresponding player, such as through input commands that are used to drive game play. In one embodiment, client device100is operating in a single-player mode for a corresponding player that is playing a gaming application. Back-end server support via the game server205may provide assistance supporting game play of a corresponding player, such as connecting the player to a live or recorded help session with an expert providing assistance, as will be described below, in accordance with one embodiment of the present disclosure. In another embodiment, multiple client devices100are operating in a multi-player mode for corresponding players that are each playing a specific gaming application.
In that case, back-end server support via the game server may provide multi-player functionality, such as through the multi-player processing engine119. In particular, multi-player processing engine119is configured for controlling a multi-player gaming session for a particular gaming application. For example, multi-player processing engine130communicates with the multi-player session controller116, which is configured to establish and maintain communication sessions with each of the users and/or players participating in the multi-player gaming session. In that manner, players in the session can communicate with each other as controlled by the multi-player session controller116. Further, multi-player processing engine119communicates with multi-player logic118in order to enable interaction between users within corresponding gaming environments of each user. In particular, state sharing module117is configured to manage states for each of the users in the multi-player gaming session. For example, state data may include game state data that defines the state of the game play (of a gaming application) for a corresponding user at a particular point. For example, game state data may include game characters, game objects, game object attributes, game attributes, game object state, graphic overlays, etc. In that manner, game state data allows for the generation of the gaming environment that exists at the corresponding point in the gaming application. Game state data may also include the state of every device used for rendering the game play, such as states of CPU, GPU, memory, register values, program counter value, programmable DMA state, buffered data for the DMA, audio chip state, CD-ROM state, etc. Game state data may also identify which parts of the executable code need to be loaded to execute the video game from that point. Game state data may be stored in database140ofFIG.1AandFIG.2, and is accessible by state sharing module117. Further, state data may include user saved data that includes information that personalizes the video game for the corresponding player. This includes information associated with the character played by the user, so that the video game is rendered with a character that may be unique to that user (e.g., location, shape, look, clothing, weaponry, etc.). In that manner, the user saved data enables generation of a character for the game play of a corresponding user, wherein the character has a state that corresponds to the point in the gaming application experienced currently by a corresponding user. For example, user saved data may include the game difficulty selected by a corresponding user115A when playing the game, game level, character attributes, character location, number of lives left, the total possible number of lives available, armor, trophy, time counter values, etc. User saved data may also include user profile data that identifies a corresponding user115A, for example. User saved data may be stored in database140. In that manner, the multi-player processing engine119using the state sharing data117and multi-player logic118is able to overlay/insert objects and characters into each of the gaming environments of the users participating in the multi-player gaming session. For example, a character of a first user is overlaid/inserted into the gaming environment of a second user. This allows for interaction between users in the multi-player gaming session via each of their respective gaming environments (e.g., as displayed on a screen). 
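The overlay/insert behavior of state sharing module117and multi-player logic118can be pictured as copying each participant's character record into the gaming environments of the other participants; the simplified structures below are stand-ins for the game state data described above, not the actual module interfaces.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CharacterState:
    user_id: str
    location: tuple
    appearance: str          # look, clothing, weaponry, ...

@dataclass
class GamingEnvironment:
    owner_id: str
    characters: List[CharacterState] = field(default_factory=list)

def share_states(environments: Dict[str, GamingEnvironment],
                 shared: Dict[str, CharacterState]) -> None:
    """Insert every other participant's character into each user's environment."""
    for owner_id, env in environments.items():
        env.characters = [state for uid, state in shared.items() if uid != owner_id]

# Example: user A's environment now contains user B's character, and vice versa.
shared = {"A": CharacterState("A", (0, 0), "wizard"),
          "B": CharacterState("B", (5, 2), "soldier")}
envs = {"A": GamingEnvironment("A"), "B": GamingEnvironment("B")}
share_states(envs, shared)
print([c.user_id for c in envs["A"].characters])   # ['B']
```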
In addition, back-end server support via the game server205may provide support services including providing real-time assistance during game play of a player playing a gaming application through a help session. As previously introduced, the help session controller120is configured to establish and manage one or more help sessions that provide assistance. For example, the controller120is configured to connect a requesting player to an expert over a communication session that is established to support the help session. The help session may be live with an expert providing live assistance, or the help session may be previously recorded. FIG.1Cillustrates a system106C providing gaming control to a plurality of players115(e.g., players5L,5M . . .5Z) playing a gaming application as executed over a cloud game network, in accordance with one embodiment of the present disclosure. In some embodiments, the cloud game network may be a game cloud system210that includes a plurality of virtual machines (VMs) running on a hypervisor of a host machine, with one or more virtual machines configured to execute a game processor module utilizing the hardware resources available to the hypervisor of the host. In one embodiment, system106C works in conjunction with system10ofFIG.1Aand/or system200ofFIG.2to provide real-time assistance during game play of a player playing a gaming application through a help session, such as connecting the player to the expert over a communication session or by providing a recorded help session. Referring now to the drawings, like referenced numerals designate identical or corresponding parts. As shown, the game cloud system210includes a game server205that provides access to a plurality of interactive video games or gaming applications. Game server205may be any type of server computing device available in the cloud, and may be configured as one or more virtual machines executing on one or more hosts. For example, game server205may manage a virtual machine supporting a game processor that instantiates an instance of a gaming application for a user. As such, a plurality of game processors of game server205associated with a plurality of virtual machines is configured to execute multiple instances of the gaming application associated with game plays of the plurality of users115. In that manner, back-end server support provides streaming of media (e.g., video, audio, etc.) of game plays of a plurality of gaming applications to a plurality of corresponding users. A plurality of players115accesses the game cloud system210via network150, wherein players (e.g., players5L,5M . . .5Z) access network150via corresponding client devices100′, wherein client device100′ may be configured similarly as client device100ofFIGS.1A-1B(e.g., including game executing engine111, etc.), or may be configured as a thin client providing that interfaces with a back end server providing computational functionality (e.g., including game executing engine211). In addition, each of the plurality of players115has access to a device11, previously introduced, configured to facilitate a help session that connects a player to an expert over a communication session or by connecting to a recorded help session, as previously described. 
In particular, a client device100′ of a corresponding player5L is configured for requesting access to gaming applications over a network150, such as the internet, and for rendering instances of a gaming application (e.g., video game) executed by the game server205and delivered to a display device associated with the corresponding player5L. For example, player5L may be interacting through client device100′ with an instance of a gaming application executing on a game processor of game server205. More particularly, an instance of the gaming application is executed by the game title execution engine211. Game logic (e.g., executable code) implementing the gaming application is stored and accessible through data store140, previously described, and is used to execute the gaming application. Game title processing engine211is able to support a plurality of gaming applications using a plurality of game logics177, as shown. In addition, expert190′ at some point may have been playing at least one gaming application in system106C and has registered as an expert. For example, the expert190′ may be playing a corresponding gaming application with cooperation of the client device100′, as previously described. In that manner, a player currently playing a gaming application in system106C may submit a request for help through the help session controller120, as previously described, and be connected with the expert190′ that, when selected, may provide assistance for the game play of the requesting player. When providing assistance, expert190′ need not be supported by client device100′, and may participate in the corresponding help session using any device, such as device11(e.g., smartphone) or HMD102. As previously described, client device100′ may receive input from various types of input devices, such as game controllers, tablet computers, keyboards, gestures captured by video cameras, mice, touch pads, etc. Client device100′ can be any type of computing device having at least a memory and a processor module that is capable of connecting to the game server205over network150. Also, client device100′ of a corresponding player is configured for generating rendered images executed by the game title execution engine211executing locally or remotely, and for displaying the rendered images on a display. For example, the rendered images may be associated with an instance of the first gaming application executing on client device100′ of player5L. For example, a corresponding client device100′ is configured to interact with an instance of a corresponding gaming application as executed locally or remotely to implement a game play of a corresponding player, such as through input commands that are used to drive game play. In another embodiment, multi-player processing engine119, previously described, provides for controlling a multi-player gaming session for a gaming application. In particular, when the multi-player processing engine119is managing the multi-player gaming session, the multi-player session controller116is configured to establish and maintain communication sessions with each of the users and/or players in the multi-player session. In that manner, players in the session can communicate with each other as controlled by the multi-player session controller116. Further, multi-player processing engine119communicates with multi-player logic118in order to enable interaction between players within corresponding gaming environments of each player.
In particular, state sharing module117is configured to manage states for each of the players in the multi-player gaming session. For example, state data may include game state data that defines the state of the game play (of a gaming application) for a corresponding player115A at a particular point, as previously described. Further, state data may include user/player saved data that includes information that personalizes the video game for the corresponding player, as previously described. For example, state data includes information associated with the user's character, so that the video game is rendered with a character that may be unique to that user (e.g., shape, look, clothing, weaponry, etc.). In that manner, the multi-player processing engine119using the state sharing data117and multi-player logic118is able to overlay/insert objects and characters into each of the gaming environments of the users participating in the multi-player gaming session. This allows for interaction between users in the multi-player gaming session via each of their respective gaming environments (e.g., as displayed on a screen). In addition, back-end server support via the game server205may provide support services including providing real-time assistance during game play of a player playing a gaming application through a help session. As previously introduced, the help session controller120is configured to establish and manage one or more help sessions that provide assistance. For example, the controller120is configured to connect a requesting player to an expert over a communication session that is established to support the help session. The help session may be live with an expert providing live assistance, or the help session may be previously recorded. FIG.2illustrates a system diagram200for enabling access and playing of gaming applications stored in a game cloud system (GCS)210, in accordance with an embodiment of the disclosure. Generally speaking, game cloud system GCS210may be a cloud computing system operating over a network220to support a plurality of users. Additionally, GCS210is configured to provide real-time assistance during game play of a player playing a gaming application by connecting the player to an expert over a communication session supporting a live help session, or by connecting the player to a recorded help session. For example, help session controller120is configured for establishing and managing help sessions. In addition, with cooperation of the help session controller120, the communication session controller is configured for generating and managing communication sessions between players and experts over one or more help sessions. Also, GCS210is configured to capture and save game context information that is used to match a player requesting help to an expert that is best suited for providing assistance. For example, the expert may have recently played the same section of the gaming application using the same type of character with the same weaponry or set of assets. In one embodiment, the game context is captured based on snapshots that are generated during the game plays, as previously described. For example, snapshot generator212may be configured for generating and/or capturing snapshots of game plays of one or more users playing one or more gaming applications. One or more user devices may be connected to network220to allow players to access services provided by GCS210and social media providers240. 
In one embodiment, game cloud system210includes a game server205, a video recorder271, a tag processor273, and an account manager274that includes a user profile manager, a game selection engine275, a game session manager285, user access logic280, a network interface290, and a social media manager295. GCS210may further include a plurality of gaming storage systems, such as a game state store, random seed store, user saved data store, and snapshot store, which may be stored generally in datastore140. Other gaming storage systems may include a game code store261, a recorded game store262, a tag data store263, video game data store264, and a game network user store265. In one embodiment, GCS210is a system that can provide gaming applications, services, gaming related digital content, and interconnectivity among systems, applications, users, and social networks. GCS210may communicate with user device230and social media providers240through social media manager295via network interface290. Social media manager295may be configured to relate one or more friends. In one embodiment, each social media provider240includes at least one social graph245that shows user social network connections. User/player5is able to access services provided by GCS210via the game session manager285. For example, account manager274enables authentication and access by player5to GCS210. Account manager274stores information about member players. For instance, a user profile for each member user may be managed by account manager274. In that manner, member information can be used by the account manager274for authentication purposes. For example, account manager274may be used to update and manage user information related to a member user/player. Additionally, game titles owned by a member player may be managed by account manager274. In that manner, gaming applications stored in data store264are made available to any member player who owns those gaming applications. In one embodiment, a user, e.g., player5, can access the services provided by GCS210and social media providers240by way of user device230through connections over network220. User device230can include any type of device having a processor and memory, wired or wireless, portable or not portable. In one embodiment, user device230can be in the form of a smartphone, a tablet computer, or hybrids that provide touch screen capability in a portable form factor. One exemplary device can include a portable phone device that runs an operating system and is provided with access to various applications (apps) that may be obtained over network220, and executed on the local portable device (e.g., smartphone, tablet, laptop, desktop, etc.). User device230includes a display232that acts as an interface for player5to send input commands236and display data and/or information235received from GCS210and social media providers240. Display232can be configured as a touch-screen, or a display typically provided by a flat-panel display, a cathode ray tube (CRT), or other device capable of rendering a display. Alternatively, the user device230can have its display232separate from the device, similar to a desktop computer or a laptop computer. Additional devices231(e.g., device11ofFIG.1A) may be available to player5for purposes of providing real-time assistance in support of game play of a player. In one embodiment, user device230is configured to communicate with GCS210to enable player5to play a gaming application.
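As one hypothetical illustration of the authentication and title-ownership checks handled by account manager274, a minimal gate might look like the following; the class shape and the in-memory stores are invented for the sketch.

```python
from typing import Dict, Set

class AccountManager:
    """Illustrative authentication and title-ownership check."""

    def __init__(self, credentials: Dict[str, str], owned_titles: Dict[str, Set[str]]):
        self.credentials = credentials       # member_id -> secret/token
        self.owned_titles = owned_titles     # member_id -> set of owned game titles

    def authenticate(self, member_id: str, token: str) -> bool:
        return self.credentials.get(member_id) == token

    def can_play(self, member_id: str, token: str, game_title: str) -> bool:
        # A stored title is only made available to an authenticated member who owns it.
        return self.authenticate(member_id, token) and \
               game_title in self.owned_titles.get(member_id, set())

manager = AccountManager({"player5": "tok123"}, {"player5": {"Sample Quest"}})
print(manager.can_play("player5", "tok123", "Sample Quest"))   # True
print(manager.can_play("player5", "tok123", "Other Game"))     # False
```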
In some embodiments, the GCS210may include a plurality of virtual machines (VMs) running on a hypervisor of a host machine, with one or more virtual machines configured to execute a game processor module utilizing the hardware resources available to the hypervisor of the host. For example, player5may select (e.g., by game title, etc.) a gaming application that is available in the video game data store264via the game selection engine275. The gaming application may be played within a single player gaming environment or in a multi-player gaming environment. In that manner, the selected gaming application is enabled and loaded for execution by game server205on the GCS210. In one embodiment, game play is primarily executed in the GCS210, such that user device230will receive a stream of game video frames235from GCS210, and user input commands236for driving the game play is transmitted back to the GCS210. The received video frames235from the streaming game play are shown in display232of user device230. In other embodiments, the GCS210is configured to support a plurality of local computing devices supporting a plurality of users, wherein each local computing device may be executing an instance of a gaming application, such as in a single-player gaming application or multi-player gaming application. For example, in a multi-player gaming environment, while the gaming application is executing locally, the cloud game network concurrently receives information (e.g., game state data) from each local computing device and distributes that information accordingly throughout one or more of the local computing devices so that each user is able to interact with other users (e.g., through corresponding characters in the video game) in the gaming environment of the multi-player gaming application. In that manner, the cloud game network coordinates and combines the game plays for each of the users within the multi-player gaming environment. In one embodiment, after player5chooses an available game title to play, a game session for the chosen game title may be initiated by the user Uo through game session manager285. Game session manager285first accesses game state store in data store140to retrieve the saved game state of the last session played by the user Uo (for the selected game), if any, so that the player5can restart game play from a previous game play stop point. Once the resume or start point is identified, the game session manager285may inform game execution engine in game processor201to execute the game code of the chosen game title from game code store261. After a game session is initiated, game session manager285may pass the game video frames235(i.e., streaming video data), via network interface290to a user device, e.g., user device230. During game play, game session manager285may communicate with game processor201, recording engine271, and tag processor273to generate or save a recording (e.g., video) of the game play or game play session. In one embodiment, the video recording of the game play can include tag content entered or provided during game play, and other game related metadata. Tag content may also be saved via snapshots. The video recording of game play, along with any game metrics corresponding to that game play, may be saved in recorded game store262. Any tag content may be saved in tag data stored263. 
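The start/resume behavior of game session manager285(fetch the last saved game state if one exists, execute from that point, and stream rendered frames back to the user device) might be organized roughly as sketched below; the store layout and the injected callables are placeholders, not the actual interfaces.

```python
from typing import Optional, Dict

def start_game_session(member_id: str,
                       game_title: str,
                       game_state_store: Dict[str, dict],
                       execute_game,          # callable(title, resume_state) -> frame iterator
                       send_frame) -> None:   # callable(member_id, frame)
    """Resume from the last saved state when available, otherwise start fresh,
    then stream rendered frames back to the user's device."""
    resume_state: Optional[dict] = game_state_store.get(f"{member_id}:{game_title}")
    if resume_state is not None:
        print("Resuming from previous stop point")
    for frame in execute_game(game_title, resume_state):
        send_frame(member_id, frame)           # e.g., streaming video frames over the network

# Example with stand-in execution and delivery functions.
def fake_execute(title, state):
    start = (state or {}).get("frame", 0)
    for i in range(start, start + 3):
        yield f"{title}-frame-{i}"

start_game_session("player5", "Sample Quest",
                   {"player5:Sample Quest": {"frame": 10}},
                   fake_execute,
                   lambda who, frame: print(who, frame))
```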
During game play, game session manager285may communicate with game processor201of game server205to deliver and obtain user input commands236that are used to influence the outcome of a corresponding game play of a gaming application. Input commands236entered by player5may be transmitted from user device230to game session manager285of GCS210. Input commands236, including input commands used to drive game play, may include user interactive input, such as including tag content (e.g., texts, images, video recording clips, etc.). Game input commands as well as any user play metrics (how long the user plays the game, etc.) may be stored in game network user store. Select information related to game play for a gaming application may be used to enable multiple features that may be available to the user. Because game plays are executed on GCS210by multiple users, information generated and stored from those game plays enable any requesting user to experience the game play of other users, particularly when game plays are executed over GCS210. In particular, snapshot generator212of GCS210is configured to save snapshots generated by the game play of users playing gaming applications through GCS210. In the case of player5, user device provides an interface allowing player5to engage with the gaming application during the game play. Snapshots of the game play by user Uo is generated and saved on GCS210. Snapshot generator212may be executing external to game server205as shown inFIG.2, or may be executing internal to game server205. In addition, the information collected from the game plays of players and experts may be used to match a player to an expert when the player is requesting help. In that manner, the expert is best able to provide assistance to the player given a particular game context experienced by the player, wherein the expert is selected from a pool of experts. For example, the selected expert may have played the same gaming application using the same character, and using the same assets (e.g., weapons, etc.), using approximately the same skills, etc. In addition, the expert may have recently played the same level so that the gaming application is fresh in the mind of the expert. Because the expert has recently played the gaming application, this may reduce the chance of the expert revealing any spoilers, as the expert may not have had a chance to experience any spoiler information. In implementations, the help session may be delivered over a network220to the user device231or user device230for establishing the communication session of the help session (e.g., voice, text, video, etc.). For example, the help session may be presented to user device230(e.g., display connected to a gaming console or client device). In another example, the help session may be presented to user device231used in establishing a communication session (e.g., providing text, audio, video, etc.). User device231may be a mobile device (e.g., smartphone), such as a device used by an expert during a help session. In that case, the expert need not have access to a gaming console or client device as the expert is not playing the gaming application per se. FIGS.3-8are described within the context of a user playing a gaming application. In general, the gaming application may be any interactive game that responds to user input. 
In particular,FIGS.3-8describe how a player playing a gaming application may receive real-time assistance either through connecting with an expert over a live help session, or by providing access to a recorded help session. With the detailed description of the various modules of the gaming server and client device communicating over a network, a method for providing gaming assistance supporting game play of a corresponding player is now described in relation to flow diagram300A ofFIG.3A, in accordance with one embodiment of the present disclosure. Flow diagram300A illustrates the process and data flow of operations involved at the game server side for purposes of connecting a player playing a gaming application to an expert providing assistance (e.g., through a live help session) or to a recorded help session. The help session may be transmitted to a device of the player that may be separate from another device displaying the game play of the player playing a gaming application. In particular, the method of flow diagram300A may be performed at least in part by the help session controller120ofFIGS.1A-1C and2. At310, the method includes receiving over a network at a back-end server information related to a plurality of game plays of a plurality of players for a gaming application. The players may be currently playing the gaming application, or have played the gaming application. In some embodiments, the information includes the game plays. In some embodiments, the information includes metadata and/or information generated relating to the game play, such as game state data. For example, the information may include game state information and user/player saved information, as previously described. The information may include snapshot information that could provide information enabling execution of an instance of the video game beginning from a point in the video game associated with a corresponding snapshot. For example, the game state information may define the state of the game play at a corresponding point, to include character information (e.g., type, race, etc.), the gaming application, where the character is located, what level is being played, assets of the character, game objects, game object attributes, game attributes, game object state, graphic overlays, character assets, skill set of character, geographic location of character in gaming environment/world, the current quest and/or task presented to the player, loadout, skills set of the character, etc. The game state data allows for generation of the gaming environment that existed at the corresponding point in the game play. Further, user/player information that related to the player may include information that personalizes the video game for the corresponding player, such as skill or ability of the player, the overall readiness that the player seeks help, recency of playing the gaming application by the player, game difficulty selected by the user5when playing the game, game level, character attributes, character location, number of lives left, the total possible number of lives available, armor, trophy, time counter values, and other asset information, etc. At320, the method includes determining from the information a current game context of a first game play of a first player. The first game context is related to the current state of the game play of the first player. Specifically, information is received relating to a current game play of a first player. 
In one case, the current game play is live, such that the first player is currently playing the gaming application. Game context defines the gaming environment at a particular point in the game play. A current game context defines the gaming environment at a current point in a corresponding game play. Game contexts may be defined for one or more points in a corresponding game play. For example, the game context may define the character of a player, the various characteristics of that player, the assets associated with that player, the tasks presented to the player, etc. The game context may be based on or closely related to the previously received metadata and/or information generated relating to the game play. At330, the method includes determining from the information a plurality of historical expert game contexts of a plurality of expert game plays of experts that have played the gaming application. In one implementation, an expert may also be currently playing the gaming application and generating new historical expert game contexts through the corresponding game play. The expert game plays are generated from players classified as experts for the gaming application. As previously described, generally game contexts may be defined for one or more points in a corresponding game play, such as those for one or more experts. The expert game plays are taken from the plurality of game plays, and specifically from game plays of players classified as experts. A player may be classified through self-registration, through qualification, or through any other method. In one embodiment, the expert game contexts have been simultaneously determined when determining game contexts of the plurality of game plays of all the players. As such, once a player is classified as an expert, the game context information of the corresponding game play of the expert can be identified as one of the expert game contexts. In addition, the game context information may be determined for multiple points during the corresponding game play. For example, game context information for a first expert may include a first game context at a first point in the game play, a second game context at a second point in the game play . . . and an Nth game context at an Nth point in the game play. For example, the game play for a corresponding expert may have a plurality of game contexts, including game contexts for facing a boss at level1, facing a boss at level2, progress within a given side quest, etc. When multiple players have been classified as experts, the game context information for each expert may be determined. Classified experts for a particular gaming application make up a set of the plurality of players. As previously described, the experts may be self-registered, such as without any qualifying criteria. In another implementation, the experts may have some qualification, such as the skill of the player, accomplishing a task, finishing a quest, finishing a portion of the game within a time period, finishing the game within a time period, etc., as previously described. After reaching the qualification, the expert may self-register, and/or may automatically be labeled as an expert (e.g., with authorization). Different players and/or experts playing the same gaming application may have the same or similar game contexts within their corresponding game plays.
For example, by collecting game contexts of multiple players all playing the same gaming application, game plays of different players may be aligned as having similar characters with the same assets, similar playing styles of different players, similar routing through the gaming world of a gaming application, etc. Game context information may be used to match a player with another player that is classified as an expert (e.g., self-registration, qualified, etc.), such that the expert is able to provide assistance in the game play of the player requesting the assistance, as will be described below. At340, the method includes receiving an assistance query related to the first game play. That is, the first player is also making a request for assistance, or making a request notification, etc. For example, the query may be specifically directed to how to beat a particular point in the game (e.g., level boss, quest, task, etc.), or may be directed to gaining information about an object (e.g., a boss's name, an object encountered in the game play, etc.), or may be directed to an overall objective for the player at this point in the gaming application. In addition, the current game context of the first player is related to the state of the game that is closest to the point in the game play from which the request is made. For example, the game context may provide information relating to the character of the first player, the assets held by the character, the level in the gaming application encountered by the character, and the scene in the level. Any query or request for assistance by the first player would necessarily be related to the current game context. As such, the game play of another player (e.g., a classified expert, friend, etc.) that has a game context that closely matches the current game context of the first player may have knowledge of the gaming application that is helpful to the first player. At350, the method includes comparing the current game context of the first player (requesting help) to the plurality of historical expert game contexts to determine how closely each expert matches the first player, such as in relation to the game contexts of their respective game plays. That is, the comparison determines how closely the game play of each expert matches the game play of the first player. In one embodiment, the comparison is performed for each game context captured for a particular expert, and the closest game context to the first game context of the first player is used as being representative of that expert. In another embodiment, the game context information collected at various points during the game play of a particular expert may be combined and used for comparison to the first game context of the first player. In one implementation, at least one expert is determined having a corresponding historical expert game context that matches the first game context. At360, the method includes assigning to the first player a first expert for obtaining assistance. That is, the first expert can then provide assistance to the first player in relation to his or her game play. Various methods of selection can be implemented for purposes of selecting the first expert from the pool of experts. For example, the first expert is selected based on the game contexts of the first player and experts in the set/pool of experts. In one embodiment, the first expert is selected based on the quality of the matching between game contexts.
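One hedged way to compute the matching values referenced in this comparison is to score similarity between the requesting player's current context and each historical expert context, representing each expert by the closest of those contexts; the similarity formula and weights below are illustrative and not prescribed by the method.

```python
from typing import Dict, List

def context_similarity(player_ctx: dict, expert_ctx: dict) -> float:
    """Weighted overlap between two simplified game contexts (illustrative weights)."""
    score = 0.0
    score += 0.4 if player_ctx["level"] == expert_ctx["level"] else 0.0
    score += 0.3 if player_ctx["quest"] == expert_ctx["quest"] else 0.0
    shared_assets = set(player_ctx["assets"]) & set(expert_ctx["assets"])
    score += 0.3 * (len(shared_assets) / max(1, len(player_ctx["assets"])))
    return score

def matching_values(player_ctx: dict,
                    expert_histories: Dict[str, List[dict]]) -> Dict[str, float]:
    """Represent each expert by the closest of their historical contexts."""
    return {expert_id: max(context_similarity(player_ctx, ctx) for ctx in contexts)
            for expert_id, contexts in expert_histories.items() if contexts}

player = {"level": "3", "quest": "beat boss-a", "assets": ["sword", "shield"]}
experts = {"expertA": [{"level": "3", "quest": "beat boss-a", "assets": ["sword"]}],
           "expertB": [{"level": "1", "quest": "intro", "assets": []}]}
print(matching_values(player, experts))   # expertA scores highest
```

Selecting the expert with the largest value corresponds to the quality-based selection mentioned above; the other strategies (first available, race to accept, sequential polling) would replace only the final selection step.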
For instance, the set of matched expert game contexts has matching values indicating the quality of matching the corresponding expert game context to the first game context, as will be further described in relation toFIG.3B. For example, the method determines a first matching value having the highest value. The first matching value corresponds to an expert game context. In one case, the first matching value corresponds to the first expert game context of the first expert. As such, the first expert is selected for the help session, wherein the first expert is best suited from the pool of experts to provide help to the first player, based on game contexts. In another embodiment, the first expert from the pool of experts is selected based on an availability factor. This provides a straightforward approach to matching experts to players requesting help. In particular, this approach may be beneficial when the gaming application is first released. Because of the recent release, there may not be many experts who have registered, and it may be difficult to do any comparisons between experts due to the lack of information. In one implementation, the first available expert is selected and assigned to the first player for the help session. In other embodiments, an expert is selected based on response times, such as in a race to respond from qualified and/or available experts who are most likely to be able to help, as will be further described inFIG.3B. In another embodiment, experts are polled one at a time to determine whether they want to provide assistance. During the polling process, the first expert to respond affirmatively is assigned to provide assistance, as will be further described inFIG.3B. In particular, at370, the method includes generating a communication session that connects the first player and the first expert. In one embodiment, a communication session manager at the back-end server acts as an intermediary for establishing and managing the communication session. At a minimum, the communication session is established between a device of the first expert and a device of the first player. The communication session is used to enable the expert to provide assistance to the player, such as through a help session between the first expert and the first player. In one embodiment, the communication session is configured for text, audio, video, embedded audio and video, etc. For example, the method may include one or more of establishing a voice channel, establishing a text channel, and establishing a video channel (e.g., embedded video) configured for a video chat in the communication session. Also, the communication session manager may act to create new sessions to allow for the different forms of communication, such as providing ShareScreen functionality, SharePlay functionality, etc. In one embodiment, the communication session may be a peer-to-peer connection or may include the back-end server acting as an intermediate node. That is, in the peer-to-peer case, once created by the communication session manager, the communication session is a direct communication path between devices of the first player and the first expert. In another embodiment, the communication session may flow through the back-end server.
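As a rough sketch of step370, a communication session might be assembled from the requested channel types as shown below. The CommunicationSession class, the channel names, and the create_session helper are assumptions for illustration, not the actual API of the communication session manager.

# Illustrative sketch only: class and channel names are assumptions.
from dataclasses import dataclass, field
from typing import List

SUPPORTED_CHANNELS = {"voice", "text", "video", "screen_share", "share_play"}

@dataclass
class CommunicationSession:
    player_device: str
    expert_device: str
    peer_to_peer: bool                 # True: direct path; False: routed through the back-end server
    channels: List[str] = field(default_factory=list)

def create_session(player_device: str, expert_device: str,
                   requested_channels: List[str], peer_to_peer: bool = True) -> CommunicationSession:
    """Open a session between the player's and the expert's devices with the requested channels."""
    channels = [c for c in requested_channels if c in SUPPORTED_CHANNELS]
    return CommunicationSession(player_device, expert_device, peer_to_peer, channels)

# Example: a help session with voice, text, and an embedded-video chat channel.
session = create_session("device_P1", "device_E5", ["voice", "text", "video"])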
In one embodiment, the first expert may share the screen of the first player, such as through a share screen functionality, as previously described. By viewing the game play of the first player, the first expert may gain a better sense of the problem facing the first player, and therefore provide better help. The share screen functionality is implemented through the communication session, in one embodiment. The request to share the screen may be made by either the first player or the first expert. For example, the first expert may make a request to share video of the game play of the first player. In one implementation, the request is received by the help session controller at a back-end server. A notification of the request is sent to the device of the first player. For instance, the notification may be delivered from the help session controller. Authorization is received by the help session controller from the device of the first player, wherein the authorization is provided by the first player to share the video of the game play with the expert. As such, the game play of the first player is streamed to the device of the first expert. For example, the help session controller is able to facilitate the streaming through the communication session, or through an independent streaming channel. In another embodiment, the first expert may take control of the game play of the first player, such as through a share play functionality, as previously described. Through share play, the expert may take over control of the game play, for example to complete an objective that the first player is unable to perform. The request to share play may be made by either the first player or the first expert. For example, a request from the device of the first expert is received, wherein the request asks to share control of the game play of the first player. The request from the expert may be in the form of an offer of assistance from the expert to accomplish the objective within the game play of the user. The request may be received by the help session controller at the back-end server. A notification of the request may be generated by the help session controller, and delivered to the device of the first player from the help session controller. Authorization is received by the help session controller from the device of the first player, wherein the authorization is provided by the first player to share control of the game play with the expert. In that manner, the expert is able to take control of the game play by submitting gaming input commands. In one embodiment, a set of input controls or commands are received by the help session controller from the device of the first expert. A block is placed on input commands from the input controller of the first player, such that the gaming engine (e.g., local console or back-end gaming processor) blocks input commands originating from the controller device of the first player, and passes through input commands originating from the controller device of the first expert. For example, the help session controller may send an instruction to the processor (e.g., gaming engine) executing the gaming application for the game play of the first player to block input controls associated with the first player. As such, the set of input controls from the controller device of the first expert is delivered to the processor (e.g., gaming processor) executing the gaming application for the game play of the first player. In addition, control may be passed back to the first player at any point. For example, the first player may have the ability to take back control of the game play at any time (such as by using a kill command), as previously described.
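The input gating just described can be summarized by the following minimal sketch: while share play is authorized, commands originating from the player's controller are blocked, commands from the expert's controller are passed through, and a kill command from the player immediately returns control. The SharePlayRouter class and the command names are hypothetical, offered only to make the routing concrete.

# Illustrative sketch only: the routing logic and names are assumptions.
from typing import Optional

class SharePlayRouter:
    """Routes controller input to the game engine during a share play session."""
    def __init__(self, player_id: str, expert_id: str) -> None:
        self.player_id = player_id
        self.expert_id = expert_id
        self.expert_in_control = False

    def grant_control_to_expert(self) -> None:
        # Called after the player authorizes share play.
        self.expert_in_control = True

    def handle_input(self, source_id: str, command: str) -> Optional[str]:
        # A kill command from the player always hands control back immediately.
        if source_id == self.player_id and command == "KILL_SHARE_PLAY":
            self.expert_in_control = False
            return None
        if self.expert_in_control:
            # Block the player's commands, pass the expert's commands through.
            return command if source_id == self.expert_id else None
        return command if source_id == self.player_id else None

# Example: the expert's inputs drive the game play until the player issues the kill command.
router = SharePlayRouter("P1", "E5")
router.grant_control_to_expert()
assert router.handle_input("E5", "A_BUTTON") == "A_BUTTON"
assert router.handle_input("P1", "X_BUTTON") is None
router.handle_input("P1", "KILL_SHARE_PLAY")
assert router.handle_input("P1", "X_BUTTON") == "X_BUTTON"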
FIG.3Bis a flow diagram300B illustrating steps in a method for determining the form of assistance being provided to a player playing a gaming application and requesting assistance, in accordance with one embodiment of the present disclosure. Flow diagram300B illustrates the process and data flow of operations involved at the game server side for purposes of connecting a player playing a gaming application to an expert providing assistance (e.g., through a live help session) or to a recorded help session. The help session may be transmitted to a device of the player that may be separate from another device displaying the game play of the player playing a gaming application. Flow diagram300B may be implemented in cooperation with flow diagram300A, such that flow diagram300B is an extension of flow diagram300A, in one embodiment. In particular, the method of flow diagram300B may be performed at least in part by the help session controller120ofFIGS.1A-1C and2. At350′, the method includes determining a plurality of matching vectors when performing the matching previously described in350. That is, a plurality of matching vectors is determined between the first game context and the plurality of historical expert game contexts. Each matching vector is associated with a corresponding historical expert game context of a corresponding expert. Also, each matching vector has a matching value (e.g., a quality factor or Q-factor) indicating the quality of matching the corresponding historical expert game context to the first game context. At351, the method includes determining a set of matched historical expert game contexts having matching values that exceed a threshold. This filters the set/pool of experts to a smaller set of experts that more closely matches the first game context of the first player. Experts in the smaller set, or those whose expert game contexts have matching values that exceed the threshold, are better suited to providing help to the first player, given the current context of the game play of the first player. At decision step361, the method determines whether any experts in the smaller set of experts are available to provide assistance in a timely manner (e.g., immediately, in 5 minutes, etc.). For example, there may be much activity in the first 6 months to a year of a gaming application, and experts are readily available to provide fresh and knowledgeable assistance. Beyond that timeframe, the assistance provided by experts may be stale and these experts may be less available. For example, those experts may need some time to come up to speed when providing assistance. If no expert is available to provide live assistance, then the method proceeds to362to determine one or more recorded help sessions having historical expert game contexts that have matching values of vectors that exceed the threshold. For example, the historical expert game contexts may be analyzed as per350′ and351described above. At363, a recorded help session that is best suited for responding to the assistance query of the first player is streamed to the device of the first player. For example, the selected recorded help session may have the highest matching value. On the other hand, if there is an expert available to provide live assistance, the method can take one or more paths for selecting an expert as indicated at the “OR” step369.
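The decision flow of FIG.3B might be sketched roughly as follows: matching values are thresholded (step351), an available matched expert is chosen if one exists (one of the “OR” paths of step369, described next, is stubbed here by simply taking the best-matching available candidate), and otherwise the best-matching recorded help session is streamed (steps362and363). The data shapes, the threshold value, and the function names are assumptions for illustration.

# Illustrative sketch only: data shapes and function names are assumptions.
from typing import Dict, List

def filter_by_threshold(matching_values: Dict[str, float], threshold: float) -> Dict[str, float]:
    """Keep only experts whose match quality (Q-factor) exceeds the threshold (step 351)."""
    return {expert: q for expert, q in matching_values.items() if q > threshold}

def select_assistance(matching_values: Dict[str, float],
                      available: List[str],
                      recorded_sessions: Dict[str, float],
                      threshold: float = 3.5) -> Dict[str, str]:
    matched = filter_by_threshold(matching_values, threshold)
    live_candidates = [e for e in matched if e in available]
    if live_candidates:
        # One of the "OR" paths of step 369: the notification race / polling is stubbed out
        # here by picking the best-matching available candidate.
        chosen = max(live_candidates, key=lambda e: matched[e])
        return {"type": "live", "expert": chosen}
    # Steps 362/363: fall back to the best-matching recorded help session.
    best_recording = max(recorded_sessions, key=recorded_sessions.get)
    return {"type": "recorded", "session": best_recording}

# Example: an expert with Q = 4.6 passes the threshold and is available, so a live session is chosen.
print(select_assistance({"E1": 3.0, "E5": 4.6}, available=["E5"], recorded_sessions={"rec_42": 4.1}))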
In one embodiment, an expert is selected based on response times. For example, when a player requests help for that game, the request is sent to the available experts who are most likely to be able to help, such as the previously determined smaller set of experts. In one implementation, at366a broadcast is performed providing notification of the help session that is generated in response to the query from the first player. The notification is broadcast to a plurality of devices of the set of matched experts corresponding to the set of matched expert game contexts (e.g., those meeting the threshold), previously described. In one implementation, at367the first expert to accept the help request is selected and assigned to the help session with the first player. For instance, in a race of responses, it is determined that a first response to the notification has the shortest response time (e.g., from all the received responses), wherein the first response is received from the first expert. At370′, the responding expert is assigned to the first player for obtaining the assistance. In another embodiment, at364the method includes sending a notification to the next available expert in the smaller set of experts (e.g., having matched historical expert game contexts having matching values that exceed a threshold). The next available expert may be determined based on having the highest matching value of the remaining experts (those not notified) in the smaller set. At decision step365, the method determines if any positive response is received from the next available expert. If not, the method returns to364to resend the notification to the newly selected “next available expert,” as previously described. If yes, the method proceeds to370′ wherein the responding expert is assigned to the first player for obtaining assistance. FIG.4Ais a data flow diagram illustrating the flow of data in a system or method providing real-time assistance during game play of a player playing a gaming application by connecting the player to an expert over a communication session, in accordance with one embodiment of the present disclosure.FIG.4may be representative of the flow of data through the systems and methods ofFIGS.1A-1C and2in embodiments. As shown, player1(P1) is playing a gaming application. Player P1may encounter a roadblock during his or her game play, and request information and/or assistance. For instance, a query from player P1is made through user interface110-P1and delivered through network150back to the help session controller120of a back-end server, as previously described. In particular, the matching engine123in cooperation with the help session controller120is configured to match game contexts of the player P1and a pool of experts440. The pool of experts is taken from a plurality of players410, wherein the players are playing one or more gaming applications. The experts in the pool440all have played the gaming application, and for example are registered as experts of the gaming application. For example, pool440includes one or more experts E1. . . E5. . . E103. . . En. Game contexts420are input into the matching engine123for comparison. For example, the input includes game context420-P1for player P1, game context420-E1for expert E1, game context420-E103for expert E103, game context420-E64for expert E64, game context420-E5for expert E5. . . and game context420-En for expert En. The matching process performed by the matching engine was previously described.
Basically, the game context420-P1of player P1is compared to each of the game contexts associated with the pool of experts440. Matching vectors are determined for each of the game contexts, wherein each matching vector has a corresponding matching value (e.g., quality factor or Q-factor) indicating the quality of matching the corresponding expert game context to the game context420-P1of player P1. The matching engine123is configured to select one of the experts from the pool of experts440. As shown, expert E5is selected, and provided as an output435from the matching engine123. The output435is provided to the help session controller120for purposes of generating and managing the help session providing assistance to player P1. As previously described, one or more methods may be implemented for selection of the expert. For example, the pool of experts440may be further filtered by applying a threshold to the matching values, wherein experts associated with matching values that meet the threshold criteria are considered for selection. In one implementation, the highest quality matching value is used for selection of the expert. That is, the highest matching value is used for selection. In another example, a notification of a help session request is delivered to experts associated with matching values that meet the threshold criteria. The expert that responds first to the notification may be selected for the help session. In still another example, any of the experts associated with matching values that meet the threshold criteria may be selected, such as through random selection, first selection, etc. A further discussion of the game contexts420and the matching process of the matching engine123is provided in relation toFIGS.5A-5B. In one embodiment, rather than matching the player P1to an expert, the matching engine123may select a friend of the player from a pool of friends. For example, the friends may be social network friends established through one or more social networks. The help session controller is configured to establish and manage a help session to provide real-time assistance to player P1. For example, a communication session is generated between a device of player P1(e.g., the user interface110-P1) and a device of the expert E5(e.g., the user interface110-E5). In one embodiment, the communication session is generated between a communication session manager of the help session controller120, the device of the player P1and a device of the expert E5. In another embodiment, the communication session is generated and establishes direct communication between the device of player P1and the device of expert E5. One or more communication channels may be established in the communication session. For example, one or more of a voice channel451, a text channel452, a screen share channel453, and/or a share play channel may be established. As shown, the voice channel451is a two-way communication path so that player P1and expert E5can talk to and listen to each other's voice communication. Also, the text channel452is a two-way communication path so that the player P1and expert E5can communicate with each other by texting. In addition, the screen share channel453may be a one-way communication path so that video from the game play of player P1is delivered to the device of expert E5for viewing.
Further, the share play channel454may be a two-way communication path so that input controls may be communicated from the expert E5to the gaming engine local to the player P1, or to another gaming engine at a back-end server. A separate control channel may be established to pass control and other information between the help session controller120and user interface110-P1or to user interface110-E5. For example, instructions may be delivered to the user interface110-P1that block input controls originating from player P1, or to send video over the screen share channel453. In addition, rating information may be delivered over the rating channel455. For example, after the help session, the player P1may provide a rating of the help session over channel455. In addition, player P1may provide a rating of the overall performance of expert E5(e.g., personality, helpfulness, ability to control the release of spoilers, depth of knowledge for the gaming application, etc.). Also, expert E5may provide a rating of the player P1(e.g., level of cooperation, ability to accept help, personality, gratitude, etc.). In one embodiment, the help session is implemented on a second computing device associated with the player P1concurrent with the game play of the user. For example, in one embodiment there may be two communication channels delivering information, such as a first communication channel established to deliver data representative of game play of the user to a first computing device of player P1, and a second communication channel established to deliver data associated with the help session to the second computing device of player P1. For example, the first computing device may be a local gaming console and/or display, and the second computing device may be a smartphone. In another embodiment, the help session may be delivered along with the data representative of game play of the user, such as through a split screen including a first screen showing the game play and a second screen showing the help session. FIG.4Bis a data flow diagram illustrating the game play of an expert E5playing a gaming application executing locally using game state data confined to the context of a player P1requesting assistance, wherein the game play of the expert E5is streamed to the player P1, in accordance with one embodiment of the present disclosure.FIG.4Bprovides further illustration of the flow of information between the player P1and the selected expert E5as described inFIG.4A. As shown, player P1is sending a query that requests assistance during a game play of the player playing a gaming application. The query is delivered to a back-end server205over path "A", such as through a local computing device100of the player P1. The local computing device100may be a gaming console, wherein the gaming application may be executing on the device100, or may be executing in a cloud gaming network communicating with local device100. Server205sends a notification to one or more experts E1, E2. . . En, as previously described. For example, the notification may be broadcast to multiple experts or to one expert at a time. For example, a notification is delivered to a device of expert E5along path "B", and an acceptance of the request to provide assistance is also delivered back to the server205along path "B". Instructions and/or communication may be passed between the server205and the device of expert E5over path "F". For example, the instructions may be used to establish one or more communication sessions.
At this point, server205may establish a communication session for expert E5to provide assistance to player P1. In one embodiment, the communication session is established between devices of the player P1and expert E5through server205(e.g., paths "C", "D", and "E"). In another embodiment, the communication session is established between devices of player P1and expert E5through a peer-to-peer network connection (e.g., path "G"). For example, the peer-to-peer network connection may be a WebRTC (web real-time communication) connection that allows web browsers and mobile applications on one or more devices to communicate with real-time communication (RTC) through application programming interfaces (APIs). As previously described, various forms of communication may be used to enable the expert E5to provide assistance, such as over path "E", including text, voice, video, video chat, etc. In one embodiment, a ShareScreen request is made to share the game play of player P1with expert E5, wherein the game play (e.g., video, audio, etc.) may be delivered from the device100of player P1to device of expert E5over path "C", or through peer-to-peer connection path "G". In addition, a SharePlay request may be made to share the controls of the game play between player P1and expert E5, wherein the controller input by the expert E5is used to control the game play. In that case, the controller input is delivered from the device of expert E5to the device100of player P1over path "D", or through peer-to-peer connection path "G". In one embodiment, the communication provided through paths "C", "D", and/or "E" may be provided over peer-to-peer connection path "G". In still another embodiment, to protect the game play of the player P1, the expert E5generates an independent expert game play that is limited to the current context of the player P1, such that the game play of expert E5can be focused on providing assistance to the player P1that is relevant to the assistance query. For example, the player P1may not want anyone to contaminate his or her game play, such that player P1wants to finish the gaming application without an expert playing the game play to get through a difficult task. The player P1may want to see how a difficult task may be performed and/or beaten. As such, limited state information may be provided to a device of the expert E5. The limited state information may be game state data that provides just enough information to replicate the current context of player P1on the device of expert E5. In that manner, the expert E5can play the gaming application on a local device to generate expert game play for the current context, and stream the expert game play back to the player P1. For example, the expert E5may be using a local mobile device411to provide gaming assistance, in accordance with one embodiment of the present disclosure. The mobile device411may be a tablet, or mobile phone, etc. The limited state information is loaded onto the mobile device411to execute the gaming application within the limited current context. The limited state information may include formatting data so that the gaming application may be executed within and displayed on device411. In addition, input control buttons may be generated and displayed on a touch surface of device411so that the expert E5can generate input controls. As shown, the game play of expert E5as executed on mobile device411is delivered to device100of player P1for interaction (e.g., viewing, etc.). For example, player P1is viewing the expert game play E5on user interface110-P1, as previously introduced. User interface110-P1may have one or more windows showing the expert game play, communication from the expert E5(e.g., video chat, text, voice, etc.), input controller sequence, etc. The streamed information may be provided in a peer-to-peer connection (e.g., WebRTC) over path G1, or through a communication session having the server as an intermediary node.
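One way to think about the "just enough" game state described above is sketched below: a subset of the player's full game state is copied so the expert's device can recreate the current context without receiving the rest of the player's saved progress. Which fields count as the limited state, and the field and function names, are assumptions for illustration only.

# Illustrative sketch only: which fields count as "limited" state is an assumption.
from typing import Dict, Any

def extract_limited_state(full_game_state: Dict[str, Any]) -> Dict[str, Any]:
    """Copy only the fields needed to replicate the player's current context on the
    expert's device, leaving out the rest of the player's saved progress."""
    keep = ("level", "scene", "character", "loadout", "boss", "checkpoint")
    limited = {k: full_game_state[k] for k in keep if k in full_game_state}
    # Formatting hint so the game can be executed and displayed on the expert's device
    # (e.g., a tablet or phone with on-screen input controls).
    limited["display_profile"] = "mobile_touch"
    return limited

# Example: the expert's device receives only the fields relevant to the assistance query.
state = {"level": 3, "scene": "boss_arena", "character": "Kratos",
         "loadout": ["sword", "shield"], "inventory_full": ["..."], "save_slots": 5}
print(extract_limited_state(state))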
In another embodiment, the expert E5may have access to a local computing device, such as game console413, or computer processor. Expert E5may be playing the same gaming application or a different application, or may be readily available to play any gaming application through game console413, controller406, and display412. Upon receiving the notification, the server205may send instructions to the game console413to load up the gaming application. The game state relating to the current context of player P1may also be loaded. In one case, the gaming application is available to the game console413, such as through local memory, or through cloud gaming network services. In that manner, if the expert E5is requested to provide assistance through his or her own game play, the gaming application is ready to receive control input for the given current context. The game console may receive formatting information so that the gaming application can execute on the game console413and be responsive to controller input provided by the controller406. As shown, the game play of expert E5as executed on gaming console413(or in cooperation with gaming console413, such as executing on a cloud gaming network) is delivered to device100of player P1for interaction (e.g., viewing, etc.). For example, player P1is viewing the expert game play E5on user interface110-P1, as previously introduced. User interface110-P1may have one or more windows showing the expert game play, communication from the expert E5(e.g., video chat, text, voice, etc.), input controller sequence, etc. The streamed information may be provided in a peer-to-peer connection (e.g., WebRTC) over path G2, or through a communication session having the server as an intermediary node. FIG.5Ais an illustration of the collection of game context of a player playing a gaming application, and the matching of the game context to game contexts of experts of the gaming application during the selection of an expert that provides real-time assistance to advance the game play of the player, the selection made in response to a request by the player for assistance, in accordance with one embodiment of the present disclosure. The matching process shown inFIG.5Amay be implemented by the matching engine123. Further,FIG.5Amay be aligned withFIG.4to show that game contexts of the player P1and a pool of experts440are compared by matching engine123to provide an output435indicating a selected expert E5. In particular,FIG.5Ais relevant to a particular moment in time in the game play of player P1. For example, timeline520shows various points in the game play of player P1. These points may be assigned a time value in the timeline520, such as time t0. . . time t10. . . to time t25, which is the current time. In one embodiment, the timeline may be used to provide replays of the game play of player P1to the selected expert of the help session. For instance, a replay may rewind the game play for a pre-selected period of time (e.g., one time period, two time periods, etc.) as indicated by the timeline.
Snapshots may be associated with each point in time of the timeline520, wherein the snapshots are used to generate the replay. In another embodiment, the expert is able to select how much rewinding to perform. For example, the timeline520is sent to a device of the expert, wherein the timeline comprises a plurality of snapshots generated during the game play of player P1. A selection of a snapshot is received from the device of the first expert (e.g., associated with a point in time). The game play is rewound to the selected snapshot such that the game play begins from the selected snapshot on the device of the first expert. After the replay catches up to the current frame of the game play, the live game play may then be presented to the expert. In addition, screen shot510shows a current video frame generated during the game play of player P1at time t25. Screen shot510shows the live game play of player P1. Purely for illustration purposes only, screen shot510may include a battle between Kratos511and the enemy combatant512. In the God of War gaming application, Kratos is a Spartan warrior of Greek mythology, who is tasked with killing Ares, the God of War. In the game play, player P1may control Kratos511. As previously described, matching engine123takes as input the game context420-P1of player P1and game contexts of a pool of experts440. For example, each of the game contexts is configured similarly for player P1and the pool of experts440, and includes parameters545previously described, such as game state and user/player saved data. For example, parameters545may include game state data, such as: character, character race or type, current quest facing the character, next quest for the character, location of the game play in the gaming environment, level of the game play in the gaming application, assets of the character (e.g., shield type, sword type, bomb type, etc.), loadout, skill set of the character (jump skill, stamina, etc.), etc. Parameters545may include user save data (e.g., user profile data), such as: overall gaming skill of the player or corresponding expert, recency of playing the gaming application, willingness to seek help, etc. The matching engine123is configured to generate matching vectors for each of the game contexts of the pool of experts440, as previously described. For example, criteria matching540is performed by the matching engine123to generate the matching vectors. Each of the matching vectors has a matching value (e.g., quality factor or Q-factor) that indicates the quality of matching the corresponding expert game context to the game context of player P1. For example,FIG.5Ashows the comparison of the game context420-E1of expert E1to the game context420-P1of player P1in column581. A check mark indicates a match for a corresponding parameter or criteria of the game context. The absence of a check mark indicates no match. In one implementation, the check mark is given a value of 1, but can be given any value. As shown, the game context420-E1matches at least the parameters for character race, shield, stamina, and bombs. This indicates that there is a strong likelihood that the game play of expert E1has these defining parameters. This comparison process can be repeated by the matching engine123for each game context associated with the pool of experts440. For instance,FIG.5Ashows the comparison of the game context420-E5of expert E5to the game context420-P1of player P1in column585.
As shown, the game context420-E5closely matches the game context420-P1, as all of the parameters have corresponding check marks. In addition, the matching engine123may apply a weighting application550to the matching vectors. For example, column555shows weighting factors for each of the parameters545in the game contexts of the gaming application used by the matching engine123. The weighting defines an importance of a corresponding parameter. In one implementation, the larger the weighting factor, the higher the importance. Of course, the reverse can be implemented throughout the selection process. As shown, character race has a weight of 0.6, shield a weight of 0.8, sword a weight of 0.4, jump skill a weight of 1.2, stamina a weight of 1.4 . . . bombs a weight of 0.2. That is, stamina and jump skill of the character are highly valued in the comparison. These factors may be important in accomplishing a particular task or quest. Also, additional factors560may be considered by the matching engine123. These factors may also be given a weight when comparing the game contexts. For instance, additional factors may include the rating of the expert, the ranking of the expert, whether the expert has reached a gold star status indicating the highest possible ranking, availability, etc. The matching engine123performs an expert selection process570. For example, the matching vectors are given a matching value after performing criteria matching540, weighting550, and the consideration of additional factors560. For example, for expert E1a matching value591(3.0) is generated. Also, for expert E5a matching value595(4.6) is generated. Between the two experts, expert E5has a higher matching value, which may indicate a better quality match, such that expert E5may be better suited than expert E1to provide assistance for the query of player P1. Expert selection as performed by the expert selection process570may utilize any number of selection processes or criteria, as previously described. For illustration, if a highest quality match is used, then the highest value of the matching vector may indicate the highest quality match. In that case, the matching engine123would provide as output435the selection of expert E5, which is aligned withFIG.4.
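The criteria matching and weighting just described can be made concrete with the following sketch: each matched parameter contributes its weight, weighted additional factors are added, and the expert with the highest total matching value is selected. The weights and the resulting values of 3.0 and 4.6 mirror the example given in relation to FIG.5A, but the scoring code itself is an illustrative assumption rather than the actual implementation of the matching engine.

# Illustrative sketch only: the exact scoring formula is an assumption; weights follow the FIG. 5A example.
from typing import Dict

WEIGHTS = {"character_race": 0.6, "shield": 0.8, "sword": 0.4,
           "jump_skill": 1.2, "stamina": 1.4, "bombs": 0.2}

def matching_value(player_ctx: Dict[str, str], expert_ctx: Dict[str, str],
                   additional_factors: float = 0.0) -> float:
    """Sum the weights of the parameters that match (criteria matching 540 plus weighting 550),
    then add weighted additional factors such as expert rating or availability (560)."""
    score = sum(w for param, w in WEIGHTS.items()
                if player_ctx.get(param) is not None
                and player_ctx.get(param) == expert_ctx.get(param))
    return score + additional_factors

def select_expert(player_ctx: Dict[str, str], expert_ctxs: Dict[str, Dict[str, str]],
                  extras: Dict[str, float]) -> str:
    scores = {e: matching_value(player_ctx, ctx, extras.get(e, 0.0)) for e, ctx in expert_ctxs.items()}
    return max(scores, key=scores.get)   # highest Q-factor wins (expert selection 570)

# Example shaped like FIG. 5A: E1 scores 3.0, E5 matches every weighted parameter and scores 4.6.
p1 = {"character_race": "spartan", "shield": "bronze", "sword": "blades",
      "jump_skill": "high", "stamina": "high", "bombs": "fire"}
e1 = {"character_race": "spartan", "shield": "bronze", "stamina": "high", "bombs": "fire"}
e5 = dict(p1)
print(select_expert(p1, {"E1": e1, "E5": e5}, extras={"E1": 0.0, "E5": 0.0}))  # -> "E5"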
FIG.5Bis an illustration of the types of information collected for a game context420of a corresponding player or expert playing a gaming application, in accordance with one embodiment of the present disclosure. The game context420may be associated with a player or expert. For instance, following the example ofFIG.5A, respective game contexts for player P1and experts in the pool of experts440may be similarly configured. Purely for illustration, overall game context420may be captured at various points in the game play of a corresponding player or expert. That is, a plurality of game contexts may be captured, one for each defined time stamp. As shown, game context may be arranged as a block of data. Vertical slices are defined, and include vertical slice595for game state501, vertical slice596for user data502, vertical slice597for expert data503, and vertical slice598for a time stamp504. In addition, each horizontal slice defines a particular game context for a corresponding time stamp. For example, at time t0, the horizontal slice505defines the game context420-t0, at time t1, the horizontal slice506defines the game context420-t1. . . at time tn, the horizontal slice507defines the game context420-tn. That is, for each horizontal slice (corresponding to a particular game context) information is provided for each parameter in respective slices. For example, for time t0and the corresponding game context420-t0, for vertical slice595corresponding to game state501, the corresponding intersection of the horizontal slice505includes information for one or more parameters, such as: character, character race, quest, level, location, loadout, character skill set, etc. In addition, for time t0and the corresponding game context420-t0, for vertical slice596corresponding to user data502, the corresponding intersection of the horizontal slice505includes information for one or more parameters, such as: user skill set, recency of play, helpability factor (willingness to accept help), user rating or ranking, etc. Also, for time t0and the corresponding game context420-t0, for vertical slice597corresponding to expert data503, the corresponding intersection of the horizontal slice505includes information for one or more parameters, such as: expert skill set, recency of play, spoiler factor indicating how loose the expert is with spoilers, expert rating, expert ranking, availability, help history, etc. The information for categories in game context420including game state501, user data502, and expert data503is provided merely for illustration purposes, and may be moved between each of the defined categories for game context, or shared between categories, or include different information in each category. FIG.6Ais an illustration of a user interface110-P1for user P1, wherein the user interface provides real-time assistance during the game play of player P1, in accordance with one embodiment of the present disclosure. Player P1may be playing a gaming application. The assistance is provided through a help session, wherein the help session is implemented by connecting the player P1to an expert over a communication session. InFIG.6A, the help session may be delivered along with the data representative of game play of the player P1, such as through a split screen including a first screen showing the game play and a second screen showing the help session. As shown, user interface110-P1shows a screen shot510′ of the current game play in the first screen or window. For example, the screen shot may show an interaction between Kratos511and an enemy combatant512in the gaming application—God of War, as previously introduced. In addition, the user interface110-P1shows the second screen or window610showing the help session. Window610displays a two-way textual conversation between the player P1and the expert. For illustration, player P1may be named River Hsu and the expert may be named Aspen. The help session ofFIG.6Ais generated in real-time, and as such delivered concurrent with the game play of the player P1, such that the information provided through the help session supports the game play of the player P1. In particular, window610shows the textual entries made during the running conversation between player P1(River) and the expert—Aspen. In the conversation of the help session, the expert is quickly determining the problem that player P1is faced with, and is providing assistance in solving that problem. For example, the problem may be that player P1is unable to beat the Boss at the end of Level3. The expert (Aspen) is providing instructions to player P1(River) during the game play that is controlled by player P1.
These instructions were helpful, as player P1beats the Boss and thanks the expert at the end of the dialogue. FIG.6Bis an illustration of a user interface providing real-time assistance during game play of a player (e.g., player P1) playing a gaming application by connecting the player to an expert over a communication session, in accordance with one embodiment of the present disclosure. The assistance is provided through a help session, wherein the help session is implemented by connecting the player P1to an expert E5(Aspen) over a communication session.FIG.6Bis aligned withFIG.6A, wherein expert E5and player P1are participating in the help session. InFIG.6B, the help session may be delivered along with the data representative of game play of the player P1, such as through a split screen including a first screen showing the game play and a second screen showing the help session. As shown, user interface110′-P1shows a screen shot510″ of the current game play in the first screen or window. For example, the screen shot may show an interaction between Kratos511and an enemy combatant512in the gaming application—God of War, as previously introduced. In addition, the user interface110′-P1shows the second screen or window630showing the help session. In the help session, the expert and the player P1may have agreed to a share play functionality, wherein the control of the game play of player P1may be taken over by the expert, as previously described. In one implementation, user interface110′-P1may include a window650displaying real-time video of the expert in synchronization with a voice communication session in the help session. The expert may be providing instructions or assistance through the embedded video that is synchronized with audio between the expert and the player P1. Further, in the share play functionality, the expert has taken over control of the game play of the player. For example, the expert may have taken control so that the character Kratos511will beat the enemy combatant512in a battle, which previously the player P1could not accomplish. The game play as controlled by the expert is shown in screen or window510″. Additionally, window630may provide information related to the assistance provided by the expert. For example, the sequence of control inputs (e.g., input commands) made by the expert when battling the enemy combatant512may be provided. Purely for illustration, the expert may have told the player P1that the key to beating the enemy combatant512(as the Boss) is performing the "hammer blow sequence." The player P1may not know that sequence, or may not be proficient in performing that sequence, and has authorized the expert to take over control of the game play in order to beat the enemy combatant512. As the expert is submitting input commands for controlling the game play, the associated controller inputs or actions are displayed in window630. For example, a sequence of controller inputs660may include right button, left button, A button, A button, O button, X button, etc.
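As a small, hypothetical sketch of the input-sequence display just described, the expert's commands could be logged during share play and rendered as a simple sequence for the help window; the class name and formatting below are assumptions for illustration only.

# Illustrative sketch only: the logging and formatting shown here are assumptions.
from typing import List

class InputSequenceLog:
    """Collects the expert's input commands so they can be shown in the help window."""
    def __init__(self) -> None:
        self.commands: List[str] = []

    def record(self, command: str) -> None:
        self.commands.append(command)

    def render(self) -> str:
        # Rendered as a simple arrow-separated sequence for the help window overlay.
        return " -> ".join(self.commands)

# Example mirroring the sequence described above.
log = InputSequenceLog()
for cmd in ["right button", "left button", "A button", "A button", "O button", "X button"]:
    log.record(cmd)
print(log.render())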
FIG.8illustrates components of an example device800that can be used to perform aspects of the various embodiments of the present disclosure. For example,FIG.8illustrates an exemplary hardware system suitable for implementing a device that provides services in support of a user, such as providing real-time assistance to a player playing a gaming application by connecting that player to an expert in a help session, in accordance with one embodiment. This block diagram illustrates a device800that can incorporate or can be a personal computer, video game console, personal digital assistant, or other digital device, suitable for practicing an embodiment of the disclosure. Device800includes a central processing unit (CPU)802for running software applications and optionally an operating system. CPU802may be comprised of one or more homogeneous or heterogeneous processing cores. For example, CPU802is one or more general-purpose microprocessors having one or more processing cores. Further embodiments can be implemented using one or more CPUs with microprocessor architectures specifically adapted for highly parallel and computationally intensive applications, such as media and interactive entertainment applications, or applications configured for providing real-time assistance either through a live help session or through recorded help sessions as implemented through at least the help session controller120, as previously described. Device800may be localized to a player requesting assistance (e.g., game console), or remote from the player (e.g., back-end server processor). Memory804stores applications and data for use by the CPU802. Storage806provides non-volatile storage and other computer readable media for applications and data and may include fixed disk drives, removable disk drives, flash memory devices, and CD-ROM, DVD-ROM, Blu-ray, HD-DVD, or other optical storage devices, as well as signal transmission and storage media. User input devices808communicate user inputs from one or more users to device800, examples of which may include keyboards, mice, joysticks, touch pads, touch screens, still or video recorders/cameras, tracking devices for recognizing gestures, and/or microphones. Network interface810allows device800to communicate with other computer systems via an electronic communications network, and may include wired or wireless communication over local area networks and wide area networks such as the internet. An audio processor812is adapted to generate analog or digital audio output from instructions and/or data provided by the CPU802, memory804, and/or storage806. The components of device800, including CPU802, memory804, data storage806, user input devices808, network interface810, and audio processor812, are connected via one or more data buses822. A graphics subsystem814is further connected with data bus822and the components of the device800. The graphics subsystem814includes a graphics processing unit (GPU)816and graphics memory818. Graphics memory818includes a display memory (e.g., a frame buffer) used for storing pixel data for each pixel of an output image. Graphics memory818can be integrated in the same device as GPU816, connected as a separate device with GPU816, and/or implemented within memory804. Pixel data can be provided to graphics memory818directly from the CPU802. Alternatively, CPU802provides the GPU816with data and/or instructions defining the desired output images, from which the GPU816generates the pixel data of one or more output images. The data and/or instructions defining the desired output images can be stored in memory804and/or graphics memory818. In an embodiment, the GPU816includes 3D rendering capabilities for generating pixel data for output images from instructions and data defining the geometry, lighting, shading, texturing, motion, and/or camera parameters for a scene. The GPU816can further include one or more programmable execution units capable of executing shader programs.
The graphics subsystem814periodically outputs pixel data for an image from graphics memory818to be displayed on display device810, or to be projected by projection system840. Display device810can be any device capable of displaying visual information in response to a signal from the device800, including CRT, LCD, plasma, and OLED displays. Device800can provide the display device810with an analog or digital signal, for example. While specific embodiments have been provided to demonstrate the providing of real-time assistance during game play of a player playing a gaming application through live help sessions (e.g., connecting player to an expert through a communication session), or through recorded help sessions (e.g., connecting player to a recorded help session transmitted over a communication session), these are described by way of example and not by way of limitation. Those skilled in the art having read the present disclosure will realize additional embodiments falling within the spirit and scope of the present disclosure. It should be noted, that access services, such as providing access to games of the current embodiments, delivered over a wide geographical area often use cloud computing. Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users do not need to be an expert in the technology infrastructure in the “cloud” that supports them. Cloud computing can be divided into different services, such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Cloud computing services often provide common applications, such as video games, online that are accessed from a web browser, while the software and data are stored on the servers in the cloud. The term cloud is used as a metaphor for the Internet, based on how the Internet is depicted in computer network diagrams and is an abstraction for the complex infrastructure it conceals. A Game Processing Server (GPS) (or simply a “game server”) is used by game clients to play single and multiplayer video games. Most video games played over the Internet operate via a connection to the game server. Typically, games use a dedicated server application that collects data from players and distributes it to other players. This is more efficient and effective than a peer-to-peer arrangement, but it requires a separate server to host the server application. In another embodiment, the GPS establishes communication between the players and their respective game-playing devices to exchange information without relying on the centralized GPS. Dedicated GPSs are servers which run independently of the client. Such servers are usually run on dedicated hardware located in data centers, providing more bandwidth and dedicated processing power. Dedicated servers are the preferred method of hosting game servers for most PC-based multiplayer games. Massively multiplayer online games run on dedicated servers usually hosted by a software company that owns the game title, allowing them to control and update content. Users access the remote services with client devices, which include at least a CPU, a display and I/O. The client device can be a PC, a mobile phone, a netbook, a PDA, etc. In one embodiment, the network executing on the game server recognizes the type of device used by the client and adjusts the communication method employed. 
In other cases, client devices use a standard communications method, such as html, to access the application on the game server over the internet. Embodiments of the present disclosure may be practiced with various computer system configurations including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers and the like. The disclosure can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a wire-based or wireless network. It should be appreciated that a given video game or gaming application may be developed for a specific platform and a specific associated controller device. However, when such a game is made available via a game cloud system as presented herein, the user may be accessing the video game with a different controller device. For example, a game might have been developed for a game console and its associated controller, whereas the user might be accessing a cloud-based version of the game from a personal computer utilizing a keyboard and mouse. In such a scenario, the input parameter configuration can define a mapping from inputs which can be generated by the user's available controller device (in this case, a keyboard and mouse) to inputs which are acceptable for the execution of the video game. In another example, a user may access the cloud gaming system via a tablet computing device, a touchscreen smartphone, or other touchscreen driven device. In this case, the client device and the controller device are integrated together in the same device, with inputs being provided by way of detected touchscreen inputs/gestures. For such a device, the input parameter configuration may define particular touchscreen inputs corresponding to game inputs for the video game. For example, buttons, a directional pad, or other types of input elements might be displayed or overlaid during running of the video game to indicate locations on the touchscreen that the user can touch to generate a game input. Gestures such as swipes in particular directions or specific touch motions may also be detected as game inputs. In one embodiment, a tutorial can be provided to the user indicating how to provide input via the touchscreen for gameplay, e.g. prior to beginning gameplay of the video game, so as to acclimate the user to the operation of the controls on the touchscreen. In some embodiments, the client device serves as the connection point for a controller device. That is, the controller device communicates via a wireless or wired connection with the client device to transmit inputs from the controller device to the client device. The client device may in turn process these inputs and then transmit input data to the cloud game server via a network (e.g. accessed via a local networking device such as a router). However, in other embodiments, the controller can itself be a networked device, with the ability to communicate inputs directly via the network to the cloud game server, without being required to communicate such inputs through the client device first. For example, the controller might connect to a local networking device (such as the aforementioned router) to send to and receive data from the cloud game server. 
Thus, while the client device may still be required to receive video output from the cloud-based video game and render it on a local display, input latency can be reduced by allowing the controller to send inputs directly over the network to the cloud game server, bypassing the client device. In one embodiment, a networked controller and client device can be configured to send certain types of inputs directly from the controller to the cloud game server, and other types of inputs via the client device. For example, inputs whose detection does not depend on any additional hardware or processing apart from the controller itself can be sent directly from the controller to the cloud game server via the network, bypassing the client device. Such inputs may include button inputs, joystick inputs, embedded motion detection inputs (e.g. accelerometer, magnetometer, gyroscope), etc. However, inputs that utilize additional hardware or require processing by the client device can be sent by the client device to the cloud game server. These might include captured video or audio from the game environment that may be processed by the client device before sending to the cloud game server. Additionally, inputs from motion detection hardware of the controller might be processed by the client device in conjunction with captured video to detect the position and motion of the controller, which would subsequently be communicated by the client device to the cloud game server. It should be appreciated that the controller device in accordance with various embodiments may also receive data (e.g. feedback data) from the client device or directly from the cloud gaming server. It should be understood that the embodiments described herein may be executed on any type of client device. In some embodiments, the client device is a head mounted display (HMD), or projection system. InFIG.9, a diagram illustrating components of a head-mounted display102is shown, in accordance with an embodiment of the disclosure. The HMD102may be configured to receive real-time assistance provided during game play of a player playing a gaming application either through a live help session or through a recorded help session. The head-mounted display102includes a processor900for executing program instructions. A memory902is provided for storage purposes, and may include both volatile and non-volatile memory. A display904is included which provides a visual interface that a user may view. A battery906is provided as a power source for the head-mounted display102. A motion detection module908may include any of various kinds of motion sensitive hardware, such as a magnetometer910A, an accelerometer912, and a gyroscope914. An accelerometer is a device for measuring acceleration and gravity induced reaction forces. Single and multiple axis models are available to detect magnitude and direction of the acceleration in different directions. The accelerometer is used to sense inclination, vibration, and shock. In one embodiment, three accelerometers912are used to provide the direction of gravity, which gives an absolute reference for two angles (world-space pitch and world-space roll). A magnetometer measures the strength and direction of the magnetic field in the vicinity of the head-mounted display. In one embodiment, three magnetometers910A are used within the head-mounted display, ensuring an absolute reference for the world-space yaw angle. In one embodiment, the magnetometer is designed to span the earth's magnetic field, which is ±80 microtesla. Magnetometers are affected by metal, and provide a yaw measurement that is monotonic with actual yaw. The magnetic field may be warped due to metal in the environment, which causes a warp in the yaw measurement. If necessary, this warp can be calibrated using information from other sensors such as the gyroscope or the camera. In one embodiment, accelerometer912is used together with magnetometer910A to obtain the inclination and azimuth of the head-mounted display102.
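As a minimal sketch of how an accelerometer and a magnetometer can be combined in the way described above, the gravity vector yields pitch and roll, and a tilt-compensated magnetometer reading yields yaw. The code below uses one common axis and sign convention and is an illustrative assumption, not the HMD's actual firmware.

# Illustrative sketch only: one common axis/sign convention; the HMD firmware may differ.
import math

def pitch_roll_from_accel(ax: float, ay: float, az: float) -> tuple:
    """Gravity direction gives an absolute reference for pitch and roll."""
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll

def yaw_from_mag(mx: float, my: float, mz: float, pitch: float, roll: float) -> float:
    """Tilt-compensated heading: the magnetometer gives the absolute yaw reference."""
    mx2 = mx * math.cos(pitch) + mz * math.sin(pitch)
    my2 = (mx * math.sin(roll) * math.sin(pitch)
           + my * math.cos(roll)
           - mz * math.sin(roll) * math.cos(pitch))
    return math.atan2(-my2, mx2)

# Example: device lying flat (gravity on z), magnetic field along x -> pitch = roll = yaw = 0.
p, r = pitch_roll_from_accel(0.0, 0.0, 9.8)
print(p, r, yaw_from_mag(30.0, 0.0, 0.0, p, r))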
A gyroscope is a device for measuring or maintaining orientation, based on the principles of angular momentum. In one embodiment, three gyroscopes914provide information about movement across the respective axis (x, y and z) based on inertial sensing. The gyroscopes help in detecting fast rotations. However, the gyroscopes can drift over time without the existence of an absolute reference. This requires resetting the gyroscopes periodically, which can be done using other available information, such as positional/orientation determination based on visual tracking of an object, accelerometer, magnetometer, etc. A camera916is provided for capturing images and image streams of a real environment. More than one camera may be included in the head-mounted display102, including a camera that is rear-facing (directed away from a user when the user is viewing the display of the head-mounted display102), and a camera that is front-facing (directed towards the user when the user is viewing the display of the head-mounted display102). Additionally, a depth camera918may be included in the head-mounted display102for sensing depth information of objects in a real environment. In one embodiment, a camera integrated on a front face of the HMD may be used to provide warnings regarding safety. For example, if the user is approaching a wall or object, the user may be warned. In one embodiment, the user may be provided with an outline view of physical objects in the room, to warn the user of their presence. The outline may, for example, be an overlay in the virtual environment. In some embodiments, the HMD user may be provided with a view of a reference marker that is overlaid on, for example, the floor. For instance, the marker may provide the user with a reference to the center of the room in which the user is playing the game. This may provide, for example, visual information to the user of where the user should move to avoid hitting a wall or other object in the room. Tactile warnings and/or audio warnings can also be provided to the user, to provide more safety when the user wears and plays games or navigates content with an HMD. The head-mounted display102includes speakers920for providing audio output. Also, a microphone922may be included for capturing audio from the real environment, including sounds from the ambient environment, speech made by the user, etc. The head-mounted display102includes tactile feedback module924for providing tactile feedback to the user. In one embodiment, the tactile feedback module924is capable of causing movement and/or vibration of the head-mounted display102so as to provide tactile feedback to the user. LEDs926are provided as visual indicators of statuses of the head-mounted display102. For example, an LED may indicate battery level, power on, etc. A card reader928is provided to enable the head-mounted display102to read and write information to and from a memory card.
A USB interface930is included as one example of an interface for enabling connection of peripheral devices, or connection to other devices, such as other portable devices, computers, etc. In various embodiments of the head-mounted display102, any of various kinds of interfaces may be included to enable greater connectivity of the head-mounted display102. A Wi-Fi module932is included for enabling connection to the Internet via wireless networking technologies. Also, the head-mounted display102includes a Bluetooth module934for enabling wireless connection to other devices. A communications link936may also be included for connection to other devices. In one embodiment, the communications link936utilizes infrared transmission for wireless communication. In other embodiments, the communications link936may utilize any of various wireless or wired transmission protocols for communication with other devices. Input buttons/sensors938are included to provide an input interface for the user. Any of various kinds of input interfaces may be included, such as buttons, touchpad, joystick, trackball, etc. An ultra-sonic communication module940may be included in head-mounted display102for facilitating communication with other devices via ultra-sonic technologies. Bio-sensors942are included to enable detection of physiological data from a user. In one embodiment, the bio-sensors942include one or more dry electrodes for detecting bio-electric signals of the user through the user's skin. Photo-sensors944are included to respond to signals from emitters (e.g., infrared base stations) placed in a 3-dimensional physical environment. The gaming console analyzes the information from the photo-sensors944and emitters to determine position and orientation information related to the head-mounted display102. In addition, gaze tracking system965is included and configured to enable tracking of the gaze of the user. For example, system965may include gaze tracking cameras which captures images of the user's eyes, which are then analyzed to determine the gaze direction of the user. In one embodiment, information about the gaze direction of the user can be utilized to affect the video rendering. Video rendering in the direction of gaze can be prioritized or emphasized, such as by providing greater detail, higher resolution through foveated rendering, higher resolution of a particle system effect displayed in the foveal region, lower resolution of a particle system effect displayed outside the foveal region, or faster updates in the region where the user is looking. The foregoing components of head-mounted display102have been described as merely exemplary components that may be included in head-mounted display102. In various embodiments of the disclosure, the head-mounted display102may or may not include some of the various aforementioned components. Embodiments of the head-mounted display102may additionally include other components not presently described, but known in the art, for purposes of facilitating aspects of the present disclosure as herein described. It will be appreciated by those skilled in the art that in various embodiments of the disclosure, the aforementioned head mounted device may be utilized in conjunction with an interactive application displayed on a display to provide various interactive functions. The exemplary embodiments described herein are provided by way of example only, and not by way of limitation. 
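As a hedged illustration of the gaze-dependent rendering priorities mentioned above (not the actual rendering pipeline of the head-mounted display102), a per-region detail level might be chosen from the screen-space distance to the gaze point; the radii and scale factors below are placeholder values.

```python
# Hypothetical per-region detail selection for foveated rendering; the foveal
# radius and scale factors are placeholder values, not taken from the disclosure.
import math

def detail_level(region_x: float, region_y: float,
                 gaze_x: float, gaze_y: float) -> float:
    """Return a resolution scale in (0, 1]: full detail near the gaze point
    (foveal region), reduced detail further away. Coordinates are normalized
    screen positions in [0, 1]."""
    dist = math.hypot(region_x - gaze_x, region_y - gaze_y)
    if dist < 0.10:   # assumed foveal radius
        return 1.0
    if dist < 0.30:   # near periphery
        return 0.5
    return 0.25       # far periphery
```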
It should be understood that the various embodiments defined herein may be combined or assembled into specific implementations using the various features disclosed herein. Thus, the examples provided are just some possible examples, without limitation to the various implementations that are possible by combining the various elements to define many more implementations. In some examples, some implementations may include fewer elements, without departing from the spirit of the disclosed or equivalent implementations. Embodiments of the present disclosure may be practiced with various computer system configurations including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers and the like. Embodiments of the present disclosure can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a wire-based or wireless network. With the above embodiments in mind, it should be understood that embodiments of the present disclosure can employ various computer-implemented operations involving data stored in computer systems. These operations are those requiring physical manipulation of physical quantities. Any of the operations described herein that form part of embodiments of the present disclosure are useful machine operations. Embodiments of the invention also relate to a device or an apparatus for performing these operations. The apparatus can be specially constructed for the required purpose, or the apparatus can be a general-purpose computer selectively activated or configured by a computer program stored in the computer. In particular, various general-purpose machines can be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations. The disclosure can also be embodied as computer readable code on a computer readable medium. The computer readable medium is any data storage device that can store data, which can be thereafter be read by a computer system. Examples of the computer readable medium include hard drives, network attached storage (NAS), read-only memory, random-access memory, CD-ROMs, CD-Rs, CD-RWs, magnetic tapes and other optical and non-optical data storage devices. The computer readable medium can include computer readable tangible medium distributed over a network-coupled computer system so that the computer readable code is stored and executed in a distributed fashion. Although the method operations were described in a specific order, it should be understood that other housekeeping operations may be performed in between operations, or operations may be adjusted so that they occur at slightly different times, or may be distributed in a system which allows the occurrence of the processing operations at various intervals associated with the processing, as long as the processing of the overlay operations are performed in the desired way. Although the foregoing disclosure has been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications can be practiced within the scope of the appended claims. 
Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and embodiments of the present disclosure are not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.
138,795
11857884
DETAILED DESCRIPTION The subject disclosure is directed to a mechanical-mathematical diagonal number board game. The game is conducted by placing game pieces according to the configuration of the game board, wherein each player uses mathematical skills to deduce the amount of points available with each move. Each player is also encouraged to keep track of other players' performances in order to maximize their own score accumulation. The game is intended to have a number of pre-configured board pieces that vary in difficulty, and players can set up multiple player sessions of varying difficulty with one game system. The game is played with a set of rules that uses subtraction between adjacent diagonal connecting single digit number pieces combined with the mathematical addition of their sum differences to score game points and win. The detailed description provided below in connection with the appended drawings is intended as a description of examples and is not intended to represent the only forms in which the present examples can be constructed or utilized. The description sets forth functions of the examples and sequences of steps for constructing and operating the examples. However, the same or equivalent functions and sequences can be accomplished by different examples. References to "one embodiment," "an embodiment," "an example embodiment," "one implementation," "an implementation," "one example," "an example" and the like, indicate that the described embodiment, implementation or example can include a particular feature, structure or characteristic, but every embodiment, implementation or example may not necessarily include the particular feature, structure or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment, implementation or example. Further, when a particular feature, structure or characteristic is described in connection with an embodiment, implementation or example, it is to be appreciated that such feature, structure or characteristic can be implemented in connection with other embodiments, implementations or examples whether or not explicitly described. Numerous specific details are set forth in order to provide a thorough understanding of one or more embodiments of the described subject matter. It is to be appreciated, however, that such embodiments can be practiced without these specific details. Various features of the subject disclosure are now described in more detail with reference to the drawings, wherein like numerals generally refer to like or corresponding elements throughout. The drawings and detailed description are not intended to limit the claimed subject matter to the particular form described. Rather, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the claimed subject matter. The subject disclosure is directed to board games and, more specifically, to competitive mathematical board games. The board game is a mechanical-mathematical diagonal number board game with multiple-colored boards that is played with a set of rules that uses single digit subtraction between adjacent diagonal connecting game board number pieces. Then, the sum differences are added together to score game points to win. The first player who reaches the DIAGNUM score of 99 points wins the game. 
A set of ten two-sided interchangeable multicolor diagonal number game boards is incorporated with this invention; the boards are inserted into a game board cartridge riding atop a game board cartridge carrier. Optionally, a mechanical swivel device is used to aid the players with visual clarity of the printed game board numbers when swiveled around. The mechanical swivel device rotates 360 degrees giving a front-face-view of the printed numbers on the game board. Stand-alone game board pyramid number pieces (PNPs) are illustrated as being the ideal game board number piece (GNP) for the game. A shaker bag is used for withdrawing the PNPs out for game play and storing them away at the end of the game. A tabletop quad-person scoreboard, scoreboard twin scoring pins, and a 30-second sand timer are also disclosed. In one exemplary embodiment of the mechanical-mathematical diagonal number board game, the system comprises a board game play base that is square shaped, comprising a number piece holding cell oriented alongside each side of the board game play base and a timer spot oriented at each corner of the board game play base, wherein each number piece holding cell is situated at a designated letter position. The system also includes a mechanical swivel device located at the center of the board game play base, a game board cartridge carrier comprising a game board cartridge base and a game board grid, and a game board cartridge marked with a 7 square by 7 square grid, configured to insert into the game board cartridge carrier. The game board is designed so that each of the 7 square by 7 square grids aligns with the game board grids and is marked with a number selected from a group of numbers from 0 to 9, wherein the number does not repeat in another grid on a same row, column, or diagonal on the 7 square by 7 square grid. The board game system comprises at least 40 pyramid number pieces, each pyramid number piece comprising a number selected from a group of numbers from 0 to 9, wherein each pyramid number piece is configured to align with each of the 7 square by 7 square grids. The mathematical board game can be conducted by placing each of the 40 pyramid number pieces on one of the 7 square by 7 square grids, and a player can accumulate a score point, wherein the score point comprises a sum of differences of numbers between at least two of the pyramid number pieces connected diagonally on the game board cartridge, wherein the board game is decided by the score point. Referring now to the drawings and, in particular, toFIGS.1a-3d, a mechanical-mathematical diagonal number board game assembly, generally designated with the numeral100, is shown.FIGS.1a-1cillustrate the board game assembly100fully assembled. A partially assembled board game, generally designated as the numeral200, is shown inFIG.2. As shown inFIG.1B, the board game assembly100can include a mechanical swivel device110(shown in phantom inFIG.1c) that is positioned between a game cartridge carrier114and an essentially flat play base112. The game cartridge grid301is placed within the game cartridge carrier114. The mechanical swivel device110is essentially placed at the center of the assembly100to provide players with the ability to rotate the game assembly100up to 360 degrees. The game cartridge grid301is shown in more detail inFIG.3a. The game board302is a multicolor game board, as shown inFIG.3b. The game board302can be referred to as game board #6 in this exemplary embodiment. 
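The layout constraint summarized above, that a digit never repeats along a row, column, or diagonal of the 7 square by 7 square grid, can be checked mechanically. The following is a minimal sketch of one plausible reading of that constraint (checking every diagonal line, with unusable spaces held as None); it is an editorial illustration, not part of the disclosed game system.

```python
# Editorial sketch: checking the "no repeated digit in any row, column, or
# diagonal" constraint on a 7 x 7 layout. Unusable (black) spaces are None.
from typing import Optional

Board = list[list[Optional[int]]]  # 7 rows x 7 columns of digits 0-9 or None

def no_repeats(line: list[Optional[int]]) -> bool:
    digits = [v for v in line if v is not None]
    return len(digits) == len(set(digits))

def board_is_valid(board: Board) -> bool:
    n = len(board)
    rows = [list(r) for r in board]
    cols = [[board[r][c] for r in range(n)] for c in range(n)]
    diagonals = []
    for offset in range(-(n - 1), n):
        diagonals.append([board[r][r - offset]            # "\" diagonals
                          for r in range(n) if 0 <= r - offset < n])
        diagonals.append([board[r][offset + (n - 1) - r]  # "/" diagonals
                          for r in range(n) if 0 <= offset + (n - 1) - r < n])
    lines = rows + cols + [d for d in diagonals if len(d) >= 2]
    return all(no_repeats(line) for line in lines)
```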
The configuration of the board game assembly100is further illustrated in depth inFIG.1c. The game cartridge grid301sits within the game cartridge carrier114, which is attached to the board game play base112through a swivel device110. The game board302is inserted into the game cartridge grid301. The game board302is placed within the trap area116of the game cartridge grid301. In the exemplary embodiment, the game cartridge grid301has a dimension of 7″×7″×1″, and the game cartridge carrier114has a height of ½″ with a perimeter enveloping the game cartridge grid301. The game cartridge grid301comprises a forty-nine space hollow grid located on top of a base, forming a hollow trap area116therebetween. The opening in front of the game cartridge grid301allows the game board302to be inserted therein. As such, when the game board302is withdrawn from the game cartridge grid301, each game tile may fall below the grid on the game cartridge grid301and into the trap area116. In the exemplary embodiment, the game cartridge grid301sits inside and swivels with the game cartridge carrier114. The game cartridge carrier114sits atop and is permanently attached to the mechanical swivel device110, which is in turn permanently attached to the top of the game board play base112. The swivel device is located approximately at the center of the game cartridge carrier114and the game board play base112. FIG.3cillustrates the multicolor game board302inserted into the game cartridge grid301.FIG.3dillustrates a black and white game board. As shown inFIG.1, the board game assembly100can include the game cartridge grid301with the game board302inserted therein for play. The board designation will become clear in the subsequent descriptions. As shown inFIG.2, the board game assembly100can include a board game play base112, which can be a flat particle-board approximately 12 inches square. The play base112functions as a supporting base for the game board assembly100. The lower surface200includes number piece holding-cells grid210, one for each player, that are placed around the section aligning with each edge of the board game play base112. The number piece holding-cell grids210each include seven 1-inch square spaces arranged on the game board in a rectangular row surrounded by a raised 1/16 square inch wooden or plastic support grid. The number piece holding-cells210are designed to temporarily retain a player's game board number pieces (GNPs) as the game is played. Each of the grids210includes a plurality of cell spaces that have one of the letters D, I, A, G, N, U, M printed thereon. The GNPs in holding are played "out-of-holding" onto the game board in a sequence of last-in-first-out. After a player withdraws a GNP from the Shaker bag, or when a player's GNP has been returned to them for a "Play-Move" violation, it is placed into the player's number piece holding-cell starting in the far-left D-cell position. Referring toFIG.2, each player's designated letter position (DLP)212has one of the letters "D", "I", "A", and "G" printed thereon, so that all four letters appear on the four sides of the board. Each of the DLPs212provides a scorekeeper with the ability to keep track of the player that makes a Game-Call. As the players play the board game, they assume the designated letter that corresponds to their sitting position on the board. The specific letter at their sitting position becomes the player's letter designated identifier to the scorekeeper. The lower surface200includes a plurality of time spot positions214. 
Each time spot position214is located to the right side of one of the number piece holding-cell grids210. The time spot position214is used to place a thirty-second sand timer during a player's turn. A receptacle216for receiving the game cartridge grid301is located between the holding-cell grids210. As shown inFIGS.3a-3c, the game board cartridge grid301is designed to support the inserted diagonal number game boards302for game play. The game boards302can be multicolor or black and white with each game board302being assigned a number. Each game board cartridge grid301is a 7-inch square by 1-inch square box frame supporting a 7-inch square, 3-dimensional wood or plastic grid. The grid is constructed using approximately 1/16 square-inch by 1/16 square-inch wood or plastic materials crisscrossed vertically and horizontally, forming forty-nine 1-inch squares with hollow spaces. The game board cartridge grid301creates a raised 3-dimensional border surrounding the printed gridlines on the diagonal number game board. The spaces within the raised grid are hollowed out to allow the GNPs to fall through into the trap area116once the diagonal number game boards302are removed. As shown inFIG.1c, the trap area116is an open space underneath the game board cartridge grid301with a depth deep enough (approximately 1-inch) to catch and collect all the GNPs that fall through. The printed gridlines on the diagonal number game board302, once inserted into the game board cartridge grid301, align perfectly with the game board cartridge grid301's 3-d raised grid. After the game board302is inserted into the game board cartridge grid301the top surface of the game board302blocks the hollow spaces of the game board cartridge grid301, creating the playing surface with a raised grid border around each space on the game board302. The raised grid of the game board cartridge grid301keeps the GNPs in place and aids the players in the GNP in/out manipulations. After the end of each game the game board cartridge grid301can be lifted off of the game cartridge carrier114. The game board302is removed and the GNPs fall down through the hollow spaces into the trap area116. The game board cartridge grid301is then tilted to either side and the GNPs are dumped into a shaker bag. FIG.3billustrates the game board302as an exemplary multicolor, diagonal number game board identified with the number 6.FIG.3dillustrates another exemplary game board303that is a black and white game board identified with the number 6. The game boards302-303can be displayed simultaneously and can be inserted into the game board cartridge grid301. There are forty-nine square spaces304on each game board with forty spaces that contain indicia305in the form of numbers thereon. Four of each number 0-9 are printed in different GSPs within the grid and arranged so that no two of the same number are printed in the same row across, down, or diagonally. Also, the mathematical differences between all adjacent numbers in all rows equal two or greater. There are forty spaces on each game board that are considered usable and nine spaces that are considered unusable. The 1st, 3rd, 5th and 7th rows and columns consist of all usable spaces. The 2nd, 4th, and 6th rows and columns are all alternating usable and unusable spaces, with the first space being usable. GNPs are not to be played in the unusable spaces. Each usable and unusable space can be identified on the game board by where its GSP # is located on the game board. 
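One way to express the usable/unusable pattern just described is by row and column parity: with rows and columns numbered 1-7, a space is unusable exactly when both its row and its column are even. The short sketch below (illustrative only, not part of the disclosure) reproduces the nine unusable spaces identified later in the description as GSPs #9, 11, 13, 23, 25, 27, 37, 39 and 41.

```python
# Illustrative only: with rows and columns numbered 1-7, a space is unusable
# exactly when both its row and column are even, which reproduces the nine
# black spaces listed later in the description.
def gsp_to_row_col(gsp: int) -> tuple[int, int]:
    """Convert a Grid Space Position number (1-49, row-major) to (row, col), 1-based."""
    return (gsp - 1) // 7 + 1, (gsp - 1) % 7 + 1

def is_usable(gsp: int) -> bool:
    row, col = gsp_to_row_col(gsp)
    return not (row % 2 == 0 and col % 2 == 0)

assert [g for g in range(1, 50) if not is_usable(g)] == [9, 11, 13, 23, 25, 27, 37, 39, 41]
```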
Referring now toFIGS.4-8, various components of the game that can be used with the game board assembly100shown inFIGS.1a-2are shown.FIG.4illustrates the pyramid number piece (PNP)400. PNPs400are stand-alone game number pieces approximately ¾-inch square ideally designed for the game. There are forty PNPs400used in the game. Four of each PNP400are provided, each printed on all four sides with one number 0-9. When set atop a matching number on the game board, a PNP allows a front-face-view of the numbers on the game board simultaneously to all players regardless of their sitting placement around the board game. Flat-laying GNPs can be used in the game, but the stand-alone PNPs are ideally preferred so as to offset the visual obscurities of the printed numbers, or flat-lying tiles, on the game board. The pyramid number pieces (PNP), game number pieces (GNP), playing tiles, and tiles are equivalent, functionally, in the exemplary embodiments. For illustrative purposes, the term PNP(s) will now be used instead of GNP in the following description. FIG.5illustrates the Shaker bag500. It is a cloth bag with a tie string and is used by the players to withdraw the PNPs out for game play and to store them away at the end of the game. The PNPs can be the PNP400as illustrated inFIG.4. The shaker bag500can receive a plurality of game tiles, such as game tile510. FIG.6illustrates the 30 second sand timer600, referred to also as the timer600. The timer600is an hourglass 30 second sand timer that allows all players around the game board to see the actual passing of their playing time more easily. It is placed at each player's "Time Spot Position" located to the right side of each player's DLP Number Piece Holding-Cell. It is used only after a player withdraws a PNP from the Shaker bag to play. When the timer600is turned onto its side the timer600is considered paused. The scorekeeper can be an independent person or a participating player. The scorekeeper must maintain visual sight of the timer, being the person with the final word on the expiration of a player's time to continue play. The player's time to continue playing is considered expired when the sand runs out of the timer. If the scorekeeper is an independent person, the scorekeeper calls out the next player's time to play in the clockwise direction by calling out, for example, "D-Time". The player then passes the timer to Player "D", placing the timer into Player "D" Time-Spot position. If the scorekeeper is one of the participating players, each player including the scorekeeper calls in their own playing time after the previous player's time has expired. After a player withdraws a PNP from the Shaker bag and places it in their Holding-Cell, the player begins their own start time to play by flipping the timer over. The player then has 30 seconds to make a "Game-Call". When the time expires for the current player the next player-in-turn calls their time to play. For example, if Player "I" is the current player and their time has expired, Player "A" would call out "A-Time". If a dispute occurs between players, the scorekeeper can pause the timer by placing the timer onto its side until the dispute has been settled. FIG.7illustrates the "Scoreboard Twin Scoring Pins", generally designated by the numeral700. The pins700are used to mark each player's game points on the Quad-Play DIAGNUM Scoreboard. The scorekeeper controls the players' Twin Scoring-Pins. 
Two of each Twin Scoring-Pin700are printed on all four sides with one of the letters D, I, A, G associated with each player's DLP. One Twin Scoring-Pin is used to mark the ones column 0-9 and the 2nd Twin Scoring-Pin is used to mark the tens column 10-90. The scorekeeper places each player's Twin Scoring-Pins700at the "0" Start line on the Quad-Play DIAGNUM Scoreboard at the beginning of the game. The scorekeeper advances each player's scores from 0-99 points by marking their scores with their associated DLP Twin Scoring Pins700. FIG.8illustrates the Quad-Play DIAGNUM Scoreboard800, a tabletop multiplayer gaming scoreboard designed to track the game scores for up to four players at once, eliminating the need for pencil and paper. The scoreboard800is printed on the top, center, and bottom rows with the letters D-I-A-G, associated with each sitting player's DLP around the game board. The left and right margins on the scoreboard800are the point columns numerated with two separate rows of numbers representing the ones columns 0-9 and the tens columns 10-90. The columns are separated in the middle row of the scoreboard by the "0" START row that represents the starting point where all players' Twin Scoring Pins are placed at the beginning of each game. The Quad-Play DIAGNUM Scoreboard is controlled by the scorekeeper. As each player scores points the scorekeeper advances each player's score on the Quad-Play DIAGNUM Scoreboard under their associated DLPs. 
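For illustration, the division of a 0-99 score between the two Twin Scoring-Pins is simple positional arithmetic; the helper below is hypothetical and not part of the disclosed scoreboard.

```python
# Hypothetical helper (not part of the disclosed scoreboard): splitting a 0-99
# DIAGNUM score between the tens pin (0, 10, ..., 90) and the ones pin (0-9).
def pin_positions(score: int) -> tuple[int, int]:
    if not 0 <= score <= 99:
        raise ValueError("DIAGNUM scores run from 0 to 99")
    return (score // 10) * 10, score % 10   # (tens pin, ones pin)

assert pin_positions(47) == (40, 7)
assert pin_positions(0) == (0, 0)           # both pins at the "0" START line
```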
FIG.9millustrates a black and white game board designated as the #6 black and white game board.FIG.9nillustrates a multicolor game board designated as the #6 multicolor game board.FIG.9oillustrates a black and white game board designated as the #7 black and white game board.FIG.9pillustrates a multicolor game board designated as the #7 multicolor game board. FIG.9qillustrates a black and white game board designated as the #8 black and white game board.FIG.9rillustrates a multicolor game board designated as the #8 multicolor game board.FIG.9sillustrates a black and white game board designated as the #9 black and white game board.FIG.9tillustrates a multicolor game board designated as the #9 multicolor game board. Referring now toFIGS.10-11f, an embodiment of a game board, generally designated by the numeral1000, is shown. In this exemplary embodiment, each player's first withdrawn PNP is shown placed into their respective Number Piece Holding-Cell after the draw for first play onto the game board1000. A pyramid number piece (PNP)1001is placed on the first location of each player's holding cell, in this exemplary embodiment designated with the letter "D". The players will attempt to place PNP1001onto the respective cells on the game board1000. A sequence of play and game call is further explained below. As shown inFIG.11a, numbers referencing the "Grid Space Positions" (GSPs) are printed in bold italic. Numbers referencing "Pyramid Number Pieces" (PNPs) are printed in straight print.FIG.11aillustrates the numeration of the Diagonal Number Game board forty-nine GSPs within the game board grid. The first space within the grid is located at the far left of the top row and is referred to as Grid Space Position #1 (GSP #1). The last space at the far right of the top row is considered GSP #7. The second row contains GSP #'s 8-14. The third row contains #'s 15-21, the 4th row #'s 22-28, the 5th row #'s 29-35, the 6th row #'s 36-42, and the 7th row #'s 43-49. The GSP in which a number is printed determines if the PNP played atop that GSP would be worth any game points to the players. In various embodiments, the diagonal number game boards are designed with varying color patterns to further provide variety in gameplay and difficulty. The Multicolor side of the Diagonal Number Game boards offers the players a visual advantage in deciding what GSPs provide a greater chance for scoring game points. The Black and White side of the Diagonal Number Game board erases this visual advantage. There are four colors used in each game board. The first color is black in an exemplary embodiment. Black spaces identify the unusable spaces on each game board. Unusable spaces are located in the same GSPs on both sides of all ten game boards. The nine unusable GSPs are #9, 11, 13, 23, 25, 27, 37, 39 and 41. The 2nd color is white in an exemplary embodiment. White spaces are considered usable. There are forty white spaces on the black and white side of each game board and sixteen white spaces on the multicolor side. The white GSPs #1, 3, 5, 7, 15, 17, 19, 21, 29, 31, 33, 35, 43, 45, 47 and 49 on both sides of the game board are considered "Dead Number Spaces". PNPs played into these GSPs only rid the number pieces from play and do not offer the player any scoring points. Players must become keen as to which WHITE GSPs offer scoring points and which ones are Dead Number Spaces. The 3rd color is tan in an exemplary embodiment. 
Tan color spaces make possible a double-diagonal connection between adjacent GNPs in the diagonal rows. There are twelve tan GSP #'s 2, 4, 6, 8, 14, 22, 28, 36, 42, 44, 46, 48. The same GSP #s on the black and white side of the game board offer the same diagonal connection points. The 4th color is blue in an exemplary embodiment. Blue color spaces make possible a diagonal connection between up to four adjacent diagonal spaces using two diagonal rows at the same time. There are twelve blue GSP #'s 10, 12, 16, 18, 20, 24, 26, 30, 32, 34, 38, 40. The same GSP #s on the black and white side of the game board offer the same diagonal connection points. When viewing the printed numbers on the game board from a front-face-view, the numbers face in one direction. As the direction of the game board changes during the swiveling of the Mechanical Swivel Device110, shown inFIGS.1a-1b, the numbers facing the player can appear visually obscured. For example: When facing the number "9" in a front-face-view on the game board, it would appear upside-down as the number "6" to the player on the opposite side of the game board, or the #2 can appear as the #5. Players with side-face-views can also find some difficulty in visualizing the correct printed numbers. Normally, printed alphanumeric game board pieces are created as flat-laying game tiles that can, at times, present the same visual obscurities on the game board as described above. Ideally, numbering on the game board should allow for a front-face-view of the numbers simultaneously to all players regardless of their sitting orientation around the game board. Using a matching PNP placed atop the printed number on the game board would offset this visual obscurity for the players. The disclosed game uses a set of rules: (1) Using single digit subtraction between adjacent diagonal connecting number pieces in all diagonal rows, and (2) Adding their combined differences together to score game points to win. The first player who reaches the DIAGNUM score of 99 game points wins the game. Players whose game scores go backwards to less than "0" points are eliminated. There are five Game-Calls possible to be made by each player during their Turn-In-Play: (1) PASS up their turn; (2) Make a PLAY-Move to rid the PNP; (3) Make a CATCH-Call for a Play-Move or Scoring-Call Violation; (4) Make a DIAG or DIAGNUM Scoring-Call for game points; or (5) RECOVER a PNP Play-Move Violation. Players must be exact when calling out their perceived scoring points. Players must first calculate their score then use their DLP in front of any Game-Calls made. Incorrect Game-Calls made by the players are subjected to a "Catch-Call" violation and can result in the loss of game points for the "Caught-Player", or in the reward of game points to the "Catch-Player". The Game-Call "PASS" is used by the players when they want to pass up their turn for a possible opportunity in scoring greater game points at their next Turn-In-Play. After a player withdraws a PNP from the Shaker bag and places it into their Number Piece Holding-Cell, the scorekeeper starts the timer and the player's time to make a Game-Call starts. The player can use their 30 second time period to decide what Game-Call option they want to use. Players must call out their DLP first before making any Game-Calls. If a player decides to pass up their turn, they make the Game-Call, for example, "D-PASS turn", or just "D-PASS". The scorekeeper will know Player "D" made the Game-Call "PASS" and who is next in turn. 
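Collecting the four color classes listed above into a single lookup gives a compact reference; this is an editorial summary of the lists already given, not an addition to the rules.

```python
# Editorial summary of the color lists above (multicolor side); the sets are
# copied from the description, the lookup function is merely for convenience.
BLACK_UNUSABLE = {9, 11, 13, 23, 25, 27, 37, 39, 41}
WHITE_DEAD = {1, 3, 5, 7, 15, 17, 19, 21, 29, 31, 33, 35, 43, 45, 47, 49}
TAN_DOUBLE_DIAGONAL = {2, 4, 6, 8, 14, 22, 28, 36, 42, 44, 46, 48}
BLUE_QUAD_DIAGONAL = {10, 12, 16, 18, 20, 24, 26, 30, 32, 34, 38, 40}

def gsp_color(gsp: int) -> str:
    for name, spaces in (("black", BLACK_UNUSABLE), ("white", WHITE_DEAD),
                         ("tan", TAN_DOUBLE_DIAGONAL), ("blue", BLUE_QUAD_DIAGONAL)):
        if gsp in spaces:
            return name
    raise ValueError("GSP numbers run from 1 to 49")

# Every one of the forty-nine grid space positions is covered exactly once.
assert sorted(BLACK_UNUSABLE | WHITE_DEAD | TAN_DOUBLE_DIAGONAL | BLUE_QUAD_DIAGONAL) == list(range(1, 50))
```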
When a player calls "PASS", their playing time expires immediately regardless of how much time is left on the timer. The turn then moves to the next player in the clockwise direction. The Game-Call "PLAY" is called when a player wants to get rid of a PNP from their Number Piece Holding-Cell or to play a withdrawn PNP from the Shaker bag onto the game board. If, for example, Player "I" is making a Play-Move with the #5 PNP Out-of-Holding, the timer is not used. Player "I" can take a reasonable amount of time (10 seconds) to make a Game-Call. Player "I" can place the #5 PNP atop any one of the four printed #5's on the game board. After a decision is made and Player "I" places their #5 PNP atop their selection, Player "I" makes the Game-Call "I-PLAY-5". When a player makes a Play-Move Out-of-Holding, their playing-time expires immediately. The scorekeeper will know Player "I" called a "Play-Move" and who is next in turn. The turn then moves to the next player in the clockwise direction. When a player withdraws a PNP from the Shaker-Bag and places it into their Number Piece Holding-Cell, the timer is used and the scorekeeper starts the player's time to make a Game-Call. The player can use their 30 second time period to decide what Game-Call option they want to use. Whether a player makes a PNP Play-Move Out-of-Holding or from a PNP withdrawn from the Shaker bag, no scoring points can be made. The loss of game points is possible if the Play-Move is called incorrectly. The Game-Call "CATCH" is called by the current player (the "Catch-Player") when they want to call out the previous player's (the "Caught-Player's") Play-Move or Scoring-Call Violations. Catch-Calls are made by the Catch-Player immediately after the Caught-Player makes a Play-Move or Scoring-Call Out-of-Holding, or after a Play-Move or Scoring-Call is made from a withdrawn PNP played out of the Shaker bag once the current player's playing time has expired. Catch-Calls cannot be made by the Catch-Player on a Caught-Player if they make a Game-Call first, make a Play-Move Out of Holding, or withdraw a PNP from the Shaker bag. If a Catch-Call is made, the scorekeeper will determine the validity of the catch and reward each player with their respective scoring points. If a Catch-Call is made at the same time by multiple players, the player closest to the Caught-Player in the clockwise direction has priority in making the Catch-Call and receiving additional scoring points. The Catch-Player does not lose their turn if next in play. There are four of each number 0-9 printed and arranged in four different spaces on each game board. No two game boards are alike. The printed numbers could exist in dead number spaces or in multiple multicolor diagonal scoring spaces. Placing PNPs atop the unusable (black) spaces or atop numbers other than themselves is subject to "Play-Move Violations". For example, if Player "A" places the #9 PNP atop the #6 PNP and calls "A-PLAY-9", Player "G" could make a Catch-Call on Player "A" for making a Play-Move Violation. Player "G" would call "G-CATCH-A-PLAY-9". Player "G" is the "Catch-Player" and Player "A" is the "Caught-Player". In this instance the #9 PNP is returned to the Caught-Player "A" and placed in their Holding-Cell and 9 game points are subtracted from their game score. The Catch-Player "G" would be rewarded the same 9 game points. 
If the #0 PNP is played atop any other number than itself or on an unusable black space and caught by another player, the Caught-Player is subtracted 10 game points and the Catch-Player is rewarded 10 game points. The #0 PNP is returned to the Shaker bag. Players cannot make DIAG or DIAGNUM Scoring Calls with adjacent connecting PNPs in the dead-number spaces. Scoring-Calls made using these spaces would be considered Play-Move Violations if caught. Also, if a player makes an adjacent diagonal connection and calls "PLAY" instead of making a DIAG or DIAGNUM Scoring-Call, the Play-Move will stand once time has expired. Players can change their Game-Calls at any time before their 30 second time period expires. The last Game-Call made by a player stands once their 30 second time period expires. The Game-Calls "DIAG or DIAGNUM" are used when a player wants to score game points. To make a score, players must first place their PNP onto the game board adjacent to another PNP sitting in the diagonal row. The player must first calculate their perceived scoring points by using subtraction between all diagonal connecting PNPs and then use their DLP in front of their perceived scores before making any Game-Calls. When multiple diagonal connections are made, the combined differences between diagonal connecting PNPs are added first before making a Game-Call. As shown inFIG.11d, if Player "G" played the #5 PNP diagonally adjacent to the #3 PNP, Player "G" would make the Scoring-Call, "G-DIAG-2", the mathematical difference between the numbers (5 and 3=2). Player "G" would receive 2 Scoring Points. The Scoring-Call would be the same if Player "G" played the #3 PNP diagonally adjacent to the #5 PNP. As shown inFIG.11e, when the #0 PNP is Played Diagonally Adjacent to any other #PNP the player receives the scoring point difference between the #0 PNP played and the #PNP played against. For example, if Player "G" played the #0 PNP diagonally adjacent to the #3 PNP, Player "G" would make the Scoring-Call "G-DIAG-3" and receive 3 scoring points. When any #PNP is played diagonally adjacent to the #0 PNP the player has a DIAG or DIAGNUM scoring choice. It is possible for the players to make 1-4 diagonal row connections at the same time to score game points. Players must place their PNP onto the game board making connections with other adjacent connecting PNPs in the diagonal rows. The player first subtracts each of their differences separately, then adds their combined differences together, which gives the total scoring points for the player to call. Alternatively, DIAGNUM scoring can be utilized. During most turns, the number printed on the PNP played onto the game board will not equal the same number of scoring points made as previously illustrated. However, in certain turns, the player's scoring points equal the number printed on the PNP played. When this happens, players are rewarded double scoring points for the #PNP played. As shown inFIG.11f, if Player "D" played the #6 PNP adjacent to two #3 PNPs in different diagonal rows, the added differences between all diagonal connecting PNPs would equal 6, the same number as the #6 PNP played. Player "D" has a choice as to which Scoring-Call to make. Player "D" can make the Scoring-Call, "D-DIAG-6", and receive 6 scoring points. Player "D" could also make the Scoring-Call "D-DIAGNUM-12" and receive 12 points (double the #6 PNP played). Making the Game-Call DIAGNUM instead of DIAG earns the player double scoring points. Both Game Scoring-Calls are valid. 
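As read from the worked examples above, the scoring arithmetic appears to be: sum the absolute differences between the played PNP and each diagonally connecting PNP, and double the total with a DIAGNUM call when it equals the number on the played PNP. The sketch below encodes that reading; it is an interpretation for illustration, not an authoritative restatement of the rules.

```python
# Interpretive sketch of the scoring examples above: the score is the sum of
# absolute differences between the played PNP and each diagonally connecting
# PNP; a DIAGNUM call doubles it when that sum equals the played number.
def diag_score(played: int, diagonal_neighbors: list[int]) -> int:
    return sum(abs(played - n) for n in diagonal_neighbors)

def best_scoring_call(played: int, diagonal_neighbors: list[int]) -> tuple[str, int]:
    base = diag_score(played, diagonal_neighbors)
    if base == played and base > 0:
        return "DIAGNUM", base * 2
    return "DIAG", base

assert best_scoring_call(5, [3]) == ("DIAG", 2)         # the "G-DIAG-2" example
assert best_scoring_call(0, [3]) == ("DIAG", 3)         # the "G-DIAG-3" example
assert best_scoring_call(6, [3, 3]) == ("DIAGNUM", 12)  # the "D-DIAGNUM-12" example
```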
If Player “D” calls “D-Diag-3” it would be an Under-Call scoring violation. If a player makes a diagonal connection to score points and call “Play” instead of DIAG or DIAGNUM, the “Play-Move” made will stand. No points would be scored once the time period to change the call had expired. Players should not over/under call their perceived scoring points. Overcalling scoring points by players is a violation subjected to have their game scores decreased if caught. Under calling points enables other players scores to increase. For example, if Player “D” makes the Scoring-Call “D-DIAG-3” and the possible points was 6, the next player in turn can call out Player “D” Under-Call scoring point violation after the time had expired before Player “D” corrects the Scoring-Call. The Catch-Call would be “I-CATCH-D-DIAG-3”. The Caught-Player “D” would be rewarded 3 scoring points out of the 6 possible points and the Catch-Player “I” would receive the extra 3 under called points. If Player “D” overcall scoring points for example, “D-DIAG-14” and the possible points to be made was only 4, the overcalled difference is 10 scoring points. If caught by Player “I”, Player “I” would not receive any scoring points however, 10 game points would be subtracted from Player “D” game score. If Player “D” game score goes backwards less than “0” points Player “D” would be eliminated from the game. For example, if Player “D” game score is currently 5 points and makes the Scoring-Call “D-DIAG 14” and the possible points to be scored was 4, 10 overcalled scoring points would be subtracted from Player “D” 5 game point score and Player “D” score would go backwards less than 0 and be eliminated from the game. “Recover-Calls” are used at times when a PNP is discovered sitting on the game board atop a different number than itself after a full round of play, or found sitting in a black (unusable) space and considered a Play-Move Violation. Recovery calls can be made by any player in turn only after a complete Play-Round to them have been made. The Game-Call for recovering a misplaced PNP is for example, “I-RECOVER-PLAY-3”. The #3 PNP is recovered off the game board and returned to the Shaker bag. The Recovery-Player “I” is rewarded 3 scoring points, the number printed on the PNP. When the #0 PNP is recovered the Recovery-Player receives 10 scoring points. It would be considered a Play-Move Violation when any two of the same number PNPs are found on the game board in the same row in any direction. With each game board the mathematical differences between all adjacent numbers in all rows equals two or greater without repeating any number in the rows. By comparing the printed number on the game board against the PNP number played atop, players can easily identify which PNP played is incorrect. In reference with the preceding overview of the game rules, a description of play is provided herein to illustrate typical play sequences. This description of play will illustrate and demonstrate a fictitious game playing round with the Diagonal Number Board game using four players. It is assumed the four players are, Players “D, I, A. and G” assembled around the Diagonal Number Board game, and the scorekeeper is an independent participant. It is further assumed the multicolor-side of the #6 game board was selected for play and all forty PNPs are accounted for and placed into the Shaker bag. 
The 30 second sand timer sits at rest with the scorekeeper and all players' Twin Scoring Pins are positioned at the "0" START line on the Quad-Person DIAGNUM Scoreboard. The game is initialized by each player D, I, A, and G withdrawing a single game board PNP from the Shaker bag and placing it into their own DLP Holding-Cell, far left #1 D-Number-Cell position. This is to see which player withdraws the lowest PNP for the chance to play first onto the Diagonal Number Game board. If a tie occurs, the tied players retain their withdrawn PNPs in their respective Holding-Cells and withdraw a new PNP from the Shaker-Bag. The selection of new PNPs continues until one player has the lowest PNP number. The draw goes as follows. Player "D" withdraws the #0 PNP; Player "I" withdraws the #1 PNP; Player "A" withdraws the #5 PNP; and Player "G" withdraws the #3 PNP. Player "D", winning the draw for first play onto the game board, must consider where on the game board to play their #0 PNP. The #0 is printed in four different spaces throughout the game board, GSP #'s 22, 28, 31, and 44. Player "D" has three Game-Call options: (1) PASS up their turn; (2) Make a Play-Move Out-of-Holding to rid the PNP; or (3) Withdraw a new PNP from the Shaker bag. The timer is not used until a player withdraws a PNP from the Shaker bag and places it into their Number Piece Holding-Cell. Players can take a reasonable amount of time (10 seconds) to make a Game-Call. Player "D" decides to pass up their turn by making the Game-Call, "D-PASS". Player "D" #0 PNP is retained in their Number Piece Holding-Cell until their next turn in play. Because the timer is not used when playing Out-of-Holding, the player's Game-Call stands and cannot be changed. The scorekeeper then calls, "I-Time", and the turn moves to Player "I". Player "I" has the #1 PNP retained in their Number Piece Holding-Cell, with the same three Game-Call options as Player "D". Player "I" can take a reasonable amount of time (10 seconds) to evaluate the previous player's Game-Call before making a Game-Call or withdrawing a PNP from the Shaker bag. As shown inFIG.11a, the forty-nine Grid Space Position numbers (GSP #s) of the Diagonal Number Game board are numerated. As shown inFIG.11b, Player "I" can recognize the #1 is located in one white "dead-number-space" GSP #33, one blue "quad-connection space" GSP #18, and two tan "double-connecting spaces" GSP #4 and #8. Player "I" then decides to play Out-of-Holding rather than passing their turn or withdrawing another PNP and places their #1 PNP atop the tan space GSP #8 and makes the Play-Move, "I-PLAY-1". The scorekeeper calls "A-Time", and the turn moves to Player "A". Player "A" has the #5 PNP retained in their Number Piece Holding-Cell, with the same three playing options as previously described above, plus the option to make a Catch-Call on Player "I". Player "A" can take a reasonable amount of time (10 seconds) to evaluate Player "I" Play-Move. Recognizing that the #5 is located at two white spaces GSP #21 and #29, one blue space GSP #10, and one tan space GSP #48, and that no Catch-Call is possible, Player "A" decides to play Out-of-Holding rather than passing their turn or withdrawing another PNP. Player "A" places their #5 PNP atop the blue space GSP #10 and makes the Game-Call, "A-PLAY-5". The scorekeeper calls, "G-Time". The turn now moves to the next Player "G". 
Player “G” has the #3 PNPs in their Number Piece Holding-Cell and sees the #3 on the game board is located in the tan space GSP #2 two blue spaces GSP #16 and #30, and one white space GSP #47. Player “G” places their #3 PNP atop the tan space GSP #2 and makes a diagonal connection with the #1 PNP at GSP #8 and the #5 PNP at GSP #10. Player “G” then makes the Scoring-Call, “G-DIAG-4”, the mathematical differences between all diagonal connecting PNPs added together (3-1)+(5-3)=4. If no Catch-Call was made the scorekeeper advances Player “G” 4 scoring points then calls “D-Time”. The turn moves back to Player “D”. As shown inFIG.11c, Player “D” still has the #0 PNP retained in their Number Piece Holding-Cell from their previous turn. Having four of five playing options available and no PNPs to recover, Player “D” decides to play Out-of-Holding and places their #0 PNP atop the #0 at the tan spaces GSP #22 and make the Game-Call, “D-PLAY-0” The scorekeeper calls the next player in turn, “I-Time”. The turn then moves to Player “I”. Player “I” not having any PNPs currently in their Number Piece Holding-Cell must withdraw a PNP from the Shaker bag, or make a Catch-Call on Player “D” Play-Move, or pass up their turn. Player “I” withdraws the #3 PNP and places it into their Number Piece Holding-Cell and the scorekeeper starts the timer. Player “I” must make a Game-Call within the 30 second time period or the #3 PNP is retained in their Number Piece Holding-Cell. Player “I” finds the #3 at the white space GSP #47 and two blue spaces GSP #16 and #30. Player “I” decides to place their #3 PNP atop the #3 in the blue space GSP #16 and make the Scoring-Call I-DIAG-7, the combined differences added together between all three diagonally connected PNPs, (3−1)+(5−3)+(3−0)=7. The scorekeeper waits until the timer expires before calling in the next player in turn. This allows the current Player “I” time to change their Scoring-Call and the other players time to examine the Scoring-Call made to make a Catch-Call for a Scoring-Call or Play-Move violation. Player “I” can change their Scoring-Call as long as the 30 second sand timer is running. When the 30 second sand timer runs out of sand the player's time expires and the scorekeeper calls in the next player in turn. If no Catch-Call is made the scorekeeper advances Player “I” game score 7 scoring points and calls “A-Time”. The turn moves to Player “A”. Player “A” not having any PNPs currently in their Number Piece Holding-Cell must withdraw a PNP from the Shaker bag, or make a Catch-Call on Player “I” Scoring-Call, or pass up their turn. Player “A” withdraws the #1 PNP and places it into their Number Piece Holding-Cell and the scorekeeper starts the timer. Player “A” must make a Game-Call within the 30 second time period or the PNP is retained in their Number Piece Holding-Cell. Player “A” places their #1 PNP atop the #1 tan space GSP #4 and makes a diagonal connection with the #5 PNP at GSP #10. Player “A” makes the Scoring-Call, “A-DIAG-4”. After the time has expired for Player “A” and no Catch-Calls were made, the scorekeeper advances Player “A” game score 4 scoring points and calls, “G-Time”. Player “G” not having any PNPs currently in their Number Piece Holding-Cell must withdraw a PNP from the Shaker bag, make a Catch-Call on Player “A” Scoring-Call, or pass their turn. Player “G” withdraws the #6 PNP and places it into their Number Piece Holding-Cell and the scorekeeper starts the timer. 
Player “G” places their #6 PNP atop the #6 blue space at GSP #24 and makes a diagonal connection with the #3 PNP at the blue space GSP #16. Player “G” makes the Scoring-Call “G-DIAG-3” and receives 3 scoring points. The turn loops back around to Player “D”. Player's Turn-In-Play continues to move in the clockwise direction around the board game. When the last of all forty PNPs are withdrawn from the Shaker bag and played onto the game board, the “Game-Round” ends regardless how many PNPs are retained in each player's Number Piece Holding-Cell. The next Player-In-Turn cannot make any Play-Moves or Scoring-Calls. At the end of each Game-Round the scorekeeper decides if a player has reached the 99 game points to win the game. If no player has reached 99 game points, the sum total of all remaining PNPs left in each player's Holding-Cell is subtracted from each player's total game scores and the PNPs are returned to the Shaker bag. The “#0” PNP has no point value when counting. A new Game-Round must be started and repeated until a winner has been determined. Each new Game-Round starts with the draw for first play onto the game board. After a player is declared the winner of the game, the Diagonal Number Game board is removed from the Game board Cartridge and dumped of the PNPs into the Shaker bag for storage. Supported Features and Embodiments The detailed description provided above in connection with the appended drawings explicitly describes and supports various features of a mechanical-mathematical diagonal number board game with 2-sided interchangeable game boards. By way of illustration and not limitation, supported embodiments include a system for a mechanical-mathematical diagonal number board game comprising: a game timer; a game board cartridge; a plurality of game boards for inserting into the game board cartridge; a play base having a plurality of grids and plurality of timer spots for receiving the game timer with each grid having a plurality of holding cells for receiving one of the plurality of numbered game tiles; a game board cartridge carrier for removeably holding the game board cartridge with the game board cartridge receiving at least one of the plurality of game boards; and a plurality of numbered game tiles with each of the plurality of tiles having playing tile indicia corresponding to an integer selected within the range of 0 to 9; wherein the game board cartridge is positioned over the play base and can be rotated in relation thereto; wherein each of the plurality of game boards is marked with a game board grid having a plurality of rows and columns forming a plurality of squares, with each of the plurality of squares having a game board indicia corresponding to an integer selected within the range of 0 to 9, and with none of the integers repeating within the same row, same column, or with respect to a connecting diagonal square; and wherein the plurality of tiles are drawn during game play, so that a player can make a diagonal connection to score points by matching the playing tile indicia with the game board indicia during a predetermined time period measured by the game timer. Supported embodiments include the foregoing system, further comprising: a shaker bag for storing the plurality of playing tiles. 
Supported embodiments include any of the foregoing systems, further comprising: a score board; and a pair of score pins consisting of a first score pin and a second score pin; wherein the first score pin is configured to track a score of 0-9, and the second score pin is configured to track a score of 10-90. Supported embodiments include any of the foregoing systems, wherein the plurality of game boards includes ten game boards. Supported embodiments include any of the foregoing systems, wherein each of the plurality of game tiles is a pyramid-shaped game piece. Supported embodiments include any of the foregoing systems, wherein the game board grid includes an arrangement of squares formed in seven rows and seven columns. Supported embodiments include any of the foregoing systems, wherein the game board grid includes a plurality of unused squares. Supported embodiments include any of the foregoing systems, wherein the plurality of game boards includes multicolor game boards, black and white game boards, or both types of game boards. Supported embodiments include any of the foregoing systems, wherein the game timer is an hourglass-shaped container having a sufficient amount of sand contained therein to measure a predetermined period of thirty seconds. Supported embodiments include any of the foregoing systems, wherein the game board cartridge includes a cartridge grid and the game board cartridge carrier is configured to allow the plurality of game tiles to fall through the cartridge grid. Supported embodiments include a method of playing a mechanical-mathematical diagonal number board game, comprising: orienting one of four players on a side of a square shaped board game base; prompting each of the four players to draw one of at least 40 game tiles from a shaker bag with each of the game tiles having a number selected from a group of numbers from 0 to 9 on a game board and being configured to insert into a game board cartridge mounted on a mechanical swivel device located at the center of the square shaped board game base; and making at least one game action selected from a group consisting of a play-move action, a score-call action, a catching-call action, and a passing action; wherein the game board is marked with a 7 square by 7 square grid, wherein each of the 7 square by 7 square grids aligns with the game board cartridge grid and is marked with a number selected from a group of numbers from 0 to 9, wherein the number does not repeat in another grid on a same row, column, or diagonal on the 7 square by 7 square grid; wherein the play-move action includes placing one of the at least 40 game tiles on the game board; wherein the score-call action includes announcing a score point with the score point including a sum of differences of numbers between at least two of the game tiles connected diagonally on the game board cartridge; wherein the catch-call action includes identifying any differences between the score point and the sum of differences of numbers between at least two of the game tiles connected diagonally on the game board cartridge, made by another player during a previous score-call action, and accumulating the differences on the score point; wherein the passing action includes taking no action; and wherein the score points of each of the four players are compared after all of the at least 40 game tiles are withdrawn from the shaker bag. 
Supported embodiments include the foregoing method, wherein one of the game actions includes a play-call action in which a player conducts a play-move and a score-call in the same round, wherein a pyramid number piece placed results in a score point. Supported embodiments include any of the foregoing methods, wherein one of the game actions includes keeping track of the score points on a score board and at least a first score pin and a second score pin, wherein the first score pin is configured to track a score of 0-9, and the second score pin is configured to track a score of 10-90. Supported embodiments include any of the foregoing methods, wherein the game board is one of ten game boards, each of the game boards being designated by a number selected from 0 to 9 on the square that is in the top left corner of the 7 square by 7 square grid. Supported embodiments include any of the foregoing methods, wherein the game boards include at least one of a multicolor game board and a black and white game board. Supported embodiments include any of the foregoing methods, wherein one of the game actions includes keeping track of time available for a game call on a timer, with the timer having an hourglass configured to keep track of 30 seconds. Supported embodiments include any of the foregoing methods, wherein the game board cartridge comprises a 7 square by 7 square grid that is marked from 1 to 49. Supported embodiments include any of the foregoing methods, wherein one of the game actions includes collecting the at least 40 game tiles by removing the game board from the cartridge grid and allowing the game tiles to fall through the game board grid onto the game board trap area. Supported embodiments include any of the foregoing methods, wherein one of the game actions includes subtracting score points from a player during a catch-call in which the player announced a score point that is higher than the sum of differences of numbers between at least two of the game tiles connected diagonally on the game board. Supported embodiments include any of the foregoing methods, wherein one of the game actions includes eliminating a player when the player's score point falls below 0. Supported embodiments include an apparatus, a kit, and/or means for implementing the foregoing systems, methods, or a portion thereof. It is to be understood that the configurations and/or approaches described herein are exemplary in nature, and that the described embodiments, implementations and/or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific processes or methods described herein can represent one or more of any number of processing strategies. As such, various operations illustrated and/or described can be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Likewise, the order of the above-described processes can be changed. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are presented as example forms of implementing the claims.
54,999
11857885
MODE FOR EMBODYING THE INVENTION FIG.1is a perspective view showing a cubic puzzle obtained by application of the present invention, andFIG.2is a perspective view showing a state in which the cubic puzzle shown inFIG.1is partially exploded. An illustrated cubic puzzle1belongs to a Rubik's cube capable of being changed into the shape of a cubic block. In the Rubik's cube of this type, there is a Rubik's cube called “n×n×n Rubik's cube” constituted such that each of six square outside faces forming the block shape has a grid-like pattern in n columns and n lines, wherein “n” represents an integer of not less than 2. Referring to the illustrated embodiment, a value of “n” is set to 3. The cubic puzzle1has a core member2arranged at a center side of the cubic puzzle and formed in a spherical shape, a plurality of pieces3arranged side by side so as to cover the whole or approximately whole front, back, right, left, upside and downside circumferences of the core member2, a support mechanism4for supporting the plurality of pieces3to the core member2in a rotatable manner, and a controller including an electric motor6(seeFIGS.5and6) being an actuator for rotationally driving the pieces3. The core member2, the pieces3and the support mechanism4constitute a puzzle body1a. Each piece3has one or more square facets8configuring a part of the six outside faces of the puzzle body1a(the cubic puzzle1) which is in a state of being changed into the block shape. More specifically, each of the above outside faces is constituted of n×n (3×3=9, in this embodiment) facets8in total. In other words, a grid-like pattern displayed on each outside face is obtained with the plurality of facets8arranged in grid-like array in n columns and n lines. The support mechanism4is constituted so as to support the corresponding pieces3to the core member2so that the pieces are rotatable centering around three rotation axes X, Y, Z being three virtual linear axes perpendicularly or approximately perpendicularly intersecting one another at the center of the core member2. Each of the rotation axes X, Y, Z is vertical to a pair of corresponding parallel outside faces of the puzzle body1awhich is in the state of being changed into the block shape. A plurality of rotating units9(three rotating units, in the illustrated embodiment) each including the plurality of pieces3(eight pieces, in the illustrated embodiment) arranged in a square annular shape circumferentially around any arbitrarily selected one (the rotation axis X, for instance) of the three rotation axes X, Y, Z are in side-by-side arrangement in the axial direction of the rotation axis X. The same grouping as the above is also applied to each of the two rotation axes Y, Z other than the rotation axis X. Then, the rotating units9(nine rotating units, in this embodiment) as many as a value obtained by multiplying three or the number of the rotation axes X, Y, Z by the above value of “n” are provided to constitute one puzzle body1a. It is noted that the pieces3each positioned at a vertex portion where three outside faces are gathered resulting from changing the puzzle body1ainto the block shape form corner pieces3A each having three facets8configuring each part of the three outside faces. By the way, for the cubic puzzle1made up of a type of 2×2×2 Rubik's cube, all the pieces3having the facets8configuring the outside faces form the corner pieces3A. 
For the cubic puzzle1made up of a type of 3×3×3 Rubik's cube, the pieces3include not only the corner pieces3A but also edge pieces3B each positioned at a cross-sectionally L-shaped corner portion defined by the two outside faces formed resulting from changing the puzzle body1ainto the block shape and having two facets8configuring each part of the above two outside faces and center pieces3C each positioned at the center of each of the six outside faces formed resulting from changing the puzzle body1ainto the block shape and having only one facet8configuring each part of the above six outside faces. The center pieces3C are provided as many as the number of outside faces, that is, the six outside faces, wherein each center piece is positioned on any one of the rotation axes X, Y, Z and is supported to the core member2rotatably centering around the one rotation axis X, Y, Z. For the cubic puzzle1made up of a type of 4×4×4 Rubik's cube, a plurality of middle pieces are further included as the pieces3positioned on each of the six outside faces formed resulting from changing the puzzle body1ainto the block shape and having only one facet8configuring each part of the above six outside faces. There are provided four middle pieces3in total which are arranged in array in two columns and two lines at positions surrounded by the four corner and eight edge pieces3A and3B arranged in the square annular shape at each outside face side, wherein a grid-like pattern displayed on each outside face is obtained with the facets of the middle pieces together with those of the above corner and edge pieces3A and3B. Even the cubic puzzle1of this type has the center piece3C at every one of the six outside faces, wherein this center piece3C is arranged on any one of the rotation axes X, Y, Z in between the middle piece and the core member2without having any facet8. This center piece3C is supported to the core member2rotatably centering around the one rotation axis X, Y, Z. By the way, even for the cubic puzzle made up of the type of 2×2×2 Rubik's cube, the center piece3C having no facet8is provided inside of the cubic puzzle1. In generalizing the above, when the value of “n” is set to an odd number of not less than 3, the center piece3C having one facet8is arranged such that the facet8thereof is positioned on each outside face, wherein the each outside face is constituted of the facets8of four corner pieces3A, 4×(n−2) edge pieces3B, one center piece3C and ((n−2)×(n−2)−1) middle pieces. Meanwhile, when the value of “n” is set to an even number of not less than two, the center piece3C having no facet8is arranged inside of the puzzle body1a, wherein each outside face is constituted of the facets8of four corner pieces3A, 4×(n−2) edge pieces3B and (n−2)×(n−2) middle pieces. Hereinafter will be described a cubic puzzle constitution by taking the type of 3×3×3 Rubik's cube for instance. The puzzle body1a(the cubic puzzle1) is subjected to change into the block shape by the manner in which the turning positions of all the rotating units9arranged on any arbitrarily selected one (the rotation axis X, for instance) of the three rotation axes X, Y, Z are made to coincide or approximately coincide as viewed in the axial direction of the rotation axis X. 
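By way of illustration and not limitation, the facet counts per outside face given above can be summarized in the following Python sketch; the function name and the returned dictionary layout are illustrative choices for this sketch, not terms used in this description.

    def facet_counts_per_face(n: int) -> dict:
        # Facet counts on one outside face of an n x n x n puzzle body, by piece type.
        if n < 2:
            raise ValueError("n must be an integer of not less than 2")
        corner = 4                           # four corner piece facets, one at each corner
        edge = 4 * (n - 2)                   # edge piece facets along the four sides
        if n % 2 == 1:                       # odd n: the center piece 3C shows one facet
            center = 1
            middle = (n - 2) * (n - 2) - 1
        else:                                # even n: the center piece 3C has no facet
            center = 0
            middle = (n - 2) * (n - 2)
        counts = {"corner": corner, "edge": edge, "center": center, "middle": middle}
        assert sum(counts.values()) == n * n   # every outside face is an n x n grid of facets
        return counts

    # facet_counts_per_face(3) -> {'corner': 4, 'edge': 4, 'center': 1, 'middle': 0}
    # facet_counts_per_face(4) -> {'corner': 4, 'edge': 8, 'center': 0, 'middle': 4}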
When performing a pattern changing operation by which the puzzle body1ain a state of being changed into the block shape is subjected to re-change into the block shape by turning any selected one of all the rotating units9by quarter or approximately quarter turns relative to the other rotating units9arranged on the same rotation axis X, Y, Z as the rotation axis of the one rotating unit9, a combination of the facets8configuring each of the outside faces arranged in the annular shape circumferentially around the above same rotation axis X, Y, Z is changed. In other words, a combination of the plurality of pieces3constituting each of the other rotating units9arranged on the rotation axes Y, Z other than the one rotation axis (the rotation axis X, for instance) of the rotating unit9as a target of the pattern changing operation is changed at every pattern changing operation. Each facet8has a predetermined character, color or pattern, or alternatively, a combination thereof which is displayed in a fixed fashion thereon by direct printing, sticking of a sheet such as a seal printed with the above or like means so as to be allowed to differ in display contents on the four outside faces arranged in the annular shape circumferentially around the rotation axis X, Y, Z of the rotating unit9as the target of the pattern changing operation, at every pattern changing operation. Then, a display pattern including all the display contents on each of the six outside faces of the puzzle body1a(the cubic puzzle1) having been changed into the block shape is changed at every pattern changing operation. In this process, a predetermined display pattern is specified as a reference pattern. For instance, the display pattern specified as the reference pattern may also include a display pattern in which all the plurality of facets8positioned on the same outside face have the same or approximately same color, while colors of the six outside faces are mutually different. In this case, six mutually different colors are prepared, and the nine facets8are provided for every one of the six mutually different colors. Further, a display pattern in which the six outside faces have mutually different patterns may be also included. Then, the cubic puzzle1with the display pattern changed to the reference pattern is subjected to a plurality of times of pattern changing operations to change from the reference pattern to an arbitrary display pattern. The cubic puzzle1having been changed into the block shape with the arbitrary display pattern other than the reference pattern displayed thereon in this manner results in making only the display contents on one of the outside faces identical with the display contents on one outside face forming the reference pattern (for instance, one outside face is only constituted of the facets of the same color), and also in making the display contents on all the six outside faces identical with the reference pattern, so that it can be believed that this cubic puzzle is enjoyed in a way of performing a plurality of appropriate pattern changing operations and so on. Next will be described the constitution of each piece3with reference toFIGS.2and3. FIGS.3(A) and3(B)are respectively an exploded perspective view showing an outside face side of the center piece and a perspective view showing an inside face side thereof. 
The center piece3C has an engagement part11being a plate-like portion formed in a curved surface shape so as to conform to an outside face of the spherical core member2and arranged such that a center side of the engagement part is positioned on any one of the rotation axes X, Y, Z, and a body part12formed in a square shape as viewed in the axial direction of the one rotation axis X, Y, Z and integrally protruding stepwise outwards (in a direction away from the core member2) in the axial direction of the one rotation axis X, Y, Z from a portion close to the center of the engagement part11. The body part12has, in a range of its most part close to the center in an outside face of the body part12, a recess part13having a square shape as viewed in the axial direction of any one of the rotation axes X, Y, Z passing the center of the body part12, the recess part13being concaved inwardly in the axial direction of the one rotation axis X, Y, Z. The center piece3C has, on a flat bottom face of the recess part13, a receiving groove14concaved more deeply than the recess part13by a step. The receiving groove14in the center piece3C is formed in a shape (a cross shape in the illustrated embodiment) other than a circular shape such that crosses are formed on any one of the rotation axes X, Y, Z passing the center of the center piece3C and the center of the cross is positioned on the one rotation axis X, Y, Z as viewed in the axial direction of the one rotation axis X, Y, Z. The receiving groove14has, on its center part, an insertion hole16penetrating the center piece3C in the axial direction of any one of the rotation axes X, Y, Z passing the center of the center piece3C in which the receiving groove14is formed. The center piece3C further has a fixing member17and a square panel18fittingly fixed to the recess part13. The fixing member17has, as an integral unit, a cylindrical shaft part17ahaving a shaft center on any one of the rotation axes X, Y, Z so as to be fittingly insertable into the insertion hole16and a fitting part17bhaving the same shape (a cross shape in the illustrated embodiment) as the shape of the receiving groove14in the axial direction of the one rotation axis X, Y, Z so as to be fittingly receivable in the receiving groove14. The fixing member17is constituted so as to fittingly receive the whole of the fitting part17bin the receiving groove14such that the shaft part17ais inserted through the insertion hole16. In this state, a distal end of the shaft part17aof the fixing member17faces to an inside face side being a core member2-side face of the engagement part11. When detachably attaching and fixing the distal end of the shaft part17ato a drive shaft19having a shaft center coaxial with that of the shaft part17aand rotationally driven by the motor6, the center piece3C is supported to the core member2-side rotatably centering around any one of the rotation axes X, Y, Z. In other words, the center piece3C is rotated integrally with the drive shaft19wherein each rotation axis X, Y, Z is made the shaft center. By the way, when fittingly fixing the drive shaft19to an inner peripheral face side of the cylindrical shaft part17awith a screw15, the center piece3C is detachably attached and fixed to the drive shaft19. The panel18is fittingly fixed to the recess part13in a detachable manner in a state where the fitting part17bof the fixing member17is received in the receiving groove14while the shaft part17aof the fixing member17is inserted through the insertion hole16. 
An outside face of the panel18forms the facet8. Fixation of the panel18to the inside of the recess part13may be also by attracting the inside face of the panel18to a bottom face side of the recess part13with a magnet21installed on a bottom face of the recess part13, or alternatively, by fittingly fixing the panel18to the recess part13in the detachable manner. For fixation of the panel by means of attraction with the magnet21, the panel18is formed of metal or like material which is made attractable by the magnet21. Otherwise, the panel18is formed of a synthetic resin material. Further, an elastic member such as a compression spring22arranged between the fitting part17band the bottom face of the receiving groove14elastically energizes the engagement part11of the center piece3C and the other pieces3engaged with the engagement part11toward the core member2-side, thereby allowing integration of the center piece3C with the other pieces3constituting one rotating unit9together with the above center piece3C to be promoted. It is noted that a connector for connection to an external power supply may be also installed in the recess part13. In this case, the cubic puzzle1is charged by making connection between the connector in an exposed state resulting from removal of the panel18from the recess part13and a power supply terminal for power supply from the external power supply. Alternatively, the connector may be also exposed from the panel18so as to enable use of the connector with the panel18fittingly fixed to the recess part14. The corner piece3A has an engagement part23being a plate-like portion formed in a curved surface shape so as to conform to the outside face of the spherical core member2and a body part24. The body part24is formed partly in such cutout shape as obtained by cutting out one vertex portion of a cube. A resultant cutout portion of the body part24configures a recess part24aformed in a concavely curved surface shape so as to conform to the engagement part23. The engagement part23is arranged at a recess part24a-side of the body part24and connected to the body part24. Meanwhile, each of the three faces respectively uncontacted with the recess part24ain the body part24formed in a cubic shape forms the facet8. The corner piece3A is supported to the core member2-side rotatably in the direction around any one of the rotation axes X, Y, Z by engagement with the plurality of adjacent edge pieces3B. The edge piece3B has engagement parts25,26each formed in a circular arc shape so as to conform to the outside face of the spherical core member2and a body part27. The body part27is formed partly in such cutout shape as obtained by cutting out the whole of a corner portion where the two outside faces among the six outside faces cross each other. The engagement part26is integrally extended toward the center piece3C-side from each of the opposite end sides close to the center piece3C in a recess part27abeing a resultant cutout portion of the body part27, while the engagement part25is integrally extended toward an edge piece3B-side from each of the opposite end sides close to the edge piece3B in the recess part27a. Meanwhile, each of the two faces adjacent to the center piece3C in the body part27forms the facet8, with the puzzle body1achanged into the block shape. 
The edge piece3B is supported rotatably in the direction around any one of the rotation axes X, Y, Z by engagement of a pair of engagement parts26,26extending toward the center piece3C-side in a position between the engagement part11of the center piece3C and the outside face of the core member2. Meanwhile, the corner piece3A is supported rotatably in the direction around any one of the rotation axes X, Y, Z by engagement of the engagement part23thereof in a position between the engagement part25extending toward the corner piece3A-side in each of the three edge pieces3B adjacent to the corner piece3A and the outside face of the core member2. By the way, in the illustrated embodiment, the engagement part23of the corner piece3A does not reach the engagement part11of the center piece3C yet when the puzzle body1ais being changed into the block shape, whereas it gets positioned between the engagement part11of the center piece3C and the outside face of the core member2at the time of the pattern changing operation. In this case, however, it is also allowable to enlarge an extent of the engagement part23of the corner piece3A so as to allow the engagement part23to be positioned between the engagement part11of the center piece3C and the outside face of the core member2, with the puzzle body1A changed into the block shape. The faces other than the face forming the facet8among the faces of the body parts12,24and27in the pair of mutually adjacent pieces3,3constituting the part of the rotating unit9mutually make transmission of force in close proximity to or contact with each other in a linear manner as viewed in the axial direction of any one of the rotation axes X, Y, Z being the rotation fulcrum of the rotating unit9. For that reason, when applying rotational driving force being force of rotation in the direction around any one of the rotation axes X, Y, Z to at least one of the plurality of pieces3constituting the rotating unit9, all the pieces3constituting the rotating unit9are integrally rotated. The mutually adjacent ones of the engagement parts23,25,26of the plurality of pieces3,3arranged in a cubic annular shape around any one of the rotation axes X, Y, Z of the individual rotating unit9are connected together into the form of an annular configuration on the whole as viewed in the axial direction of the one rotation axis X, Y, Z, thereby enabling smooth rotation of the rotating unit9to be obtained. By the way, a position of rotation of one of the plurality of rotating units9arranged in the axial direction of any one of the rotation axes X, Y, Z is relatively changed by rotational driving of the remaining rotating units9by the motor6. Thus, one of the plurality of rotating units9arranged in the axial direction of the one rotation axis X, Y, Z forms a non-drive side rotating unit9A having no need of being rotationally driven by the motor6, while each of the remaining rotating units9forms a drive side rotating unit9B rotationally driven by the motor6. For the illustrated cubic puzzle1made up of the type of 3×3×3 Rubik's cube, each of the six rotating units9positioned at the outside face side at the time when the puzzle body is changed into the block shape is specified as the drive side rotating unit9B, while each of the three rotating units9sandwiched between the two drive side rotating units9B,9B in the axial direction of the one rotation axis X, Y, Z is specified as the non-drive side rotating unit9A. 
Then, the rotational driving force from the motor6is transmitted to one piece3(the center piece3C in the illustrated embodiment) constituting each drive side rotating unit9B to rotationally drive the each drive side rotating unit9B. It is noted that the motor6may be also provided for each drive side rotating unit9B, or alternatively, it is also allowable to provide a clutch mechanism so that the number of motors6is reduced. For instance, provided that there is one motor6, a clutch mechanism for intermittently transmitting the rotational driving force from the motor6to the drive side rotating units9B may be also provided for each of the drive side rotating units9B. In this case, if the clutch mechanism is constituted so as to be capable of controlling intermittent transmission in response to an electric signal, use of one motor6and the six clutch mechanisms enables the six drive side rotating units9B to be rotationally driven. Alternatively, provided that there is the motor6for each of the rotation axes X, Y, Z, a clutch mechanism for intermittently transmitting the rotational driving force from the motor6to the drive side rotating units9B may be also provided for each drive side rotating unit9B on each of the rotation axes X, Y, Z. In this case, if each of the three clutch mechanisms is constituted so as to be capable of controlling intermittent transmission in response to an electric signal, use of the three motors6and the three clutch mechanisms enables the six drive side rotating units9B to be rotationally driven. It is noted that the middle piece3has an engagement part formed in a curved surface shape so as to conform to the outside face of the spherical core member2, and a body part being a protrusion part formed so as to protrude outwards from the engagement part, wherein a flat outside face of the body part forms the facet8. Further, the center piece3C having no facet8has an engagement part for causing the four corner pieces3A or four middle pieces adjacently arranged around any one of the rotation axes X, Y, Z to integrally rotate in the direction around the one rotation axis X, Y, Z, while permitting the four corner pieces3A or four middle pieces to rotate around the rotation axes X, Y, Z other than the one rotation axis. The rotational driving force from the motor6is transmitted to the center piece3C having no facet8. Next will be described the core member2, the support mechanism4and a controller with reference toFIGS.2,4,5and6. FIGS.4and5are a perspective view and an exploded perspective view respectively showing a core member and various parts included therein, andFIG.6is a perspective view showing the arrangement configuration of a motor, a transmission mechanism and a rotation sensor. The core member2has a unitizing part28individually provided for each of the six motors. There are provided two unitizing parts28for each of the rotation axes X, Y, Z. Each unitizing part28has an inside part piece member29arranged at an inside being a side close to the center of the core member2and an outside part piece member31arranged at an outside being a side away from the center thereof. The inside part piece member29has a first recess part29aon a face (an outside face) at an outside being a side away from the center of the core member2and a second recess part29bon a face (an inside face) at an inside being a side close to the center thereof. 
Meanwhile, the outside part piece member31has a recess part31aon an inside face being a face at the inside of the outside part piece member31, the recess part31abeing opened toward the center of the core member2. The first recess part29aand the recess part31aare formed in the same or approximately same oblong hole shape in cross sectional view in the axial direction of any one of the rotation axes X, Y, Z passing the unitizing part28in which these recess parts are formed. The first recess part29aof the inside part piece member29is opened toward the outside and arranged on any one of the rotation axes X, Y, Z passing the inside part piece member29. When detachably attaching and fixing the outside part piece member31to the outside face of the inside pat piece member29with screws or the like so as to cover the first recess part29a, the first recess part29aand the recess part31are integrally connected together so that a single installation space is formed. A transmission mechanism32for transmitting the rotational driving force from the motor6to the drive shaft19is housed in the installation space. The drive shaft19constituting the part of the transmission mechanism32gets protruding outwards from the core member2after passing through an insertion hole31bformed ranging from a bottom face of the recess part31aof the outside part piece member31to an outside face the outside part piece member31. The second recess part29bof the inside part piece member29is arranged at a position offset by a predetermined distance from any one of the rotation axes X, Y, Z passing the inside part piece member29, and communicates with the first recess part29athrough a communication hole29aformed in parallel to the one rotation axis X, Y, Z. The second recess part29bis opened toward the inside, and to which a part of the motor6is fixed after being detachably inserted therein in a fitted state. When the motor6is fixed to the second recess part29b, an output shaft6aof the motor6gets protruding toward the inside of the first recess part29aafter passing through the communication hole29c, so that an output gear33is attached and fixed to a portion protruding toward the inside of the first recess part29ain the output shaft6a. By the way, the output shaft6aof the motor6is held in a posture parallel or approximately parallel to any one of the rotation axes X, Y, Z passing the unitizing part28to which the motor6is fixed. The transmission mechanism32has not only the above drive shaft19but also a support shaft34arranged on any one of the rotation axes X, Y, Z passing the unitizing part28provided with the transmission mechanism32, a support shaft36arranged between the support shaft34and the output shaft6a, and a plurality of gears37,38,39,41,42,43,44respectively supported on the support shafts34,36. The two support shafts34,36are supported in parallel to the output shaft6aso as to be laid between the bottom face of the first recess part29aand that of the recess part31a. The large-diameter gear37rotatably supported on the support shaft36and constantly geared with the output gear33is rotated integrally with the small-diameter gear38rotatably mounted on the support shaft36. The large-diameter gear39rotatably supported on the support shaft34and constantly geared with the small-diameter gear38is rotated integrally with the small-diameter gear41rotatably mounted on the support shaft34. 
The large-diameter gear42rotatably supported on the support shaft36and constantly geared with the small-diameter gear41is rotated integrally with the small-diameter gear43rotatably mounted on the support shaft36. The support shaft34is provided with the large-diameter gear44constantly geared with the small-diameter gear43and rotated integrally with the support shaft34, in addition to the drive shaft19rotated integrally with the support shaft34, thereby allowing the rotational driving force from the motor6to be transmitted to the drive shaft19. The mutually adjacent ones of the six unitizing parts28each unitizing the motor6and the transmission mechanism32respectively provided for each drive side rotating unit9B are detachably fixed together with fixtures such as screws and bolts to form the core member2on the whole, With the above structure, the support mechanism4for supporting each piece3to the core member2-side rotatably centering around any one of the rotation axes X, Y, Z is constituted of the drive shaft19and the engagement parts11,23,25,26and the body parts12,24,27of each piece3. By the way, the motors6,6respectively installed at the pair of unitizing parts28,28positioned on any one of the rotation axes X, Y, Z are arranged at positions offset by a predetermined distance in the mutually opposite directions from the one rotation axis X, Y, Z. For more details, the motors6,6respectively at the pair of unitizing parts28,28on any one of the rotation axes X, Y, Z are arranged at symmetrical positions with respect to the one rotation axis X, Y, Z as viewed in the axial direction of the one rotation axis X, Y, Z. Such arrangement of the motors makes it possible to prevent mutual interference of the motors6,6respectively at the pair of unitizing parts28,28on any one of the rotation axes X, Y, Z from occurring when the motors are brought close to each other in the axial direction of the one rotation axis X, Y, Z. Further, each motor6has flat cutout faces6bformed at symmetrical positions with respect to the shaft center in a cross-sectionally circular-shaped outer peripheral face of the each motor. The motors6,6respectively at the two unitizing parts28,28on any one of the rotation axes X, Y, Z are fixed in a posture in which the cutout faces6bthereof are inclined by about 45 degrees to the outside faces at the side close thereto in the puzzle body1awhich is in the state of being changed into the block shape. Meanwhile, the motor6other than the above motors is fixed in a posture in which the cutout face6bthereof is in parallel or approximately parallel to the outside face at a side close thereto in the puzzle body1bwhich is in the state of being changed into the block shape, thereby preventing the transmission mechanism32including the large-diameter gear37arranged close to the above other motor from interfering with the cutout face6bof the other motor. Moreover, rotation (more specifically, such as the direction, position, amount and speed of rotation) of the support shaft34is capable of being detected by a rotation sensor (a rotation detecting means)46arranged at the side close to the center of the core member2. In the illustrated embodiment, the rotation sensor46is provided individually for each of the transmission mechanisms32to detect a cylindrical magnet45mounted on one end at a side away from the center of the core member2in the support shaft34and rotated integrally with the support shaft34. 
It is noted that the support shaft 34 has a bearing 50 at a position adjacent to the magnet 45, so that the bearing 50 allows the support shaft 34 to be supported to the core member 2 in an idling state. By the way, one end mounted with the magnet 45 in the support shaft 34 protrudes toward the center of the core member 2 from the inside part piece member 29 after passing through an insertion hole 29d formed ranging from the bottom face of the first recess part 29a of the inside part piece member 29 to the inside face thereof. Further, in this embodiment, the transmission mechanism 32 is provided for each drive side rotating unit 9B, so that there is provided the rotation sensor 46 individually for every one of the drive side rotating units 9B. For the cubic puzzle 1 made up of the type of 3×3×3 Rubik's cube, it is noted that although the rotation sensor 46 performs detection of the operation of the drive side rotating units 9B, it is possible of course to play this cubic puzzle by rotating or turning the non-drive side rotating unit 9A, wherein the operation of the non-drive side rotating unit 9A is detected by the rotation sensors 46, 46 for detection of the operation of the drive side rotating units 9B, 9B respectively arranged at the opposite sides of the non-drive side rotating unit 9A. Besides, a part of a control substrate 47 mounted with a microcomputer and one or more internal power supplies (not shown) such as cells and rechargeable batteries are housed in the core member 2 in a fixed fashion. Further, the core member 2 has, on its outside face, a motor driver 48 being an IC chip for controlling the presence/absence of rotational driving of the motor 6 and the direction of rotation thereof in response to an electric signal outputted from the microcomputer, in addition to various wirings 49 for appropriate electrical connection of the rotation sensor 46, the microcomputer, the control substrate 47 and/or the motor driver 48. It is noted that the motor driver 48 and/or the wirings 49 may be arranged of course at the inside of the core member 2 and/or other places. In this embodiment, the microcomputer, the power supply, the motor driver 48 and the wirings 49 or the like constitute a control unit 51 (see FIG. 7) for executing control such as drive control of the motor 6. Meanwhile, the controller described above is constituted of the control unit 51, the motor driver 48 and various detecting means including the rotation sensor 46. Thus, the core member 2 allows the controller to be also unitized thereinto together with the motor 6 and the transmission mechanism 32. Next will be described the contents of control executed by the controller with reference to FIGS. 7 and 8. FIG. 7 is a block diagram showing the constitution of a controller. To an input side of the control unit 51 are connected the six rotation sensors 46 and an acceleration sensor (an acceleration detecting means) 52. To an output side of the control unit 51 are connected the six motors 6. The acceleration sensor 52 is a tri-axial acceleration sensor installed at the core member 2-side and enables detection of three-dimensional operation of the puzzle body 1a to be performed in real time. The motor 6 is connected through the motor driver 48 constituting the part of the control unit 51 to an electric signal output port in the microcomputer constituting the part of the control unit 51 likewise. The control unit 51 has a storage unit 51a and stores various information in the storage unit 51a.
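By way of illustration and not limitation, the input/output relation of FIG. 7 can be summarized in the following Python sketch; the class name ControlUnit51 and its attribute names are assumptions made for this sketch rather than identifiers used in this description.

    from dataclasses import dataclass, field

    @dataclass
    class ControlUnit51:
        rotation_sensors: list       # the six rotation sensors 46, one per drive side rotating unit 9B
        acceleration_sensor: object  # the tri-axial acceleration sensor 52
        motors: list                 # the six motors 6, driven through the motor driver 48
        storage: dict = field(default_factory=dict)   # the storage unit 51a

        def __post_init__(self):
            # The storage unit 51a holds the current pattern and the time-ordered operation history.
            self.storage.setdefault("current_pattern", None)
            self.storage.setdefault("operation_history", [])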
The storage unit51ais constituted of a non-volatile memory or the like unitized into the microcomputer and capable of holding stored information even when the power supply is off. A RAM provides the microcomputer with an execution environment of a program for implementation of various processing and also may constitute a part of the storage unit51a. FIG.8is a flowchart showing a procedure of main processing of the controller. With start of the processing by turning the power ON, the processing proceeds to a step S101. By the way, the current pattern being the display pattern at that point of time concerning the puzzle body1ahaving been changed into the block shape is stored in the storage unit51a. In the step S101, a state of detection by each rotation sensor46is checked, and when confirmed that one pattern changing operation made by the player is detected, the processing proceeds to a step S102. In the step S102, the display pattern changed by the pattern changing operation detected in the step S101is derived as the current pattern, and then, the processing proceeds to a step S103. Namely, the rotation sensor46and the storage unit51aconstitute a pattern identifying means for identifying the display pattern of the puzzle body1ahaving been changed into the block shape. In the step S103, the current pattern to be stored in the storage unit51ais updated with the most current pattern derived from the past display pattern in the most recent step S102, while the latest operation history of the pattern changing operation detected in the most recent step S101is stored in the storage unit51awith time based on a relation with the past pattern changing operation, and then, the processing proceeds to a step S104. Meanwhile, when confirmed in the step S101that no pattern changing operation is detected, the processing proceeds to the step S104. In the step S104, it is checked whether or not a predetermined start state is detected, and when confirmed that the predetermined start state is detected, the processing proceeds to a step S105, whereas when confirmed that no predetermined start state is detected, the processing is returned to the step S101. In the step S105, automatic return control is executed such that a display pattern of the cubic puzzle1(the puzzle body1a) having been changed into the block shape with the display pattern other than the reference pattern displayed thereon is automatically changed to the reference pattern by one or more pattern changing operations with the motor6, and then, the processing is returned to the step S101. The start state refers to a state preliminarily prescribed in order to start the automatic return control and is capable of being arbitrarily set in matching with an interest and so on. In this embodiment, a state to be set as the start state is such that the acceleration sensor52detects that the puzzle body1ahaving been changed into the block shape with the display pattern other than the reference pattern displayed thereon stops its action on a predetermined place such as a horizontal plane and keeps a stably stationary condition. In other words, the start state in this embodiment means a state satisfying two conditions, that is, one condition that the puzzle body1ais changed into the block shape with the current pattern displayed as the display pattern other than the reference pattern, and the other condition that the puzzle body1astops its action on the predetermined place such as the horizontal plane and keeps the stably stationary condition. 
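By way of illustration and not limitation, the following Python sketch outlines the loop of steps S101 to S105 together with the two start-state conditions described above; it builds on the ControlUnit51 sketch given earlier, and detect_pattern_changing_operation, apply_operation and is_stationary are placeholder names for the sensor access and pattern bookkeeping, not identifiers from this description.

    import time

    def is_reference_pattern(pattern):
        # Reference pattern: every facet on a face shares one colour and the six
        # face colours are mutually different; `pattern` is assumed to map a face
        # name to a 3 x 3 grid of colour codes.
        if pattern is None:
            return False
        face_colours = set()
        for grid in pattern.values():
            colours = {c for row in grid for c in row}
            if len(colours) != 1:
                return False
            face_colours |= colours
        return len(face_colours) == 6

    def start_state_detected(ctrl):
        # Both conditions of the start state in this embodiment: the current pattern
        # differs from the reference pattern and the puzzle body rests stably.
        return (not is_reference_pattern(ctrl.storage["current_pattern"])
                and ctrl.acceleration_sensor.is_stationary())               # placeholder query

    def main_processing(ctrl):
        # Loop corresponding to steps S101 to S105 of FIG. 8.
        while True:
            op = detect_pattern_changing_operation(ctrl.rotation_sensors)   # step S101 (placeholder)
            if op is not None:
                new_pattern = apply_operation(ctrl.storage["current_pattern"], op)  # step S102 (placeholder)
                ctrl.storage["current_pattern"] = new_pattern                       # step S103
                ctrl.storage["operation_history"].append((time.time(), op))
            if start_state_detected(ctrl):                                  # step S104
                automatic_return_control(ctrl)                              # step S105, sketched further below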
Setting the start state in this manner allows the rotation sensor 46, the storage unit 51a and the acceleration sensor 52 to function as a start state detecting means for detecting the start state. Then, when detection of the start state in the step S104 is followed by the processing in a step S105 to start execution of the automatic return control, the current pattern, being the display pattern at that point of time other than the reference pattern, is read out from the storage unit 51a. Then, derivation processing follows to derive one or more pattern changing operations required so that the puzzle body 1a changed into the block shape with the read-out current pattern displayed thereon is re-changed into the puzzle body 1a changed into the block shape with the reference pattern displayed thereon. In this embodiment, either of the following two solutions is applied as the solution required for the derivation processing. One solution is use of a general solution in which each pattern changing operation is performed according to a procedure opposite to the procedure of, and in a direction opposite to (that is, by reversing the direction of rotation in each pattern changing operation), the one or more pattern changing operations required to change from the reference pattern to the current pattern; this is possible because the operation history of the one or more pattern changing operations is sequentially stored in the storage unit 51a with time as described above. The other solution is use of a unique solution different from the general solution. This unique solution has been heretofore well known, and hence, its details will be omitted. It is, however, noted that the unique solution enables derivation of the required pattern changing operations from only the current pattern stored in the storage unit 51a, thereby eliminating the need to sequentially store the operation history of the one or more pattern changing operations in the storage unit 51a at every pattern changing operation detected by any one of the six rotation sensors 46. After the processing to derive one or more pattern changing operations required to change from the current pattern to the reference pattern by using either the general solution or the unique solution, the thus derived one or more pattern changing operations are performed sequentially by the motor 6 and, upon completion of all the pattern changing operations, the automatic return control is finished. By the way, when the pattern changing operation is performed by any one of the six motors 6 so that the drive side rotating unit 9B as the target of the pattern changing operation in the puzzle body 1a having been changed into the block shape is turned to a target position where the above drive side rotating unit 9B is advanced by quarter or approximately quarter turns in a first direction being one direction around any one of the rotation axes X, Y, Z, the drive side rotating unit 9B may sometimes bring about a state (a failure state) in which the drive side rotating unit 9B is caught by the pieces 3 of the other rotating unit 9 so that the turning operation of the drive side rotating unit 9B is regulated or stopped at a predetermined timing such as the point of time when starting the above turning operation.
This failure state is capable of being identified by a result of no detection of any turning motion of the drive side rotating unit9B by the rotation sensor46, even though the electric signal for turning the drive side rotating unit9B as the target of the pattern changing operation is outputted to the motor6-side. The control unit51with the failure state detected by the rotation sensor46finishes the pattern changing operation with safety by the manner in which the drive side rotating unit9B as the target of the pattern changing operation is returned in the first direction to the target position after being turned by the motor6by a predetermined amount in a second direction being a direction opposite to the first direction so that the rotating unit9causing the failure state is turned slightly in a direction opposite to a failure direction (see below) so as to be returned to its original position free from causing any failure state. Meanwhile, the control unit51with the failure state detected by the rotation sensor46may finish the pattern changing operation with safety also by the manner in which the drive side rotating unit9B as the target of the pattern changing operation is turned to the target position by the motor6by a predetermined amount (¾ or approximately ¾ turns, for instance) in the second direction being the direction opposite to the first direction. By the way, although there is no occurrence of any failure state if the rotating unit9rotated centering around any one of the rotation axes X, Y, Z perpendicular to that of the drive side rotating unit9B as the target of the pattern changing operation and having a part constituted of the same piece3as that of the drive side rotating unit9B is turned to its original position, it is considered that displacement of the above rotating unit9by a predetermined amount or more in the failure direction being a predetermined direction causes the failure state. For that reason, it is effective to perform a preliminary turning operation by which the above rotating unit9possibly causing the failure state of the drive side rotating unit9B as the target of the pattern changing operation is turned slightly (by about less than 1 degree, for instance) in the direction opposite to the direction in which the failure state may occur, wherein when the above rotating unit9becomes the target of the pattern changing operation, it is allowable also to perform the preliminary turning operation at the same time as the pattern changing operation in order to prevent the failure state from occurring in the next or later pattern changing operations of the other rotating units9. The preliminary turning operation results in no need of high accuracy as the accuracy required for the pattern changing operation by the motor6, thus providing a great advantage. It is noted that the controller may be also constituted so as to avoid the failure state by firstly turning the drive side rotating unit9B as the target of the pattern changing operation in the second direction by the motor6without turning it in the first direction being the original direction, and thereafter following the same steps as those of the preliminary turning operation, when the rotation sensor46detects the rotating unit9possibly causing the failure state of the drive side rotating unit9B as the target of the pattern changing operation. 
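By way of illustration and not limitation, the general solution described above (undoing the stored operation history in reverse) and the recovery from the failure state may be sketched together in Python as follows; the (axis, unit, quarter_turns) encoding, the helpers sensor_index, command_turn, apply_operation and detected_rotation, the sign convention for the first and second directions, and the back-off angle are all assumptions made for this sketch, and the well-known unique solution is not sketched.

    def derive_return_operations(history):
        # General solution: replay the stored pattern changing operations in reverse
        # order, reversing the direction of rotation of each one.
        return [(axis, unit, -quarter_turns)
                for _, (axis, unit, quarter_turns) in reversed(history)]

    def turn_with_recovery(ctrl, axis, unit, quarter_turns, back_off_deg=5):
        # Turn one drive side rotating unit; if the rotation sensor reports no motion
        # (the failure state), back off slightly in the opposite (second) direction
        # and then return in the first direction to the target position.
        sensor = ctrl.rotation_sensors[sensor_index(axis, unit)]        # placeholder lookup
        target_deg = 90 * quarter_turns
        command_turn(ctrl, axis, unit, degrees=target_deg)              # placeholder motor command
        if sensor.detected_rotation():
            return True
        sign = 1 if target_deg > 0 else -1
        command_turn(ctrl, axis, unit, degrees=-sign * back_off_deg)               # second direction
        command_turn(ctrl, axis, unit, degrees=target_deg + sign * back_off_deg)   # back to the target
        # Alternative described above: reach the target position by approximately
        # three-quarter turns in the second direction instead.
        return sensor.detected_rotation()

    def automatic_return_control(ctrl):
        # Step S105: drive the motors 6 through the derived operations until the
        # reference pattern is restored.
        for axis, unit, quarter_turns in derive_return_operations(ctrl.storage["operation_history"]):
            turn_with_recovery(ctrl, axis, unit, quarter_turns)
            ctrl.storage["current_pattern"] = apply_operation(
                ctrl.storage["current_pattern"], (axis, unit, quarter_turns))
        ctrl.storage["operation_history"].clear()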
According to the cubic puzzle 1 having the above constitution, the automatic return control can be executed at an appropriate timing by intentionally producing an appropriately-set start state, even for a Rubik's cube, which has been considered difficult to provide with an operation tool such as a switch at its outside face side, thereby enabling a player to enjoy playing the cubic puzzle more than before. Further, for the cubic puzzle 1 made up of the Rubik's cube in which the failure state is likely to occur, the occurrence of the failure state is efficiently preventable by means of intentionally generating displacement and/or changing the turning operation between the first direction and the second direction. It is noted that the start state is not limited to that in the above embodiment. For instance, the start state may also include, as a part thereof, a state in which the rotation sensor 46 detects an operation by which any predetermined one of the rotating units 9 of the puzzle body 1a having been changed into the block shape is rotated ranging from once to several times and/or rotated by the same or approximately same amount as a predetermined amount in a direction opposite to one direction after being rotated by the predetermined amount in the one direction so as to bring about no change of the display pattern, instead of the state in which the acceleration sensor 52 detects the stable stop of the puzzle body 1a. In this case, the six rotation sensors 46 constitute a part of the start state detecting means, thereby allowing the acceleration sensor 52 to be omitted. Alternatively, it is also allowable to define a plurality of types of turning or rotating operations of the rotating unit 9, as described above, which have no need of changing the display pattern, thereby making the control unit 51 execute different control for every type of the turning or rotating operations. For instance, one type of turning or rotating operation is applied to execution of the automatic return control, while the other type of turning or rotating operation is made applicable to execution of control such that displaying of predetermined display contents (such as a sun-flag image, for instance) on one outside face of the block-shaped puzzle body is performed once or repeatedly over a plurality of times. Alternatively, it is also allowable to detect by the acceleration sensor 52 that the puzzle body 1a having been changed into the block shape with the display pattern other than the reference pattern displayed thereon keeps a predetermined attitude (a raised attitude of the puzzle body 1a whose one corner piece 3A is located at a lower end side, with a sharp end side of the one corner piece 3A held by hand) for a certain period of time since a change of the attitude of the puzzle body to the predetermined attitude, thereby allowing the predetermined attitude of the puzzle body to be specified as a part of the start state, instead of the state in which the acceleration sensor 52 detects the stable stop of the puzzle body 1a. By the way, a gyro sensor 53 for detecting the attitude of the puzzle body 1a may be also installed in a state of being connected to the input side of the control unit 51 as shown by a virtual line in FIG. 7. The gyro sensor 53 may also constitute a part of the attitude detecting means to thereby increase the accuracy of detection of the attitudes and/or gestures of the puzzle body 1a.
For instance, various actions such as shaking the puzzle body with hand and/or moving the puzzle body so as to draw a predetermined trajectory are capable of being set as the gestures, wherein it is possible of course to set these gestures as the part or whole of the start state. Further, a pattern identifying means54may be also installed at the input side of the control unit51as shown by a virtual line inFIG.7. In the above embodiment, identification of the display pattern is performed at every pattern changing operation detected by the six rotation sensors46, whereas a reading means such as a barcode reader, a camera and an IC tag reader for reading identification information such as one-dimensional or two-dimensional barcodes, color patterns and IC tags may also constitute the pattern identifying means54by imparting the identification information to the individual pieces3, while installing the reading means at the core member2-side. Furthermore, an information terminal56may be also installed as shown by a virtual line inFIG.7. The information terminal56has a control unit56afor executing various processing, a storage unit56bfor storing various information, a touch panel type liquid crystal display56cconfiguring an input/output interface, a camera56dand a radio communication means56e. Meanwhile, a radio communication means57enabling radio communication with the information terminal56is connected to the control unit51in an inputtable/outputtable manner. With the above constitution, the information terminal56and the control unit51constitute at least a part of the controller (the whole of the controller in this embodiment), thereby enabling the processing heretofore executed by the control unit51to be partly or wholly executed by the high-performance control unit56aof the information terminal56. Furthermore, the information terminal56and the radio communication means57may also constitute the pattern identifying means. More specifically, the current pattern being the display pattern at that point of time regarding the block-shaped puzzle body1amay be also identified based on a plurality of external appearance image data of the block-shaped puzzle body1aphotographed by the camera56dor the like of the information terminal56. By the way, for identification of the display pattern of the puzzle body1ahaving been changed into the block shape, the image data of the puzzle body1aphotographed from one angle thereof fails to implement such identification, resulting in the necessity of image data of the puzzle body1aphotographed from different angles thereof. It is noted that when the controller is in a situation where sight of the current pattern is lost due to battery run-out and so on, it is possible also to deal with such situation without use of the plurality of image data of the puzzle body1aphotographed from a plurality of angles thereof and/or use of the pattern identifying means54. For instance, the controller may be made to recognize returning of the current pattern to the reference pattern by an informing means being a predetermined informing means, after the change of the display pattern to the reference pattern is made by a manual operation. This informing means is capable of providing various ways of setting such as to provide an informing operation as a predetermined operation detectable by the acceleration sensor52or alternatively, an informing operation performed by radio communication from the information terminal56. 
Further, the radio communication between the information terminal56and the control unit51enables the puzzle body1ato be remotely controlled, and besides, the rotational driving of the drive side rotating unit9B enables the puzzle body1ato be moved in a linearly moving or turning manner to an intended position on the predetermined place such as the horizontal plane. Next will be described the features different from those in the foregoing embodiment in relation to a different embodiment of the present invention with reference toFIGS.7to9. In this embodiment, the cubic puzzle1is capable of being provided with one or more modes in addition to the automatic return control execution mode. FIG.9is a flowchart showing a procedure of processing executed at every mode switching. The controller temporarily stops the main processing shown inFIG.8when mode switching is detected, followed by starting the processing shown inFIG.9to advance the processing to a step S201. By the way, the controller restarts the processing shown inFIG.8upon completion of a series of processing shown inFIG.9. By the way, mode switching is set as an operation free from overlapping with the operation regarding the start state. For instance, when making it one of the automatic return control conditions that the puzzle body1ahaving been changed into the block shape is placed on a placement surface such as the horizontal plane such that a first outside face being one of the outside faces of the puzzle body1ais in contact with the placement surface, mode selection is performed depending on which of second, third and fourth outside faces being the three outside faces other than the surface set as the first outside face of the puzzle body1awould be faced to the placement surface side when placing the puzzle body1aon the placement surface, wherein each of such mode selecting operations corresponds to the mode switching. For instance, a teaching mode (see below) is regarded as being selected when the puzzle body1awith the second outside face thereof faced to the placement surface side is placed on the placement surface, a challenge mode (see below) is regarded as being selected when the puzzle body1awith the third outside face thereof faced to the placement surface side is placed on the placement surface, and a scramble mode (see below) is regarded as being selected when the puzzle body1awith the fourth outside face thereof faced to the placement surface side is placed on the placement surface. In the step S201, when the mode selected by the mode switching is the teaching mode, the processing proceeds to a step S202, when the mode selected by the mode switching is the challenge mode, the processing proceeds to a step S203, and when the mode selected by the mode switching is the scramble mode, the processing proceeds to a step S204. In the step S202, the teaching mode is executed, and thereafter, the processing shown inFIG.9is also finished upon completion of the execution of the teaching mode. During the execution of the teaching mode, the controller executes updating of the display pattern to be stored in the storage unit51aat every pattern changing operation and storing of the operation history of the pattern changing operation in the storage unit51ain order of time series while enabling identification of the current pattern being the display pattern at that point of time, like the processing shown inFIG.8. 
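By way of illustration and not limitation, the dispatch of steps S201 to S204 described above can be sketched in Python as follows; the face labels and the mode handler names are assumptions, with the handlers themselves sketched after the corresponding mode descriptions below.

    def on_mode_switching(ctrl, placed_face):
        # Steps S201-S204: the outside face resting on the placement surface selects the mode.
        if placed_face == "second":
            run_teaching_mode(ctrl)       # step S202
        elif placed_face == "third":
            run_challenge_mode(ctrl)      # step S203
        elif placed_face == "fourth":
            run_scramble_mode(ctrl)       # step S204
        # Placing the first outside face down remains associated with the automatic
        # return control condition described earlier.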
When continuation of a state of no detection of any player's manual pattern changing operation by the rotation sensor46occurs for a fixed time or more, for instance, in the middle of the execution of the teaching mode, the controller executes the processing of derivation of one or more pattern changing operations required to change to the reference pattern according to the above solution to turn, by a predetermined amount of not more than quarter turns, the rotating unit9to be moved next, and followed by giving to the player such suggestion that the thus turned rotating unit9is to be operated. For more details about the teaching mode, the teaching mode makes it possible to learn how to solve the cubic puzzle1in stages. For instance, learning of a procedure of changing the display pattern to the reference pattern is made possible in stages so as to follow an operation sequence being first one of the drive side rotating units9B, then the non-drive-side rotating unit9A and finally the remaining drive-side rotating unit9B, while the control unit51is constituted so as to make repetitive learning possible in each stage by performing, by the motor6, the change of the display pattern of the puzzle body1ato a teaching pattern being a display pattern made to correspond to a selected learning stage. Further, downloading of the teaching pattern from the internet and so on is also made possible by the information terminal56. Furthermore, the information terminal56may be also made to display information of the pattern changing operation to be performed next, because the suggestion of the next pattern changing operation for the change from the current pattern is given to the player. Then, the controller finishes the execution of the teaching mode upon completion of the change of the display pattern to the reference pattern. Further, the control unit may also bring back the process of reaching the reference pattern by appropriately driving the motor6at an arbitrary timing such as a finish time of the teaching mode, because the operation history of one or more pattern changing operations required to reach the reference pattern is stored in the storage unit51asequentially in order of time series by the above processing. In the step S203, the challenge mode is executed, and thereafter, the processing shown inFIG.9is also finished upon completion of the execution of the challenge mode. During the execution of the challenge mode, the controller executes updating of the display pattern to be stored in the storage unit51aat every pattern changing operation and storing of the operation history of the pattern changing operation in the storage unit51ain order of time series while enabling identification of the current pattern being the display pattern at that point of time, like the processing shown inFIG.8. The controller is constituted so as to make a timer of the controller measure a time required to change to the reference pattern by one or more pattern changing operations manually made by the player in the middle of the execution of the challenge mode. Upon completion of the change of the display pattern to the reference pattern, the controller finishes the execution of the challenge mode after informing the player about a play time being the time required until then through a speaker58provided on the output side of the control unit51so as to be arranged at the core member2-side. 
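Returning to the teaching mode, the timing rule for the suggestion described above can be sketched as follows; the ten-second figure and the function name are assumptions, and the derivation of the next operation toward the reference pattern is not reproduced here.

    import time

    IDLE_LIMIT_S = 10.0   # assumed "fixed time" of no manual pattern changing operation

    def hint_due(last_operation_time, now=None):
        """Return True when no manual operation has been detected for the fixed time,
        i.e. when the controller should slightly turn (by not more than a quarter
        turn) the rotating unit to be moved next as a suggestion to the player."""
        now = time.monotonic() if now is None else now
        return (now - last_operation_time) >= IDLE_LIMIT_S

    # Example: the last manual turn was 12 s ago, so a suggestion is due.
    print(hint_due(last_operation_time=0.0, now=12.0))   # True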
By the way, the controller may also bring back the process of reaching the reference pattern by appropriately driving the motor6at an arbitrary timing such as a finish time of the challenge mode, because the operation history of one or more pattern changing operations required to reach the reference pattern is stored in the storage unit51asequentially in order of time series, like the processing in the teaching mode. Further, setting of a time limit which imposes limitation on the play time is also possible. In this case, the controller informs the player about the time limit through the speaker58at the time when switching to the challenge mode is made, and further gives to the player the information about the residual time through the speaker58and/or by screen display of the information terminal56or the like during playing. When the change of the display pattern to the reference pattern could not be completed within the time limit, the controller finishes the execution of the challenge mode after informing the player by the same means as the above that the challenge mode results in failure. It is noted that during the execution of the challenge mode, a lapse of the time limit may be also informed to the player by vibrating the puzzle body1awith a vibration motor (a vibrating means)59connected to the output side of the control unit51so as to be arranged at the core member2-side. Further, this vibration motor59also enables various conditions other than the lapse of the time limit to be reported by varying an interval and/or length of vibrations. Further, the lapse of the time limit may be also informed to the player by the manner in which the controller executes one or more randomly or arbitrarily selected pattern changing operations to change the display pattern. Furthermore, whenever a predetermined time has elapsed in the middle of playing by the player after switching to the challenge mode, the controller may also execute one or more randomly or arbitrarily selected pattern changing operations to interfere with the change to the reference pattern in order to increase a difficulty level, thereby allowing the entertainment property to be improved. By the way, a change of difficulty level is easily made by increasing/decreasing the number of times of execution of the arbitrary pattern changing operations to be executed whenever the predetermined time has elapsed. In the step S204, the scramble mode is executed, and thereafter, the processing shown inFIG.9is also finished upon completion of the execution of the scramble mode. During the execution of the scramble mode, the controller executes updating of the display pattern to be stored in the storage unit51aat every pattern changing operation and storing of the operation history of the pattern changing operation in the storage unit51ain order of time series, while enabling identification of the current pattern being the display pattern at that point of time, like the processing shown inFIG.8. With start of the execution of the scramble mode, the controller executes one or more randomly or arbitrarily selected pattern changing operations to change the display pattern, and thereafter finishes the execution of the scramble mode upon completion of the change of the display pattern.
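As an illustrative sketch only, the time-limit and periodic-interference bookkeeping of the challenge mode might look as follows in Python; the face labels, intervals and action strings are assumptions rather than features of the disclosed controller.

    import random

    FACES = ["U", "D", "L", "R", "F", "B"]   # assumed labels for the six rotating units

    def challenge_events(elapsed_s, time_limit_s, interfere_every_s):
        """Return the actions due at the given elapsed play time: announce failure when
        the time limit has lapsed, otherwise occasionally insert an interfering turn
        to raise the difficulty level."""
        actions = []
        if elapsed_s >= time_limit_s:
            actions.append("vibrate and announce failure")
        elif interfere_every_s and elapsed_s > 0 and int(elapsed_s) % interfere_every_s == 0:
            actions.append(f"interfering quarter turn of unit {random.choice(FACES)}")
        return actions

    print(challenge_events(elapsed_s=30, time_limit_s=120, interfere_every_s=30))
    print(challenge_events(elapsed_s=120, time_limit_s=120, interfere_every_s=30))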
It is noted that the acceleration sensor52or both the acceleration sensor52and the gyro sensor53are capable of detecting such player's actions as shaking the puzzle body1awith hand and/or player's gestures of drawing characters such as L-letter in the air and so on, wherein these gestures may be also individually assigned to the operations of switching to the teaching mode, the challenge mode and the scramble mode, or alternatively, specified as the part of the start state. Next will be described the features different from those in the foregoing embodiment in relation to a further different embodiment of the present invention with reference toFIGS.10to12. FIG.10is a perspective view showing a transmission mechanism according to a further different embodiment of the present invention,FIG.11is a view as viewed from a direction shown by arrow A inFIG.10, andFIG.12is a perspective view showing a roller and a roller holding member, each of which constitutes a part of a clutch mechanism. A transmission mechanism32shown in these FIGURES is provided with a clutch mechanism constituted so as to permit rotational driving force to be transmitted from the motor6to the rotating units9, while preventing an operation amount obtained when manually rotating the rotating unit (the center piece3C) by the player from being transmitted to the motor6. This clutch mechanism provides advantages of being capable of suppressing the wear or damages of the transmission mechanism32, in addition to enhancement of operability resulting from a reduction in operation load at the time of manually rotating or turning the rotating units9by the player. The clutch mechanism is installed between an outer peripheral face of the drive shaft19and an inner peripheral face of the gear44. When the drive shaft19is rotated, the clutch mechanism causes no transmission of the rotating operation force thereof to the gear44so that the gear44is held in a stopped state, whereas when the gear44is rotated, the clutch mechanism permits the rotational driving force thereof to be transmitted to the drive shaft19so that the drive shaft19is rotationally driven. Next will be described the constitution of the clutch mechanism. The drive shaft19has an outer peripheral face formed in a circular shape centered on the shaft center of the drive shaft19as viewed in the axial direction. The gear44has an inner peripheral face formed in a regular polygonal shape centered on the shaft center of the drive shaft19as viewed in the axial direction of the drive shaft19. Because of the inner peripheral face shape of the gear44, the inner peripheral face of the gear44is constituted of a plurality of contact faces61abeing flat faces and a plurality of corner parts61beach formed at a portion where the mutually adjacent contact faces61a,61aare in contact with each other. Between the inner peripheral face of the gear44and the outer peripheral face of the drive shaft19, there are provided a plurality of rollers62rotatably arranged side by side in an annular shape circumferentially around the drive shaft19. The number of rollers62is set to be the same as the number (eight, in the illustrated embodiment) of the corner parts61b(the contact faces61a) of the inner peripheral face of the gear44. Each roller holding part65of a roller holding member63for holding the rollers62is arranged between the mutually adjacent rollers62,62.
The roller holding member63is formed in a circular ring-like shape so as to conform to the outer peripheral face of the drive shaft19and the inner peripheral face of the gear44as viewed in the axial direction of the drive shaft19, and has recess parts63ainto which the rollers62are respectively received in an idling state, the recess parts being spaced at predetermined intervals. The roller holding member63is configured such that each of mutually adjacent portions across each recess part63aforms each holding part65. In other words, the recess parts63aand the holding parts65being respectively equal in number to the rollers62are annularly arranged in turns at predetermined intervals in a space between the outer peripheral face of the drive shaft19and the inner peripheral face of the gear44. Each holding part65has, at both circumferential end sides thereof, inclined faces65a,65aeach inclined such that a length of the inner peripheral face of the each holding part65is shorter than a length of the outer peripheral face thereof. Each recess part63aforming a space between the mutually adjacent holding parts65,65gets gradually narrow in width toward the side away from the shaft center of the drive shaft19due to the inclined faces65a,65a. Each roller62partly protrudes toward the inner peripheral face of the gear44more than the outer peripheral face of each holding part65. When each roller62is located at the corner part61b-side in the inner peripheral face of the gear44, a slight gap is made between the each roller62and the inner peripheral face of the gear44, resulting in a state where idling of the each roller62is made possible. Meanwhile, when each roller62is located at the contact face61a-side in the inner peripheral face of the gear44, contact of the each roller62with the inner peripheral face of the gear44is made, resulting in a state where rotation of the each roller is regulated. The clutch mechanism is constituted of the outer peripheral face of the drive shaft19, the inner peripheral face of the gear44, the plurality of rollers62and one roller holding member63. When the drive shaft19is manually operated for rotation by the player, the outer peripheral face of the drive shaft19provides an idling operation to each roller62while making sliding on the inner peripheral face of the roller holding member63, resulting in no transmission of any rotational power to the gear44-side. Meanwhile, when the rotational power transmitted from the motor6is applied to the gear44to rotate the gear44, each contact face61aon the inner peripheral face of the gear44is pressed toward the outer peripheral face of the drive shaft19in a state where the each contact face makes contact with each roller62such that the rotation of the each roller is regulated, thereby allowing the rotational power of the gear44to be transmitted to the drive shaft19to rotationally drive the drive shaft19. Next will be described the features different from those in the foregoing embodiment in relation to a further different embodiment of the present invention with reference toFIG.13. FIG.13is a perspective view showing a cubic puzzle according to a further different embodiment of the present invention. In the foregoing embodiment, the connector for connection with the external power supply is installed at the center piece3C-side, whereas in this embodiment, there is provided the connector installed at the inside of one edge piece3B (more specifically, at a portion near the one edge piece3B in the outside face of the core member2).
For more details, the body part27of the one edge piece3B has a fixing part64and an angular cover member66having two facets8and swingably supported to the fixing part. When the cover member66is opened in a direction of separating from the fixing member64to expose the fixing member64to the outside, an external access to a connector (not shown) provided at the core member2-side is made possible through an access hole64aformed in the fixing member64, thereby enabling the supply of power from the external power supply. Meanwhile, when the cover member66is closed toward the fixing part64-side by a swinging motion, a usual condition of functioning as the edge piece3B is obtained. Next will be described the features different from those in the foregoing embodiment in relation to a further different embodiment of the present invention with reference toFIGS.7and14. FIG.14is a perspective view showing a center piece according to a further different embodiment of the present invention. The engagement part11of the center piece3C has an exposure hole11athrough which a core member2-side portion is exposed to the outside. The exposure hole11ais in the form of a round hole formed in a portion close to each of four corners of the body part12and is located at a position free from being covered with the body parts24,27of the pieces3adjacent to this center piece3C. Further, a LED60(seeFIG.7) being a light source installed at the core member2-side may be also exposed to the outside through each exposure hole11a. This enables various conditions to be informed to the player and/or an owner of the cubic puzzle with the presence/absence of light emission from the LED60and/or the change of color of emitted light. For instance, it is possible to inform the player of the completion or not of charging when performing a charging operation, with the presence/absence of light emission from the light source, the change of a light-emitting pattern and/or the change of the color of emitted light. Further, an external access to the power supply connector at the core member2-side is also made possible by utilizing each exposure hole11a. In other words, it is possible also to charge the cubic puzzle1by utilization of each exposure hole11a. Next will be described the features different from those in the foregoing embodiment in relation to a further different embodiment of the present invention with reference toFIG.15. FIG.15is a perspective view showing the arrangement configuration of batteries. In the foregoing embodiment, the single battery is installed as an internal power supply at a side opposite to the control substrate47in the outside face side of the core member2, whereas in this embodiment, there are provided two rechargeable batteries67,67respectively arranged at symmetrical positions with respect to the core member2. Such arrangement of the batteries67is obtained by making use of a space which is made resulting from inclining the cutout face6bof the motor6toward the outside face. More specifically, each battery67is arranged in a well fitted state in a posture inclined from the cutout face6bof the motor6toward the drive shaft19which is at the side opposite to and coaxial with the drive shaft19to which the rotational driving force is transmitted from the motor6. Next will be described the features different from those in the foregoing embodiment in relation to a further different embodiment of the present invention with reference toFIG.16. 
FIG.16is a perspective view showing the constitution of a further different embodiment of the present invention. The inventor of the present application has found out that the cubic puzzle1made up of the type of 3×3×3 Rubik's cube is capable of changing from the arbitrary display pattern to the reference pattern without rotating any one of the drive side rotating units9B and hence, this embodiment relates to a constitution obtained by making use of the above findings. More specifically, both the unitizing part28and the transmission mechanism32other than the motor6and the drive shaft19respectively provided for the one drive side rotating unit9B are omitted to reduce the number of components. The drive shaft19is rotatably supported to the core member2-side by the support shaft34and is capable of being manually rotated or turned by the player, wherein the manual rotation or turning motion thereof is detectable by the rotation sensor46. Further, a disk-shaped magnet68may be also fixedly mounted through the support shaft34to a space which is made resulting from omitting the transmission mechanism and the unitizing part. Meanwhile, there is provided a magnetically floating unit69separately from the puzzle body1a. The magnetically floating unit69has a single circular ring-shaped permanent magnet70amounted on a substrate71or a plurality of permanent magnets70aannularly arranged side by side (the former is shown in the illustrated embodiment) and one or more (four, in the illustrated embodiment) electromagnets70blocated at positions close to the center of a ring-shaped portion of the permanent magnet70a, and is arranged just below the puzzle body1a. Then, floating force is applied to the puzzle body1aby a repulsion action between a magnetic field constantly generated from the permanent magnet70aand a magnetic field constantly generated from the magnet68, while control of a plane position of the puzzle body1ais performed by the magnetic field generated from the electromagnet70b. By so doing, it becomes possible to float the puzzle body1ajust above the magnetically floating unit69. If each rotating unit9of the cubic puzzle1is rotationally driven by the motor6in such floating state of the puzzle body, an improved entertainment property is obtained. It is noted that it is possible of course to provide the magnetically floating unit69in a state where all the six drive side rotating units9B are made drivable for rotation, and in this case, it is possible to deal with by miniaturizing each component, or alternatively, by enlarging the cubic puzzle1. Next will be described the features different from the foregoing embodiment in relation to a further different embodiment of the present invention with reference toFIG.17. FIG.17is a perspective view showing the constitution of a further different embodiment of the present invention. In this embodiment, the supply of power to the illustrated cubic puzzle1is performed by a wireless charging device72. The wireless charging device72has a power transmission coil73driven by an external power supply, a power receiving coil74provided at the core member2-side, and a rectification circuit for rectifying AC voltage generated by electromagnetic induction caused by the power transmission coil73and the power receiving coil74into DC voltage for charging the battery (not shown). Then, the AC voltage generated at the power receiving coil74-side by electromagnetic induction is converted into the DC voltage by the rectification circuit so that charging is made on the battery.
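Returning to the magnetically floating unit69, the control of the plane position of the puzzle body1aby the electromagnets70bcould, purely as a speculative sketch, take the form of a proportional correction of the coil currents; how the lateral offset of the body is sensed is not described in this embodiment, and the gain, current limit and coil layout below are assumptions.

    def electromagnet_currents(offset_x_mm, offset_y_mm, gain=0.1, i_max=1.5):
        """Proportional restoring currents (A) for two opposed coil pairs that push
        the floating body back toward the centre of the ring. Entirely illustrative."""
        clamp = lambda i: max(-i_max, min(i_max, i))
        i_x = clamp(-gain * offset_x_mm)   # +x / -x coil pair driven differentially
        i_y = clamp(-gain * offset_y_mm)   # +y / -y coil pair driven differentially
        return {"+x": i_x, "-x": -i_x, "+y": i_y, "-y": -i_y}

    # Example: body displaced 3 mm toward +x and 1.5 mm toward -y
    print(electromagnet_currents(3.0, -1.5))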
EXPLANATION OF REFERENCE NUMERALS
1: Cubic puzzle
1a: Puzzle body
2: Core member
4: Support mechanism
3: Piece
3A: Corner piece
3B: Edge piece
3C: Center piece
6: Motor (Actuator)
8: Facet
9: Rotating unit
46: Rotation sensor
51: Control unit
51a: Storage unit
52: Acceleration sensor (Acceleration detecting means, attitude detecting means)
54: Pattern identifying means
56: Information terminal
57: Radio communication means
X: Rotation axis
Y: Rotation axis
Z: Rotation axis
75,921
11857886
DETAILED DESCRIPTION The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well known structures and components are shown in block diagram form in order to avoid obscuring such concepts. While aspects and embodiments are described in this application by illustration to some examples, those skilled in the art will understand that additional implementations and use cases may come about in many different arrangements and scenarios. Innovations described herein may be implemented across many differing platform types, devices, systems, shapes, sizes, and/or packaging arrangements. FIG.1illustrates a perspective view of an example turntable load station100according to an aspect of the present disclosure. The station100may be used for loading guests/passengers onto (and unloading guests/passengers from) a theme park attraction or ride. In an aspect, the station100may be used for other applications such as tracked or trackless conveyance systems, for example. The station100may include a turntable102(e.g., movable turntable structure) configured to rotate about an axis120. In an aspect, the guests/passengers may stand on top of the turntable102to be conveyed into/out of the attraction/ride as the turntable102rotates about the axis120. In an aspect, wheels (e.g., passive load wheels) may be mounted to a central underside portion of the turntable102to help rotate the turntable102about the axis120. The station100may further include bogie assemblies104(e.g., 8 or more bogie assemblies) mounted to an outer peripheral underside portion of the turntable102. Each bogie assembly104serves as a chassis or framework for carrying a wheel106. The station100may also include a structural pier (or ring)110(e.g., concrete pier/ring) formed around the axis120and a track108that sits on a top surface of the structural pier110. Each of the wheels106are configured to roll along the track108. Accordingly, when the wheels106are driven to roll along the track108(e.g., via a drive motor), each bogie assembly104carrying a respective wheel106is caused to move along the track108and rotate about the axis120, which in turn causes the turntable102coupled to the bogie assemblies104to also rotate about the axis120. When the wheels106are caused to stop rolling along the track108(e.g., via wheel braking or disabling a drive motor), each bogie assembly104carrying a respective wheel106is caused to stop rotating about the axis120, which in turn causes the turntable102to also stop rotating about the axis120. FIG.2is a side view of a bogie assembly104mounted in the turntable load station100according to an aspect of the present disclosure.FIG.3is a side view of a bogie assembly104mounted in the turntable load station100according to another aspect of the present disclosure. Referring toFIGS.2and3, the bogie assembly104is mounted to the underside of the turntable102and above the track108. The bogie assembly104may include mechanical fittings and/or support structures configured to adjust a position of the bogie assembly104/wheel106with respect to the track108(e.g., toe adjustment). 
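As a rough numerical illustration of the relationship described above between wheel roll and rotation of the turntable102, the following sketch relates wheel speed to the angular speed of the turntable; the dimensions and speeds are arbitrary assumptions, not values from the disclosure.

    import math

    def turntable_angular_speed(wheel_rpm, wheel_diameter_m, track_radius_m):
        """Angular speed of the turntable (rad/s) when the wheels roll without slip
        along a circular track of the given radius."""
        wheel_surface_speed = math.pi * wheel_diameter_m * wheel_rpm / 60.0   # m/s along the track
        return wheel_surface_speed / track_radius_m

    # Example with assumed dimensions: 0.5 m wheels at 60 RPM on a 6 m radius track
    omega = turntable_angular_speed(wheel_rpm=60, wheel_diameter_m=0.5, track_radius_m=6.0)
    print(round(omega, 3), "rad/s ->", round(2 * math.pi / omega, 1), "s per revolution")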
The bogie assembly104may further include a drive assembly202operationally coupled to (or including) the wheel106carried by the bogie assembly104. The drive assembly202may be a combination of various devices/systems configured to produce a torque that drives the wheel106to roll. For example, the drive assembly202may be a combination of at least a motor, a transmission (gearbox), and an axle coupled to the wheel106. The combination may further include a brake system configured to decelerate and/or stop wheel roll. In an aspect, the drive assembly202is electrically powered. For example, a slip ring connected through a central portion of the turntable102may provide electric power to the drive assembly202. The slip ring may also provide an Ethernet connection through which the drive assembly202may receive input signals from and/or send output signals to a control system. When a respective wheel106of the station100is driven by the drive assembly202to roll along the track108, the bogie assembly104carrying the respective wheel106is caused to move along the track108and rotate about the axis120. As such, a combination of rotating bogie assemblies104causes the turntable102coupled to the rotating bogie assemblies104to also rotate about the axis120. When the respective wheel106is caused to stop rolling along the track108via the drive assembly202, the bogie assembly104carrying the respective wheel106is caused to stop along the track108and cease rotation about the axis120. Thus, the combination of stopped bogie assemblies104causes the turntable102coupled to the stopped bogie assemblies104to also cease rotation about the axis120. The axis120(seeFIG.1) is located at a central portion of the turntable102. As shown inFIGS.2and3, the axis120is located toward a direction A with respect to the bogie assembly104. Accordingly, in the aspect shown inFIG.2, the bogie assembly104may be mounted under the turntable102and above the track108such that a rear portion204of the drive assembly202is positioned in closer proximity to the axis120(located toward the direction A) than the wheel106. As such, the wheel106is positioned in closer proximity to an outer peripheral surface B of the turntable102than the rear portion204of the drive assembly202. In another aspect shown inFIG.3, the bogie assembly104may be mounted under the turntable102and above the track108such that the wheel106is positioned in closer proximity to the axis120(located toward the direction A) than the rear portion204of the drive assembly202. As such, the rear portion204of the drive assembly202is positioned in closer proximity to the outer peripheral surface B of the turntable102than the wheel106. In an aspect, the drive assembly202may fail to perform during the course of its service life, e.g., the motor, the gearbox, the axle, and/or the brake system of the drive assembly202may become disabled. Accordingly, when failure occurs, a station technician may need to perform various procedures that contribute to a prolonged downtime of the station100and/or subject the technician to potentially dangerous conditions. For example, to replace a non-operational drive assembly, the station technician may need to sequentially secure the drive assembly to a support structure (e.g., using a beam clamp), separate the drive assembly from the wheel/bogie assembly, suspend the drive assembly, and lower the non-operational drive assembly (e.g., with a jack) to the floor. The station technician may then repeat the steps in reverse to install a working replacement. 
Given the weight of the drive assembly (e.g., approximately 2600 pounds) and the height at which the drive assembly is mounted onto the turntable load station (e.g., approximately 12 feet above the floor or higher), performing all of the necessary steps to replace the disabled drive assembly subjects the station technician to numerous ergonomic and safety risks that are potentially harmful. For example, the station technician is subjected to the risk of having the weight of the drive assembly fall on his/her body and/or the risk of falling from a high elevation at which the drive assembly is mounted. Moreover, performing such sequential manual operations is time-consuming leading to a prolonged downtime of the station100. FIG.4illustrates a replacement system400for quickly uninstalling a non-working drive assembly of a turntable load station100and installing a working replacement in a safe manner according to an aspect of the present disclosure. In an aspect, the system400includes a self-contained drive assembly402coupled to a wheel106. Although not shown, the drive assembly402and the wheel106depicted inFIG.4are coupled to a bogie assembly104(as depicted inFIGS.2and3). In an aspect, the drive assembly402may be mounted in a turntable load station structure without being coupled to a wheel. For example, the drive assembly402may be tangentially mounted to an inner radial side surface of the structural pier110(e.g., if the bogie104is mounted such that a rear portion of the drive assembly is in closer proximity to a center axis120of the station100than the wheel106as depicted inFIG.2). Alternatively, the drive assembly402may be tangentially mounted to an outer radial side surface of the structural pier110(e.g., if the bogie104is mounted such that the wheel106is in closer proximity to the center axis120of the station100than the rear portion of the drive assembly as depicted inFIG.3). As such, the drive assembly402may be pre-mounted in the station structure as a spare drive assembly ready to replace a working drive assembly that may become disabled. The system400further includes a translating cartridge404configured to mount the drive assembly402to the inner/outer radial side surface of the structural pier110. The translating cartridge404may include an adapter plate406that is detachably coupled to an underside of the drive assembly402, and at least one upright bar412that extends from the adapter plate406and is detachably coupled to a side surface of the drive assembly402. The translating cartridge404may further include one or more linear slide rails408coupled to an underside of the adapter plate406and a tipping slide carriage410also coupled to the underside of the adapter plate406. The one or more slide rails408engage the slide carriage410such that when the drive assembly402is disconnected from the wheel106/bogie assembly104, the one or more slide rails408may pass through the slide carriage410in a longitudinal direction as the translating cartridge404(and the drive assembly402coupled to the translating cartridge404) is pulled away from the wheel106(e.g., pulled in a direction B shown inFIG.4). In an aspect, the wheel106may remain connected to the drive assembly402, and therefore, the drive assembly402and the wheel106may both be pulled in the direction B (or direction E shown inFIG.5) when the translating cartridge404is moved in such direction. 
As such, a jack may be used to increase space between the track108and the turntable102(e.g., increase by approximately 0.75 inches) to clear a size of the wheel106and allow the wheel106to be moved in the direction B (or direction E). In an aspect, the slide carriage410may be mounted to the inner/outer radial side surface of the structural pier110(either directly or indirectly via a supporting member or structure). FIG.5illustrates example positions/orientations and movement of the replacement system400depicted inFIG.4according to an aspect of the present disclosure. Here, the drive assembly402is coupled to the translating cartridge404. At a position502, the drive assembly402is at an upright functional orientation (e.g., horizontal or near-horizontal orientation). At the upright functional orientation, the drive assembly402may be coupled to the wheel106/bogie assembly104. If, for example, the drive assembly402becomes disabled (e.g., motor stops working), the drive assembly402may be decoupled from the wheel106/bogie assembly104. After decoupling, the drive assembly402may be pulled away from the wheel106(i.e., pulled in the direction B). That is, the drive assembly402may be pulled in the direction B such that the one or more slide rails408pass through the slide carriage410in the longitudinal direction until the drive assembly402reaches a position504. At the position504, the slide carriage410may facilitate a tilt of the drive assembly402in a downward and away direction (e.g., direction C) with respect to the wheel106. As shown inFIG.5, the drive assembly402moves from an upright functional orientation (e.g., horizontal or near-horizontal orientation) at the position504to an intermediate position506before ultimately reaching a serviceable orientation (e.g., vertical or near-vertical orientation) at a position508. When the drive assembly402is in the serviceable orientation at the position508, the translating cartridge404(carrying the drive assembly402) may be moved radially along the inner/outer radial side surface of the structural pier110, as will be described below. As described above, the drive assembly402coupled to the translating cartridge404is moved starting from the upright functional orientation (at the position502) and ending at the serviceable orientation (at the position508). However, in an aspect, the translating cartridge404alone may be moved from the upright functional orientation (position502) to the serviceable orientation (position508) without being coupled to the drive assembly402. In another aspect, the wheel106/bogie assembly104including the drive assembly402coupled to the translating cartridge404may be moved from the upright functional orientation (position502) to the serviceable orientation (position508). In a further aspect, the drive assembly402and/or the translating cartridge404may be moved starting from the serviceable orientation (position508) and ending at the upright functional orientation (position502). For example, the drive assembly402and/or the translating cartridge404may initially be in the serviceable orientation (position508). The slide carriage410may then facilitate a tilt of the drive assembly402and/or the translating cartridge404in an upward and forward direction (e.g., direction D) with respect to the wheel106. The drive assembly402and/or the translating cartridge404may move from a serviceable orientation at the position508to the intermediate position506before reaching an upright functional orientation at the position504. 
Notably, as the drive assembly402/translating cartridge404moves from the intermediate position506to the upright functional orientation at the position504, a center of mass of the drive assembly402/translating cartridge404moves past a pivoting position such that the drive assembly402/translating cartridge404can slide toward the wheel106. At the position504, the drive assembly402and/or the translating cartridge may be pushed toward the wheel106(i.e., pushed in the direction E) such that the one or more slide rails408pass through the slide carriage410in the longitudinal direction until the drive assembly402reaches the upright functional orientation at the position502. At the position502, the drive assembly402may be coupled to the wheel106/bogie assembly104. As described above, the drive assembly402and/or the translating cartridge404is moved starting from the serviceable orientation (at the position508) and ending at the upright functional orientation (at the position502). However, in an aspect, the wheel106/bogie assembly104including the drive assembly402coupled to the translating cartridge404may be moved from the serviceable orientation (position508) to the upright functional orientation (position502). FIG.6illustrates an example implementation600of the replacement system400according to an aspect of the present disclosure.FIG.7illustrates another example implementation700of the replacement system400according to an aspect of the present disclosure. In an aspect, a swap rail602may be joined onto a radial side surface of the structural pier110. As shown, the swap rail602runs along the outer radial side surface of the structural pier110. However, in other aspects, the swap rail602may run along the inner radial side surface of the structural pier110(e.g., if the bogie assembly104is mounted such that a rear portion of the drive assembly402is in closer proximity to a center axis120of the station100than the wheel106as depicted inFIG.2). The translating cartridge404may be movably mounted to the swap rail602. As such, the translating cartridge404may move along the radial side surface of the structural pier110by sliding along the swap rail602when the translating cartridge404is set in a serviceable orientation (e.g., position508inFIG.5). In an aspect, a second translating cartridge604(similar to the translating cartridge404described above) may also be mounted to the swap rail602. Thus, the second translating cartridge604may also move along the radial side surface of the structural pier110by sliding along the swap rail602when the second translating cartridge604is set in a serviceable orientation (e.g., position508inFIG.5). In an aspect, the second translating cartridge604may be pre-loaded with a working drive assembly606(hot spare or replacement unit). In another aspect, the second translating cartridge604may be pre-loaded with a second wheel/bogie assembly including the working driving assembly606. In an example operation, when a drive assembly402is disabled (e.g., becomes a non-working drive assembly) and can no longer roll the wheel106to help rotate the turntable102about the axis120, an empty translating cartridge404set in the serviceable orientation may be moved along the radial side surface of the structural pier110via the swap rail602to rest underneath the non-working drive assembly402. 
Alternatively, the non-working drive assembly402may be moved toward the empty translating cartridge404, e.g., by rotating the turntable102such that the wheel106/bogie assembly104positions the non-working drive assembly402above the empty translating cartridge404. For example, a control system of the turntable load station100may utilize encoders and absolute positioning to automatically index the turntable102until the non-working drive assembly402is at an unloading position along the track108(e.g., above the empty translating cartridge404). Once the non-working drive assembly402is positioned above the empty translating cartridge404, the translating cartridge404may be tilted in an upward and forward direction (e.g., direction D shown inFIG.5) to an upright functional orientation. Upon reaching the upright functional orientation, the translating cartridge404may be pushed forward toward the wheel106(e.g., direction E shown inFIG.5) until the translating cartridge404is close enough to be coupled to the drive assembly402. For example, the translating cartridge404may be pushed in the direction E until the adapter plate406and/or one or more upright bars412of the translating cartridge404can be mated to mounting points formed on an underside and/or side surfaces of the drive assembly402. In another example, the adapter plate406and/or the one or more upright bars412may be coupled to the underside and/or the side surfaces of the drive assembly402via a vice clamp or strapping mechanism. In an aspect, to help minimize a total duration for replacing the non-working drive assembly402and increase safety by minimizing an operator's/technician's exposure to hazards, any of the movements of the translating cartridge described herein may be automated. For example, after the control system indexes the turntable102until the non-working drive assembly402is at the unloading position along the track108, the control system may further deploy the translating cartridge404to automatically position itself under the non-working drive assembly402and/or tilt itself to the upright functional orientation to be able to couple with the non-working drive assembly402. When the translating cartridge404is coupled to the drive assembly402, the drive assembly402may then be decoupled from the bogie assembly104(e.g., by loosening fasteners binding the drive assembly402to the bogie assembly104). Alternatively, when the translating cartridge404is coupled to the drive assembly402, the bogie assembly104including the drive assembly402coupled to the translating cartridge404may be decoupled from the turntable102. In an aspect, the translating cartridge404coupled to the drive assembly402is configured to lessen or remove compressive force between the drive assembly402and the bogie assembly104(and/or between the bogie assembly104and the turntable102). Accordingly, when the compressive force is lessened or removed, the decoupling of the drive assembly402from the bogie assembly104(or the decoupling of the bogie assembly104from the turntable102) may be performed in an easier and safer manner. Thereafter, the translating cartridge404carrying the non-working drive assembly402(or the bogie assembly104including the non-working drive assembly402) may be tilted in a downward and away direction (e.g., direction C shown inFIG.5) to return to the serviceable orientation. 
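One possible form of the automatic indexing mentioned above, in which the control system uses encoders and absolute positioning to bring the non-working drive assembly402to the unloading position, is sketched below; the tolerance, the sign convention and the function names are assumptions rather than details of the disclosed control system.

    def index_error_deg(target_deg, encoder_deg):
        """Signed shortest angular error between target and absolute encoder reading,
        wrapped into the range [-180, 180)."""
        return (target_deg - encoder_deg + 180.0) % 360.0 - 180.0

    def jog_command(target_deg, encoder_deg, tolerance_deg=0.05):
        """One step of an indexing loop: 'stop' when within tolerance, otherwise jog
        the turntable toward the target (positive error assumed counter-clockwise)."""
        error = index_error_deg(target_deg, encoder_deg)
        if abs(error) <= tolerance_deg:
            return "stop"
        return "jog_ccw" if error > 0 else "jog_cw"

    print(jog_command(target_deg=90.0, encoder_deg=350.0))   # jog_ccw (error of +100 deg)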
Once in the serviceable orientation, the translating cartridge404/non-working drive assembly402may be moved (e.g., in a direction610) along the radial side surface of the structural pier110away from the wheel106/bogie assembly104(or away from the turntable102). In an aspect, considering a weight of the drive assembly402carried by the translating cartridge404, the translating cartridge404may be mounted to the swap rail602via a counterbalance pivot702. The counterbalance pivot702may help tilt the translating cartridge404/drive assembly402in the upward and forward direction (direction D shown inFIG.5) and/or the downward and away direction (direction C shown inFIG.5) by offsetting the weight of the drive assembly402exerted in one direction or another. In an aspect, the counterbalance pivot702may employ a spring, gas cylinder, or other type of counterbalancing device. After the translating cartridge404/non-working drive assembly402is moved away from the wheel106/bogie assembly104(or away from the turntable102), the second translating cartridge604pre-loaded with the working drive assembly606(hot spare, replacement unit, or second drive assembly) and set in the serviceable position (or pre-loaded with a second bogie assembly including the working drive assembly606) may be moved (e.g., in a direction612) along the radial side surface of the structural pier110to rest underneath the bogie assembly104(or the turntable102). Once the second translating cartridge604is positioned underneath the bogie assembly104(or the turntable102), the second translating cartridge604carrying the working drive assembly606(or carrying the second bogie assembly including the working drive assembly606) may be tilted in an upward and forward direction (e.g., direction D shown inFIG.5) to an upright functional orientation. In an aspect, considering a weight of the working drive assembly606carried by the second translating cartridge604, the second translating cartridge604may be mounted to the swap rail602via a second counterbalance pivot (similar to the counterbalance pivot702). The second counterbalance pivot may help tilt the second translating cartridge604/working drive assembly606in the upward and forward direction (direction D shown inFIG.5) and/or the downward and away direction (direction C shown inFIG.5) by offsetting the weight of the working drive assembly606exerted in one direction or another. Upon reaching the upright functional orientation, the second translating cartridge604/working drive assembly606may be pushed forward toward the bogie assembly104(e.g., direction E shown inFIG.5) until the working drive assembly606is close enough to be coupled to the bogie assembly104. The working drive assembly606may then be coupled to the bogie assembly104(e.g., by tightening fasteners binding the working drive assembly606to the bogie assembly104). In an aspect, the second translating cartridge604coupled to the working drive assembly606is configured to increase compressive force between the working drive assembly606and the bogie assembly104when installing the working drive assembly606(or between the second bogie assembly including the working drive assembly606and the turntable102). Accordingly, when the compressive force is increased, the coupling of the working drive assembly606to the bogie assembly104(or the coupling of the second bogie assembly including the working drive assembly606to the turntable102) may be performed in an easier and safer manner. 
After coupling the working drive assembly606to the bogie assembly104(or coupling the second bogie assembly including the working drive assembly606to an underside of the turntable102), the second translating cartridge604may be decoupled from the working drive assembly606(or from the second bogie assembly including the working drive assembly606). For example, a second adapter plate and/or one or more second upright bars of the second translating cartridge may be decoupled from mounting points on an underside and/or side surfaces of the working drive assembly606. Thereafter, the second translating cartridge604may be pulled away in the direction B (shown inFIG.5) and caused to tilt in a downward and away direction (e.g., direction C shown inFIG.5) to return to the serviceable orientation. Once in the serviceable orientation, the empty second translating cartridge604may remain empty in anticipation of receiving a future disabled drive assembly or may be re-loaded with a working drive assembly. FIG.8is a flow chart illustrating an exemplary process800for replacing a drive assembly of a turntable load station according to an aspect of the present disclosure. In some examples, the process800may be carried out by a control system of a turntable load station or any suitable apparatus or means for carrying out the functions or algorithm described below. At block802, the control system identifies for replacement a first bogie assembly (e.g., bogie assembly104) detachably coupled to an underside of a movable turntable structure (e.g., turntable102). The first bogie assembly includes a first wheeled drive assembly configured to drive a wheel (e.g., wheel106) to roll on a track (e.g., track108) formed on a top surface of a structural pier (e.g., pier110). In an aspect, the first bogie assembly may be identified for replacement via manual or automated methods including methods based on inspecting a sensor. In an aspect, the wheel is coupled to an underside of the movable turntable structure via the first bogie assembly, wherein the movable turntable structure is caused to rotate about an axis (e.g., axis120) when the first wheeled drive assembly drives the wheel to roll on the track. At block804, the control system moves a first translating cartridge (e.g., translating cartridge404) along a radial side surface of the structural pier (e.g., by sliding along the swap rail602) while in a serviceable orientation (e.g., orientation at position508) to an unloading position underneath the first bogie assembly coupled to the movable turntable structure. At block806, the control system may optionally rotate the movable turntable structure about the axis to move the first bogie assembly to the unloading position above the first translating cartridge. At block808, the control system pivots the first translating cartridge from the serviceable orientation to an upright functional orientation (e.g., orientation at position502) and couples the first translating cartridge to the first wheeled drive assembly while in the upright functional orientation. Thereafter, the control system decouples the first wheeled drive assembly coupled to the first translating cartridge from the first bogie assembly or decouples the first bogie assembly comprising the first wheeled drive assembly coupled to the first translating cartridge from the movable turntable structure and pivots the first translating cartridge from the upright functional orientation to the serviceable orientation while coupled to the first wheeled drive assembly.
In an aspect, pivoting the first translating cartridge may include offsetting a weight of the first wheeled drive assembly coupled to the first translating cartridge (e.g., using a counterbalance pivot702) when the first translating cartridge pivots from the upright functional orientation to the serviceable orientation. The control system may further move the first translating cartridge along the radial side surface of the structural pier while in the serviceable orientation and coupled to the first wheeled drive assembly to a position away from the unloading position. At block810, the control system moves a second translating cartridge (e.g., second translating cartridge604) coupled to a second wheeled drive assembly (e.g., second drive assembly606) or a second bogie assembly comprising the second wheeled drive assembly along the radial side surface of the structural pier (e.g., by sliding along the swap rail602) while in the serviceable orientation to a loading position underneath the first bogie assembly decoupled from the first wheeled drive assembly or underneath a location of the movable turntable structure at which the first bogie assembly comprising the first wheeled drive assembly is decoupled from the movable turntable structure. At block812, the control system pivots the second translating cartridge from the serviceable orientation to the upright functional orientation while coupled to the second wheeled drive assembly or the second bogie assembly comprising the second wheeled drive assembly and couples the second wheeled drive assembly to the first bogie assembly or couples the second bogie assembly comprising the second wheeled drive assembly to the underside of the movable turntable structure. In an aspect, pivoting the second translating cartridge includes offsetting a weight of the second wheeled drive assembly coupled to the second translating cartridge (e.g., using a counterbalance pivot) when the second translating cartridge pivots from the serviceable orientation to the upright functional orientation. Thereafter, the control system decouples the second translating cartridge from the second wheeled drive assembly or the second bogie assembly comprising the second wheeled drive assembly and pivots the second translating cartridge from the upright functional orientation to the serviceable orientation while decoupled from the second wheeled drive assembly coupled to the first bogie assembly or while decoupled from the second bogie assembly comprising the second wheeled drive assembly coupled to the underside of the movable turntable structure. The control system may further move the second translating cartridge along the radial side surface of the structural pier while in the serviceable orientation and decoupled from the second drive assembly to a position away from the loading position. Within the present disclosure, the word “exemplary” is used to mean “serving as an example, instance, or illustration.” Any implementation or aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects of the disclosure. Likewise, the term “aspects” does not require that all aspects of the disclosure include the discussed feature, advantage or mode of operation. The term “coupled” is used herein to refer to the direct or indirect coupling between two objects. 
For example, if object A physically touches object B, and object B touches object C, then objects A and C may still be considered coupled to one another—even if they do not directly physically touch each other. For instance, a first object may be coupled to a second object even though the first object is never directly physically in contact with the second object. One or more of the components, steps, features and/or functions illustrated inFIGS.1-8may be rearranged and/or combined into a single component, step, feature or function or embodied in several components, steps, or functions. Additional elements, components, steps, and/or functions may also be added without departing from novel features disclosed herein. The apparatus, devices, and/or components illustrated inFIGS.1-7may be configured to perform one or more of the methods, features, or steps described herein. The novel algorithms described herein may also be efficiently implemented in software and/or embedded in hardware. It is to be understood that the specific order or hierarchy of steps in the methods disclosed is an illustration of exemplary processes. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the methods may be rearranged. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented unless specifically recited therein. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. A phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover: a; b; c; a and b; a and c; b and c; and a, b and c. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.”
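Referring back to the process800ofFIG.8, the ordering of blocks802to812can be summarized, purely as an illustrative sketch, as the following sequence of steps; the wording of each step is a paraphrase and the control system is not limited to (or described by) this structure.

    # Illustrative paraphrase of the ordering of blocks 802-812 of FIG. 8.

    PROCESS_800 = [
        ("802", "identify the first bogie assembly / first wheeled drive assembly for replacement"),
        ("804", "move the first translating cartridge along the pier, in the serviceable orientation, to the unloading position"),
        ("806", "optionally rotate the turntable to bring the first bogie assembly over the cartridge"),
        ("808", "pivot the first cartridge upright, couple it to the first drive assembly, decouple the drive assembly (or bogie), pivot back to serviceable"),
        ("810", "move the second cartridge carrying the second (working) drive assembly to the loading position"),
        ("812", "pivot the second cartridge upright, couple the second drive assembly (or second bogie), decouple the cartridge, pivot back to serviceable"),
    ]

    for block, step in PROCESS_800:
        print(f"block {block}: {step}")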
34,230
11857887
Unless otherwise specifically noted, articles depicted in the drawings are not necessarily drawn to scale. DETAILED DESCRIPTION For simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the Figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiment or embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the embodiments described herein. It should be understood at the outset that, although exemplary embodiments are illustrated in the figures and described below, the principles of the present disclosure may be implemented using any number of techniques, whether currently known or not. The present disclosure should in no way be limited to the exemplary implementations and techniques illustrated in the drawings and described below. Various terms used throughout the present description may be read and understood as follows, unless the context indicates otherwise: “or” as used throughout is inclusive, as though written “and/or”; singular articles and pronouns as used throughout include their plural forms, and vice versa; similarly, gendered pronouns include their counterpart pronouns so that pronouns should not be understood as limiting anything described herein to use, implementation, performance, etc. by a single gender; “exemplary” should be understood as “illustrative” or “exemplifying” and not necessarily as “preferred” over other embodiments. Further definitions for terms may be set out herein; these may apply to prior and subsequent instances of those terms, as will be understood from a reading of the present description. Reference is made toFIGS.1A and1B, which show a toy vehicle arrangement10in accordance with an embodiment of the present disclosure. The toy vehicle arrangement10includes a toy vehicle12and a remote control unit14. In some embodiments, the remote control14may be omitted. The toy vehicle12includes a vehicle body16(FIG.1A), at least one motor18(FIG.1B), and a plurality of wheels20. In the example shown inFIG.1A, the vehicle body16includes a lower body portion16a, an upper body portion16b, and a plurality of struts16c,16d,16eand16f(shown inFIG.2) that support the upper body portion16babove the lower body portion16a. The at least one motor18in the present example includes a first motor18aand a second motor18b. The first and second motors18aand18beach have a motor housing21that is mounted to the vehicle body16and a motor output shaft23and are sized to have a selected amount of torque. The plurality of wheels20are rotatably mounted to the vehicle body16. The plurality of wheels includes at least one driven wheel22that is drivable by the at least one motor18. In the present example, all of the wheels20are driven wheels22. The at least one driven wheel22includes at least one flip-over wheel24. In the example shown, there are first and second flip-over wheels24, shown individually at24aand24b, respectively. In the present example, the at least one driven wheel22further includes at least one non-flip-over wheel25, which, in the present example, includes first and second non-flip-over wheels25aand25b, respectively.
The at least one flip-over wheel 24 is used to flip the toy vehicle 12 over from an inverted orientation to an upright orientation, as is described further below. The at least one non-flip-over wheel 25, in embodiments in which they are present, is not involved in flipping the toy vehicle 12 over from the inverted orientation to the upright orientation.

The toy vehicle 12 has a first end 26 and a second end 28, and has a length L between the first and second ends 26 and 28. In the present example, the first end 26 is the front end and the second end 28 is the rear end; however, it will be understood that the first end 26 could alternatively be the rear end and the second end 28 could be the front end. The at least one flip-over wheel 24 has an axis of rotation A that is closer to the first end 26 than to the second end 28.

As shown in FIG. 1B, the first motor 18a is operatively connected to two of the driven wheels 22, namely the first flip-over wheel 24a and the first non-flip-over wheel 25a, via a first torque transfer structure 30a, which is a gear train in the embodiment shown. Similarly, the second motor 18b is operatively connected to two of the driven wheels 22, namely the second flip-over wheel 24b and the second non-flip-over wheel 25b, via a second torque transfer structure 30b, which is also a gear train in the embodiment shown. Alternatively, any other suitable torque transfer structure may be provided.

A control system is shown at 32 in FIG. 1B. The control system 32 controls the operation of the at least one motor 18. The control system 32 in the present example includes a printed circuit board 34 which has a processor 36, a memory 38, an RF communications chip 39, an on-off switch 40, a battery 42, and a charging port 44 connected thereto. The processor 36 carries out instructions which are stored in the memory 38. Some of the instructions may be based on signals that are received from the remote control 14 via the RF communications chip 39. Put another way, the remote control 14 is operable remotely from the toy vehicle 12 to transmit signals to the toy vehicle 12 for use by the control system 32 to control operation of the at least one motor 18, which relate to the aforementioned instructions. The instructions may include, for example:

an instruction to rotate the motors 18a and 18b in a forward direction with an amount of torque that varies based on how far the user moves a drive lever 46 forward on the remote control 14;

an instruction to rotate the motors 18a and 18b in a backward direction with an amount of torque that varies based on how far the user moves a drive lever 46 backward on the remote control 14;

an instruction to rotate the first motor 18a in a forward direction and the second motor 18b in a backward direction, each with an amount of torque that varies based on how far the user moves a turn lever 46 to the left on the remote control 14; and

an instruction to rotate the first motor 18a in a backward direction and the second motor 18b in a forward direction, each with an amount of torque that varies based on how far the user moves a turn lever 46 to the right on the remote control 14.

Other instructions may additionally or alternatively be stored in the memory 38 and may be executed by the processor 36.

Referring to FIG. 1A, the remote control 14 may be equipped with the following controls to enable the user to send the above-noted signals to the toy vehicle: a forward/reverse lever 14a, a left/right steering lever 14b, and an on/off switch 14c.
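The four instructions listed above amount to a simple differential-drive mixing rule. The following Python sketch is illustrative only and is not part of the patented embodiment; the function name mix_levers, the normalized signal ranges, and the additive mixing rule are assumptions rather than anything disclosed above. It shows one way a control system such as the control system 32 might translate a drive command and a turn command into torque commands for the first and second motors 18a and 18b, consistent with those instructions.

    def mix_levers(drive, turn, max_torque=1.0):
        """Map normalized lever positions to motor torque commands.

        drive: -1.0 (full backward) to +1.0 (full forward)
        turn:  -1.0 (full left) to +1.0 (full right)
        Returns (torque_18a, torque_18b); positive values mean forward rotation.
        """
        # Left turn (turn < 0): first motor 18a forward, second motor 18b backward,
        # mirroring the instruction list above; a right turn is the mirror image.
        torque_18a = drive - turn
        torque_18b = drive + turn

        def clamp(t):
            # Limit commands to the torque the motors are sized to deliver.
            return max(-max_torque, min(max_torque, t))

        return clamp(torque_18a), clamp(torque_18b)

    # Example: drive lever centred, turn lever fully to the left -> (1.0, -1.0)
    print(mix_levers(0.0, -1.0))

In this sketch, the magnitude of each command scales with how far the corresponding lever is moved, matching the torque-varies-with-lever-position behaviour described above.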
A suitable control system, powered by a suitable power source, may be provided in the remote control 14, as will be understood by one skilled in the art.

The battery 42 is used to provide power to the motors 18. The power transmitted to the motors 18 may be based on the instructions being carried out by the processor 36. The battery 42 may be a rechargeable battery, which is charged using the charging port 44. Alternatively, if the battery 42 is a non-rechargeable battery, the charging port 44 may be omitted. The on-off switch 40, in the present example, physically controls an electrical connection between the battery 42 and the other components of the control system 32 apart from the charging port 44.

The toy vehicle 12 has an upright orientation (FIG. 2) in which the plurality of wheels 20 support the vehicle body 16 above a support surface shown at S, which may be a tabletop or any other suitable support surface. As can be seen clearly in FIG. 2, the vehicle body 16 extends above the plurality of wheels 20 when in the upright orientation. This lends some measure of realism to the toy vehicle 12, in the sense that typical vehicles, even monster trucks which have large wheels relative to the size of the vehicle body, have a vehicle body that extends above the wheels.

During use, it is possible that the toy vehicle 12 may flip over to an inverted orientation, shown in FIG. 3A. In the inverted orientation, the vehicle body 16 at least in part supports the toy vehicle 12 on the support surface S. Put another way, the vehicle body 16 has a balance surface arrangement 29 that at least partially supports the toy vehicle 12 on the support surface S when the toy vehicle 12 is in the inverted orientation. The balance surface arrangement 29 may include a plurality of surface portions, such as are shown at 29a and 29b in FIG. 3A. The balance surface arrangement 29 in FIG. 3A only in part supports the toy vehicle 12 on the support surface S when the toy vehicle 12 is in the inverted orientation, while the at least one flip-over wheel 24 also in part supports the toy vehicle 12 on the support surface S when the toy vehicle 12 is in the inverted orientation.

In order to permit the user to flip the toy vehicle 12 back over to the upright orientation from the inverted orientation, the toy vehicle 12 has a centre of gravity CG that is positioned at a selected position. More specifically, the toy vehicle 12 has the centre of gravity CG positioned such that application of the selected amount of torque (shown at TS in FIG. 3A) from the at least one motor 18 to the at least one driven wheel 22 causes a reaction torque (shown at TR in FIG. 3A) in the motor housing 21, and therefore in the vehicle body 16, to drive rotation of the vehicle body 16 about the axis of rotation A from the inverted orientation (FIG. 3A) over to the upright orientation (FIG. 2) on the support surface S.

The selected torque that the at least one motor 18 is driven with is dependent on many factors, including the losses that occur between the at least one motor 18 and the at least one flip-over wheel 24, the position of the centre of gravity CG of the toy vehicle 12, the weight of the toy vehicle 12, and the radius of the at least one flip-over wheel 24. One skilled in the art will be able to determine a suitable selected torque for the at least one motor based on the specifics of a given application.
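As a rough illustration of how such a selected torque might be estimated, the following Python sketch applies a simplified static model: the reaction torque TR about the axis of rotation A must at least match the moment of the vehicle's weight about that axis, and gear-train losses reduce what the motor delivers to the flip-over wheel. This is not taken from the patent; all numerical values, the assumed gear ratio, and the neglect of dynamic effects and of the changing CG geometry during the flip are assumptions made only for illustration.

    # Assumed, illustrative values -- not taken from the patent.
    mass_kg = 0.35               # vehicle mass (assumed)
    g = 9.81                     # gravitational acceleration, m/s^2
    cg_offset_m = 0.02           # horizontal distance from CG to axis of rotation A (assumed)
    gear_ratio = 10.0            # assumed reduction between motor 18 and flip-over wheel 24
    drivetrain_efficiency = 0.7  # assumed allowance for gear-train losses
    safety_factor = 1.5

    # Torque about the axis of rotation A needed to start lifting the vehicle body:
    # the reaction torque TR must at least match the moment of the weight about A.
    required_wheel_torque = safety_factor * mass_kg * g * cg_offset_m   # N*m at the wheel axis

    # Corresponding torque at the motor output shaft 23.
    required_motor_torque = required_wheel_torque / (gear_ratio * drivetrain_efficiency)

    print(f"torque needed at the flip-over wheel: {required_wheel_torque:.3f} N*m")
    print(f"torque needed at the motor:           {required_motor_torque:.4f} N*m")

A practical design would, as the passage above notes, also account for the specific losses, weight distribution, and wheel radius of the actual vehicle.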
FIGS. 3A-3D illustrate stages in the flipping over of the toy vehicle 12 from the inverted orientation to the upright orientation shown in FIG. 2 when the selected amount of torque is applied by the at least one motor 18 to the at least one driven wheel 22. In the embodiment shown in FIG. 3A, the selected amount of torque drives the at least one flip-over wheel in the forward direction. In FIG. 3B, the reaction torque TR that is exerted on the vehicle body 16, resulting from the selected torque applied by the at least one motor 18, causes the vehicle body 16 to rotate about the axis of rotation A, lifting the vehicle body 16 off of the support surface S. In FIG. 3C, the vehicle body 16 has pivoted to the orientation in which the centre of gravity CG has been elevated to its maximum height. In FIG. 3D, the vehicle body 16 has pivoted past the orientation in FIG. 3C, and would therefore fall to its upright orientation (FIG. 2) even if the at least one motor 18 were powered off.

By contrast, it is possible to have an embodiment in which the toy vehicle 12 sits with its rear wheels touching the support surface S and with its centre of gravity rearwardly positioned such that driving the at least one motor 18 in a backward direction would flip the toy vehicle 12 from the inverted orientation to the upright orientation.

In the embodiment shown in FIG. 2, the position of the centre of gravity CG is selected to provide certain features to the toy vehicle 12. As can be seen in FIGS. 2 and 3A-3D, the at least one flip-over wheel 24 has a radius R, and the centre of gravity CG is spaced from the axis of rotation A by less than the radius R. As a result, it is hypothesized that there is some mechanical advantage provided between the torque applied by the support surface S on the at least one flip-over wheel 24 (so as to resist spinning of the at least one flip-over wheel 24 on the support surface S during application of torque thereto by the at least one motor 18) and the reaction torque that drives the vehicle body 16 to rotate about the axis of rotation A.

In order to position the centre of gravity CG in the selected position, the battery 42 and the at least one motor 18 are positioned closer to the first end 26 than the axis of rotation A is to the first end 26. In the embodiment shown in FIG. 2, this means that the at least one motor 18 and the battery 42 are positioned forward of the axis of rotation A. The battery 42 and the at least one motor 18 are shown schematically in dashed lines in FIG. 2, as they are hidden in this view by other elements of the toy vehicle 12.

The at least one motor 18 and the battery 42 constitute relatively dense elements of the toy vehicle 12. By contrast, other elements of the toy vehicle 12, including the entirety of the vehicle body 16, the gear train, and the hubs of the wheels 20, may be made from a lightweight polymeric material (apart from a sparing use of small screws used to assemble elements together where the use of polymeric latch members or other connecting means is not convenient). Furthermore, the wheels themselves may be made from a foamed polymer, so as to maintain low weight, and may be fixedly mounted to the hubs of the wheels 20 by any suitable means, such as by the use of ribs on the hubs of the wheels 20 that engage slots (not shown) that are provided in the wheels 20, thereby eliminating the need for a strong adhesive to hold the wheels 20 rotationally on the hubs. The hubs of the wheels 20 are shown at 48 in FIG. 1A, while the ribs are shown at 50 and the grooves are shown at 52.
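One way to read the hypothesized mechanical advantage, offered here only as an illustrative interpretation and not as part of the patent disclosure, is in terms of the tractive (friction) force the support surface S must supply at the flip-over wheel 24. If the torque needed to lift the body is roughly the vehicle's weight times the CG offset d from the axis of rotation A, and the friction force acts at the wheel radius R, then the required friction force scales with d/R and is smaller than the vehicle's weight whenever d is less than R. The Python sketch below checks this no-slip condition using the same assumed values as above; every number, including the friction coefficient and the share of the weight carried by the flip-over wheels, is hypothetical.

    # Assumed, illustrative values -- not taken from the patent.
    mass_kg = 0.35
    g = 9.81
    cg_offset_m = 0.02       # d: CG distance from axis of rotation A
    wheel_radius_m = 0.05    # R: radius of the flip-over wheel 24 (d < R, as described above)
    mu = 0.9                 # assumed friction coefficient for a foamed-polymer wheel on a tabletop
    normal_force_n = 0.5 * mass_kg * g   # assumed share of the weight carried by the flip-over wheels

    # Tractive force the surface must supply so the wheel rolls the body over
    # instead of spinning in place, versus the friction actually available.
    required_friction_n = mass_kg * g * cg_offset_m / wheel_radius_m
    available_friction_n = mu * normal_force_n

    print(f"required tractive force: {required_friction_n:.2f} N")
    print(f"available friction:      {available_friction_n:.2f} N")
    print("flip without wheel spin expected:", required_friction_n <= available_friction_n)

Under these assumed numbers the condition is met, which is consistent with the intuition that keeping the CG within the wheel radius eases the demand on wheel traction.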
A feature of the toy vehicle 12 is that the balance surface arrangement 29 and the centre of gravity CG may be positioned such that the centre of gravity CG rises by a distance that is less than 25% of the length L of the toy vehicle 12 during application of the selected amount of torque TS by the at least one motor 18 to cause the reaction torque TR in the toy vehicle 12 to drive rotation of the vehicle body 16 over to the upright orientation. In an example, the toy vehicle 12 has a length of approximately 9.5 inches and the centre of gravity rises by about 1.5 inches between the inverted orientation shown in FIG. 3A and the orientation of maximum height of the centre of gravity CG shown in FIG. 3C during flipping over of the toy vehicle 12 to the upright orientation. In FIG. 3C, the height of the centre of gravity (identified as CG1 in FIG. 3C) when the toy vehicle 12 was in the inverted orientation is shown at H1, and the height of the centre of gravity CG when the toy vehicle 12 was in the orientation of maximum height of the centre of gravity CG (i.e. in the position shown in FIG. 3C) is shown at H2. The rise is shown at H. Given the rise H shown in FIG. 3C, it can be seen that in some embodiments, the rise may be less than about 1.5/9.5, or about 16%, of the length of the toy vehicle 12.

Providing a rise H in the centre of gravity CG that is less than 25% of the length of the toy vehicle 12, and more preferably a rise H that is less than 16% of the length of the toy vehicle 12, permits the toy vehicle 12 to flip over with a relatively low amount of torque, which in turn permits the at least one motor 18 to be relatively light, thereby reducing the weight of the toy vehicle 12. This, in turn, permits a reduction in the size and weight of the battery 42, which further reduces the weight of the toy vehicle 12 and further improves its performance.

Reference is made to FIG. 4, which shows an alternative embodiment of the toy vehicle 12, in which the balance surface arrangement 29 on the vehicle body 16 fully supports the toy vehicle 12 on the support surface S when the toy vehicle 12 is in the inverted orientation as shown in FIG. 4, holding the at least one flip-over wheel 24 spaced from the support surface S. As shown in the example in FIG. 4, the balance surface arrangement includes a first surface portion 29a, a second surface portion 29b and a third surface portion 29c, but may alternatively include more or fewer surface portions. In such an embodiment, the application of the selected torque TS by the at least one motor 18, which results in the reaction torque TR in the vehicle body 16, drives the at least one flip-over wheel 24 into engagement with the support surface S.

In addition to the above, it will be noted that, by positioning the centre of gravity CG towards the front end 26 of the toy vehicle 12, the vehicle 12 can accelerate forwards with less risk of its front wheels lifting off the support surface S, and less risk of the vehicle 12 flipping over backwards to the inverted orientation.

Although specific advantages have been enumerated above, various embodiments may include some, none, or all of the enumerated advantages. Persons skilled in the art will appreciate that there are yet more alternative implementations and modifications possible, and that the above examples are only illustrations of one or more implementations. The scope, therefore, is only to be limited by the claims appended hereto and any amendments made thereto.
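Returning to the centre-of-gravity rise discussed above, the link between a small rise H and a low flip-over torque can be illustrated with a simple energy estimate: the motor must supply at least the work needed to lift the CG by H over the apex shown in FIG. 3C, so a smaller rise means less work and permits a lighter motor. The sketch below is offered for illustration only and is not part of the patent; the mass value is assumed, while the 9.5 inch length and 1.5 inch rise are the approximate figures given in the example above, and friction and drivetrain losses are ignored.

    # Approximate figures from the example above, plus an assumed mass.
    length_in = 9.5
    rise_in = 1.5
    mass_kg = 0.35                      # assumed vehicle mass
    g = 9.81

    rise_fraction = rise_in / length_in
    rise_m = rise_in * 0.0254           # convert inches to metres

    # Minimum work needed to lift the centre of gravity over the apex (FIG. 3C).
    min_work_j = mass_kg * g * rise_m

    print(f"rise as a fraction of length: {rise_fraction:.1%}")   # about 15.8%
    print(f"minimum lifting work:         {min_work_j:.3f} J")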
17,199